Hi!

On Thu, Feb 2, 2012 at 18:19, pablo pazos <pazospablo at hotmail.com> wrote:
> I don't know if this is crazy talk or if it seems reasonable to you.
> Please let me know :D
Not crazy, but maybe overly complicated. Perhaps it would be a good idea to
use a layered approach?

1. An existing distributed version control system (DVCS) like Git (and an
agreed directory structure and naming conventions within it) for storing,
versioning and distributing archetype source files etc.
2. A set of tools (that can also be run in batch mode or by "commit hooks"
in the DVCS) that can validate and convert sources to alternative formats
and create HTML pages (and other formats) for listing and browsing the
resulting assets.
3. A search function (maybe also using existing online search services)
that can be used by both humans and machines (via an API).
4. Clinician-friendly GUIs with CKM-like functions that hide/incorporate
layers 1, 2 and 3 for end users who want CKM.

#1 is available already, including free hosting possibilities (but without
provider lock-in, since the whole version history is replicated).

#2 I think e.g. Seref, Tom and others have come very far with already. The
output from #1 & #2 could be served as static files on any web server and
thus make it easy for any organization to set up. No API beyond normal HTTP
will be needed for read operations; for write operations the API of the
DVCS will likely be enough.

#3 should be considered carefully before putting too many resources into
new development. If the processes in step #2 can create good enough
labeling/tagging/ontology-linking for resources or meta-resources (like
auto-generated descriptive web pages), then both existing online search
engines and locally run ones could just pick that up using standard
mechanisms.

#4 will need more work; I don't know if any parts of the CKM could be
useful and open sourced to help such an effort. It would be nice if the
discussions relating to an artifact (e.g. an archetype) could be stored and
versioned in the same backend system as the artifact itself; there are
Git/DVCS-based wikis that might do part of the job.
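To make layer 2 concrete, here is a minimal sketch of a validation helper
that a Git commit hook could call on archetype sources before they enter
the shared repository. This is illustrative only: the header check is a
stand-in for a real ADL parser, and the function names are assumptions, not
an existing tool.

```python
# Hypothetical commit-hook helper (layer 2): reject malformed archetype
# source files before they are accepted into the shared repository.
# The header check below is a crude stand-in for a real ADL validator.
import sys


def validate_adl_text(text):
    """Very rough sanity check: an ADL source should open with an
    'archetype' header followed by an archetype id line."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("archetype"):
        return False
    # The id conventionally looks like openEHR-EHR-OBSERVATION.name.v1
    return len(lines) > 1 and lines[1].count(".") >= 2


def main(paths):
    # A pre-commit hook would pass the staged .adl files here.
    bad = []
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            if not validate_adl_text(fh.read()):
                bad.append(path)
    if bad:
        print("rejected, invalid archetype sources:", ", ".join(bad))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

A non-zero exit status from a hook like this aborts the commit, so invalid
sources never reach the replicated history.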
The key benefit of a DVCS-based approach is the "distributed" nature that
allows creative initiatives without asking for centralized permission. It
allows easy (auditable) cross-pollination of ideas and code/archetypes
between developers or regional developer organisations in a way that is a
lot harder with centralized approaches like Subversion, the current CKM
etc. It's hard to describe, but techies can have a look at some active
projects at GitHub to get a feel for it.

Best regards,
Erik Sundvall
erik.sundvall at liu.se  http://www.imt.liu.se/~erisu/
Tel: +46-13-286733

On Thu, Feb 2, 2012 at 18:19, pablo pazos <pazospablo at hotmail.com> wrote:
> Hi all,
>
> What I've been thinking is to share the same interface/protocol to do
> simple tasks on distributed CKMs like:
>
> Adding an artifact (archetype, template, termset, terminology service,
> process definition, GUI template, etc.); let's think only about
> archetypes for now.
> Updating an archetype (with version management)
> Listing all the archetypes
> Listing all the archetypes for one RM type (i.e. composition, entry,
> action, etc.)
> Listing all the archetypes whose archetype_id matches a regexp
> Listing all the archetypes that match some text (free-text search)
> Retrieving an archetype in ADL/XML/JSON/YAML by archetype_id
>
> Ian, about the requirements you mention:
>
>> Basic requirements
>>
>> 1. Query across multiple repositories using multiple criteria, based
>> on something similar to the CKM ontology.
>
> For this I think something like the DNS protocol will do the job
> (http://en.wikipedia.org/wiki/Domain_Name_System). DNS could do a
> distributed search by redirecting the query if some server doesn't have
> the answer. So, if we have a service to register artifacts on CKM
> servers (service 1. in the list above), with an "artifact registered
> notification protocol", and another protocol for "CKM server discovery",
> we could implement the distributed search.
>
>> 2.
>> Establish references to remote assets, almost certainly caching the
>> referenced asset locally
>
> This would be a mix of "adding an artifact" and the "artifact registered
> notification protocol".
>
>> 3. Establish subscriptions to remote assets, to enable change
>> notification etc
>
> And this would be included in the "CKM server discovery protocol", which
> could be defined like some of the services provided by the NDP protocol,
> using messages like RA, NS, NA, ... to discover CKM servers and create a
> CKM network over the Internet:
> http://fengnet.com/book/CCIE%20Professional%20Development%20Routing%20TCPIP%20Volume%20I/ch02lev1sec5.html
> I think some of these services could also be found in the ICMP protocol:
> http://www.networksorcery.com/enp/protocol/icmp.htm
>
> Just to clarify my thoughts, I don't think we need a new network
> protocol!!! I think we could create our own protocols to handle
> artifacts in a distributed way, reusing some ideas from those proven
> protocols that our machines run every day to connect to the Internet and
> access distributed resources.
>
> How could this stuff work in reality?
>
> We need a set of "root CKM servers", always online, that could answer
> our queries or redirect to some other server that could answer (like
> DNS).
> On those servers (there could be only one, like the openEHR.org CKM),
> other servers could advertise themselves to form part of the CKM
> network; this could be done like an ICMP or NDP router advertisement.
> Those servers could also download a list of servers currently connected
> to the network, and update the list anytime.
> The child servers might not always be online, so each entry in the root
> server registry should have a lifetime; when it is reached, the entry is
> eliminated from the list (like in ICMP:
> http://www.networksorcery.com/enp/protocol/icmp/msg9.htm). This could
> trigger a notification to the other members of the network, to update
> their server lists.
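The "CKM server discovery" and DNS-style redirect ideas quoted above can
be sketched in a few lines. All class and method names here are
illustrative assumptions, not a real API: a root registry holds server
advertisements with a lifetime (in the spirit of NDP router
advertisements), and a server that cannot answer a query redirects it to a
peer that can.

```python
# Hypothetical sketch of a root registry with advertisement lifetimes
# plus DNS-style query redirection between CKM servers. Names are
# illustrative only.
class CkmServer:
    def __init__(self, name, archetypes=()):
        self.name = name
        self._archetypes = set(archetypes)

    def query(self, archetype_id, peers):
        """Answer locally, or redirect to a peer that has the artifact."""
        if archetype_id in self._archetypes:
            return self.name
        for peer in peers:
            if archetype_id in peer._archetypes:
                return peer.query(archetype_id, [])  # redirect, DNS-style
        return None


class RootRegistry:
    """Root CKM server list: each advertisement carries a lifetime, and
    expired entries are dropped, like NDP/ICMP router advertisements."""

    def __init__(self):
        self._entries = {}  # server -> expiry timestamp (seconds)

    def advertise(self, server, lifetime_s, now):
        self._entries[server] = now + lifetime_s

    def active(self, now):
        # Purge entries whose lifetime has been reached.
        self._entries = {s: t for s, t in self._entries.items() if t > now}
        return list(self._entries)


root = RootRegistry()
a = CkmServer("ckm-a", {"openEHR-EHR-OBSERVATION.blood_pressure.v1"})
b = CkmServer("ckm-b")
root.advertise(a, lifetime_s=60, now=0.0)
root.advertise(b, lifetime_s=60, now=0.0)
# b cannot answer, so the query is redirected to a:
print(b.query("openEHR-EHR-OBSERVATION.blood_pressure.v1",
              root.active(now=10.0)))  # -> ckm-a
```

A real deployment would of course exchange these messages over the
network, but the expiry-and-redirect logic is the same.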
> When an artifact is added to a server, it should notify the other
> servers in the network, so they know which server has the original copy
> of the artifact, and maybe they can make a copy of the artifact that
> should be read-only on those servers that cache a copy. The cached
> archetype could have a lifetime; when it is reached, a new copy of that
> archetype should be downloaded from the original server if the server is
> still online, or the lifetime renewed if the original server is offline.
> Then a query received by any CKM could be answered or redirected to
> another server, and all servers in the network could keep up with all
> the archetypes created worldwide.
>
> I don't know if this is crazy talk or if it seems reasonable to you.
> Please let me know :D
>
> Kind regards,
> Pablo.
>
> _______________________________________________
> openEHR-technical mailing list
> openEHR-technical at openehr.org
> http://lists.chime.ucl.ac.uk/mailman/listinfo/openehr-technical
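The caching rule Pablo describes (re-fetch an expired copy from the origin
server if it is online, otherwise just renew the lifetime of the stale
copy) can be sketched as follows. This is an assumed, in-memory model, not
a proposed implementation; `ArtifactCache` and its parameters are
hypothetical names.

```python
# Hypothetical artifact cache for a CKM peer: each cached archetype is a
# read-only copy with a lifetime. On expiry, re-fetch from the origin
# server if online; otherwise renew the lifetime of the stale copy.
import time


class ArtifactCache:
    def __init__(self, fetch_from_origin, lifetime_s):
        self._fetch = fetch_from_origin  # callable: id -> text, or None
        self._lifetime = lifetime_s
        self._cache = {}  # archetype_id -> (text, expiry timestamp)

    def get(self, archetype_id, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(archetype_id)
        if entry and entry[1] > now:
            return entry[0]  # fresh cached copy
        fresh = self._fetch(archetype_id)  # None means origin is offline
        if fresh is None and entry is not None:
            fresh = entry[0]  # origin offline: renew the stale copy
        if fresh is None:
            raise KeyError(archetype_id)
        self._cache[archetype_id] = (fresh, now + self._lifetime)
        return fresh


origin = {"openEHR-EHR-OBSERVATION.blood_pressure.v1": "archetype ..."}
cache = ArtifactCache(origin.get, lifetime_s=3600)
print(cache.get("openEHR-EHR-OBSERVATION.blood_pressure.v1", now=0.0))
```

In a real network the fetch would be an HTTP request to the server that
advertised the original copy, and the "origin offline" branch is what lets
the rest of the network keep serving reads during outages.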

