The NiFi HL7 processor is built using the HAPI API, which supports Z-segments:
http://hl7api.sourceforge.net/xref/ca/uhn/hl7v2/examples/CustomModelClasses.html
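
For the Z-segment question below: with HAPI you point the parser at custom
model classes. A minimal sketch, assuming a hypothetical
com.example.hl7.model package holding the Z-segment classes (the package
name, the ZPI segment, and the message content are made up for
illustration):

import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.parser.CustomModelClassFactory;
import ca.uhn.hl7v2.parser.PipeParser;

public class ZSegmentSketch {
    public static void main(String[] args) throws Exception {
        // HAPI resolves custom classes under <package>.<version>, e.g. the
        // hypothetical com.example.hl7.model.v25.segment.ZPI
        CustomModelClassFactory factory =
                new CustomModelClassFactory("com.example.hl7.model");
        PipeParser parser = new PipeParser(factory);

        String adt = "MSH|^~\\&|SEND|FAC|RECV|FAC|20161012||ADT^A01|1234|P|2.5\r"
                   + "ZPI|1|some custom data\r"; // hypothetical Z-segment

        Message parsed = parser.parse(adt);
        System.out.println(parser.encode(parsed)); // round-trips the Z-segment
    }
}

Even without the custom classes on the classpath, HAPI generally still
round-trips an unrecognized Z-segment as a nonstandard segment, so nothing
is lost in transit.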


On Wed, Oct 12, 2016 at 10:10 PM, Martin Gainty <mgai...@hotmail.com> wrote:

>
>
>
> > From: dbis...@gmail.com
> > Date: Wed, 12 Oct 2016 20:42:04 -0400
> > Subject: RE: HL7 messages to Kafka consumer
> > To: users@kafka.apache.org
> >
> > I did it with the HAPI API and a Kafka producer way back when, and it
> > worked well. Times have changed; if you consider using Apache NiFi,
> > besides the native HL7 processor,
> MG>Since this is where I get 99% of the applications I work on, I have to
> MG>ask: will NiFi process Z-segments?
> MG>If NiFi does not process Z-segments, you might want to delay being a
> MG>NiFi evangelist and go with the aforementioned solution.
> > you can push to Kafka by dragging a processor onto the canvas. The HL7
> > processor is also built on the HAPI API. Here's an example, but instead
> > of Kafka it's pushing to Solr; replacing the Solr processor with a Kafka
> > one will do the trick.
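> > Outside NiFi, the same pipeline in plain code is just a HAPI parse plus
> > a producer send. A minimal sketch, assuming a local broker and a
> > hypothetical topic name:
> >
> > import java.util.Properties;
> >
> > import org.apache.kafka.clients.producer.KafkaProducer;
> > import org.apache.kafka.clients.producer.ProducerRecord;
> >
> > import ca.uhn.hl7v2.parser.PipeParser;
> >
> > public class Hl7ProducerSketch {
> >     public static void main(String[] args) throws Exception {
> >         Properties props = new Properties();
> >         props.put("bootstrap.servers", "localhost:9092"); // assumed broker
> >         props.put("key.serializer",
> >             "org.apache.kafka.common.serialization.StringSerializer");
> >         props.put("value.serializer",
> >             "org.apache.kafka.common.serialization.StringSerializer");
> >
> >         String hl7 = "MSH|^~\\&|SEND|FAC|RECV|FAC|20161012||ADT^A01|1234|P|2.5\r";
> >         PipeParser parser = new PipeParser();
> >         String validated = parser.encode(parser.parse(hl7)); // fail fast on bad HL7
> >
> >         try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
> >             producer.send(new ProducerRecord<>("hl7.adt", validated)); // hypothetical topic
> >         }
> >     }
> > }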
> MG>Kafka's server.properties does support a ZK provider, so the Kafka
> MG>server can ingest resultset(s) from ZK:
> ############################# Zookeeper #############################
> # Zookeeper connection string (see zookeeper docs for details).
> # This is a comma separated host:port pairs, each corresponding to a zk
> # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> # You can also append an optional chroot string to the urls to specify the
> # root directory for all kafka znodes.
> zookeeper.connect=localhost:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> MG>Kafka's clear advantage over ZK is flow control: you can pause or
> MG>resume partitions on your Kafka consumer (see the sketch below).
> MG>A possible side-effect of relying only on the ZK provider is that it
> MG>would disable this flow-control capability of Kafka.
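> MG>A minimal sketch of that pause/resume pattern (broker address, group
> MG>id, and topic name are all assumptions here):
>
> import java.time.Duration;
> import java.util.Collections;
> import java.util.Properties;
>
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
>
> public class PauseResumeSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092"); // assumed broker
>         props.put("group.id", "hl7-consumers");           // hypothetical group
>         props.put("key.deserializer",
>             "org.apache.kafka.common.serialization.StringDeserializer");
>         props.put("value.deserializer",
>             "org.apache.kafka.common.serialization.StringDeserializer");
>
>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>             consumer.subscribe(Collections.singletonList("hl7.adt")); // hypothetical topic
>             while (true) {
>                 ConsumerRecords<String, String> records =
>                         consumer.poll(Duration.ofMillis(500));
>                 for (ConsumerRecord<String, String> r : records) {
>                     handle(r.value());
>                 }
>                 // Flow control: stop fetching while downstream is busy, but
>                 // keep calling poll() so the consumer stays in the group.
>                 if (downstreamBusy()) {
>                     consumer.pause(consumer.assignment());
>                 } else {
>                     consumer.resume(consumer.paused());
>                 }
>             }
>         }
>     }
>
>     static void handle(String hl7) { /* parse / persist */ }
>     static boolean downstreamBusy() { return false; } // placeholder check
> }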
> > Both the old and the new consumer APIs are available.
> >
> > https://community.hortonworks.com/articles/20318/visualize-patients-complaints-to-their-doctors-usi.html
> >
> > On Oct 12, 2016 4:33 PM, "Martin Gainty" <mgai...@hotmail.com> wrote:
> >
> > > I provisionally accomplished this task by embedding A01, A03, and A08
> > > HL7 event types into SOAP 1.2 envelopes.
> > > I remember having difficulty transporting over a non-dedicated
> > > transport such as what Kafka implements:
> > > Producer embeds Fragment1 into a SOAP envelope
> > > Producer sends the Fragment1 SOAP envelope of A01
> > > Consumer pulls Fragment1 of A01 from the SOAP 1.2 Body and places the
> > > SOAP envelope into a cache
> > > Consumer quiesces the connection, presumably so other SOAP 1.2
> > > messages can be transported
> > > Consumer re-activates the connection when sufficient bandwidth is
> > > detected (higher-priority SOAP 1.2 envelopes have been transmitted)
> > >
> > > Producer embeds Fragment2 into a SOAP envelope
> > > Producer sends the Fragment2 SOAP envelope of A01
> > > Consumer pulls Fragment2 of A01 from the SOAP 1.2 Body and places it
> > > into the cache
> > > When the consumer detects EOT, it aggregates the n fragments from the
> > > cache into the all-inclusive A01 event (see the sketch after this list)
> > > Consumer parses the A01 into segments
> > > Consumer parses the attributes of each segment
> > > Consumer inserts/updates segment attributes into the database
> > > Consumer displays the updated and/or inserted segment attributes in
> > > the UI
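> > > A minimal sketch of that consumer-side reassembly step (the messageId
> > > key and EOT flag are assumed to be carried in the SOAP envelope):
> > >
> > > import java.util.ArrayList;
> > > import java.util.List;
> > > import java.util.Map;
> > > import java.util.concurrent.ConcurrentHashMap;
> > >
> > > import ca.uhn.hl7v2.HL7Exception;
> > > import ca.uhn.hl7v2.model.Message;
> > > import ca.uhn.hl7v2.parser.PipeParser;
> > >
> > > public class FragmentCache {
> > >     // fragments cached per message id until EOT arrives
> > >     private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
> > >     private final PipeParser parser = new PipeParser();
> > >
> > >     // called once per fragment pulled from a SOAP 1.2 Body
> > >     public void onFragment(String messageId, String fragment, boolean eot)
> > >             throws HL7Exception {
> > >         cache.computeIfAbsent(messageId, id -> new ArrayList<>()).add(fragment);
> > >         if (eot) {
> > >             // aggregate the n fragments into the all-inclusive A01 event
> > >             String whole = String.join("", cache.remove(messageId));
> > >             Message a01 = parser.parse(whole);
> > >             // parse segments/attributes here, then insert/update the database
> > >         }
> > >     }
> > > }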
> > >
> > > Clear?
> > > Martin
> > > ______________________________________________
> > >
> > >
> > >
> > > > From: samglo...@cloudera.com
> > > > Date: Wed, 12 Oct 2016 09:22:32 -0500
> > > > Subject: HL7 messages to Kafka consumer
> > > > To: users@kafka.apache.org
> > > >
> > > > Has anyone done this? I'm working with a medical hospital company
> > > > that wants to ingest HL7 messages into Kafka cluster topics.
> > > >
> > > > Any guidance appreciated.
> > > >
> > > > --
> > > > *Sam Glover*
> > > > Solutions Architect
> > > >
> > > > *M*   512.550.5363 samglo...@cloudera.com
> > > > 515 Congress Ave, Suite 1212 | Austin, TX | 78701
> > > > Celebrating a decade of community accomplishments
> > > > cloudera.com/hadoop10
> > > > #hadoop10
> > >
>
>
