Mark,

As an EDI and IT professional I agree with much of what you say below.
However, there is one point that I disagree with: your statement that
"Proving that application systems meet the new HIPAA business
requirements..." is a HIPAA requirement. Actually, it's not. All that the
law requires is that the transaction conform to the standard data content
and the standard format, which is based on the X12 standards. Whether the
internal apps have been remediated, replaced, or whatever is irrelevant.
I don't mean to imply that backend apps will never need to be remediated
or replaced at some point, only that doing so is not a requirement of
HIPAA.

Thus, in order to "prove" compliance with the HIPAA Electronic Transactions
Final Rule requirements, an entity only needs to demonstrate that it can
generate a complying transaction or receive a complying transaction. Once
that's been proven, the internal processing of the transaction data can be
totally manual or semi-automated or totally automated. The law doesn't care.
Of course, manual or semi-automated internal processing doesn't get one to
administrative simplification, does it? But the law does NOT mandate that
any covered entity redesign their internal business processes for cost
reductions and/or simplification. Those of us who have been in the EDI game
for a few decades know that just implementing an EDI capability doesn't
create any business benefit UNLESS the internal business processes are
redesigned as well.
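
To make the distinction concrete, here's a minimal sketch (my own illustration in Python, nowhere near a certified validator) of the kind of purely structural, format-level check the law cares about: does the interchange conform to the X12 envelope structure, regardless of what the internal apps did to produce it?

```python
# Minimal sketch (illustrative only, not a full HIPAA validator): a basic
# structural check on an X12 interchange. Segment and element separators
# are assumed to be "~" and "*" for simplicity.

def check_envelope(interchange: str) -> list[str]:
    """Return a list of structural problems found (empty list = passes)."""
    problems = []
    segments = [s for s in interchange.strip().split("~") if s]
    ids = [seg.split("*")[0] for seg in segments]

    # The interchange must open with ISA and close with IEA.
    if not ids or ids[0] != "ISA":
        problems.append("interchange does not start with ISA")
    if not ids or ids[-1] != "IEA":
        problems.append("interchange does not end with IEA")

    # Each transaction set runs from ST to SE; SE01 must equal the number
    # of segments in the set, counting ST and SE themselves.
    start = None
    for i, seg_id in enumerate(ids):
        if seg_id == "ST":
            start = i
        elif seg_id == "SE" and start is not None:
            count = i - start + 1
            se01 = segments[i].split("*")[1]
            if se01 != str(count):
                problems.append(f"SE01 is {se01}, expected {count}")
            start = None
    return problems
```

A real compliance test must of course also cover data content (code sets, required elements, and so on); the point is only that all of it is a test of the transaction, not of the backend systems.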

Re your comment about machine-readable IGs: what a concept. This capability
has existed for several years using the UN/EDIFACT IMPDEF message
specification. IMPDEF was developed cooperatively by UN/CEFACT and the X12
Committee, and it's specifically intended to convey an organization's
implementation definition for any X12 transaction set or UN/EDIFACT message
in a machine-readable syntax based on the international standard for EDI,
which all good commercial translators should be able to process.
Personally, my opinion is that it's a major shame that the HIPAA
implementation guide specifications were made available ONLY in the Adobe
.pdf format and not ALSO in the IMPDEF format for automated processing. Oh
well...
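
To give a flavor of what "machine readable" buys you (this is NOT actual IMPDEF syntax, just a toy illustration with made-up rules): once the implementation definition is expressed as data, one generic engine can validate any segment against it, instead of hand-coding every rule.

```python
# Toy illustration only -- not IMPDEF syntax. A hypothetical IG fragment,
# expressed as data: per-element usage ("R" = required, "S" = situational)
# and allowed code values for the NM1 segment.
IG_RULES = {
    "NM1": {
        1: {"usage": "R", "codes": {"IL", "PR", "1P"}},  # entity identifier
        2: {"usage": "R", "codes": {"1", "2"}},          # entity type
        3: {"usage": "S"},                               # last name (situational)
    },
}

def validate_segment(segment: str) -> list[str]:
    """Validate one X12 segment against the data-driven rules above."""
    elements = segment.split("*")
    seg_id = elements[0]
    errors = []
    for pos, rule in IG_RULES.get(seg_id, {}).items():
        value = elements[pos] if pos < len(elements) else ""
        if rule["usage"] == "R" and not value:
            errors.append(f"{seg_id}{pos:02d} is required")
        if value and "codes" in rule and value not in rule["codes"]:
            errors.append(f"{seg_id}{pos:02d} code '{value}' not allowed")
    return errors
```

The engine never changes; only the rule data does. That is exactly what a translator loading an IMPDEF-style definition would do at industrial scale.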

BTW, a colleague of mine already has the HIPAA 270, 820, 834 and 835 ICs and
a portion of the 837D in IMPDEF form.

Rachel
Rachel Foerster
Principal
Rachel Foerster & Associates, Ltd.
Professionals in EDI & Electronic Commerce
39432 North Avenue
Beach Park, IL 60099
Phone: 847-872-8070
Fax: 847-872-6860
http://www.rfa-edi.com


-----Original Message-----
From: Mark A Lott [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 16, 2002 12:31 PM
To: 'Christopher J. Feahr, OD'; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Cc: 'David Kibbe'; 'William J. Kammerer'; [EMAIL PROTECTED]; 'Kathleen
Connor'
Subject: RE: Testing for levels 3, 4, and 6


Chris,

I was referring to all CEs and the entire philosophy and methodology being
used today for EDI validation: creating test cases based on IGs, and testing
internal applications and translators. I have a very long history in the
field of quality assurance and in managing the certification of global
systems involving millions of trading partners using multiple file formats,
all of which must be accepted in the correct format, pass all internal
edits, be processed through the application, be balanced, and be checked
against AR and AP requirements before the partner is allowed to send files
in production. There are many lessons learned that we can apply to this
endeavor. Really understanding what the test conditions and test cases are
meant for is critical in order to understand what was tested, how it was
tested, what the results were, and how they compared to the expected
results. Proving out application systems against the new HIPAA business
requirements, regression test cases, translator mappings, and EDI
validation (to name a few) is very serious business and should be treated
as such.

In analyzing the current methods being used to achieve these same results
for HIPAA, it struck me that we need a paradigm shift into the world of
quality assurance education and best practices. I do not see these
principles being brought forward in literature or discussions. That is why
we are forming a group of like-minded QA professionals to enhance the
current education initiatives underway. For instance, there are currently
many CEs that are trying to create test files using EDI test file
generators, hiring consultants to learn them, in the hope of creating test
cases to validate that applications and translators are functioning
according to business requirements. This approach is time consuming,
costly, and very error prone due to many factors, including the
understanding of the IGs, the test-file-tool learning curve, and the myriad
combinations based on the situational elements. It is unreasonable to place
that burden upon the business stakeholders, business analysts and test
analysts. There is a much more educated process that can accomplish exactly
what the industry needs to do without throwing more people and money at the
problem.

I have said all along that stand-alone EDI validation tools or integrated
EDI validators within translators are only as good as the developers and
testers who built them, and since we are all human there are bound to be
errors. There is a way around this problem. What is worse is that some CEs
are not even using an EDI validation tool and are just trusting their
translators' claims of compliance. Having a proper HIPAA file format does
not mean the system will produce a compliant file. Some organizations will
have a rude awakening! It all goes back to IT departments thinking that
quality assurance is a nice-to-have, when it is a must-have. I also agree
with you that trading partners must test; there is no logical way for them
to avoid this, and anyone pushing to avoid it will only cause additional
problems once applications go live. And by the way, errors found in
production have the highest cost to fix. There is a way to reduce future
trading partner testing requirements, but it isn't done just by looking at
a set of EDI validation reports.

You are also correct that having people try to decompose IGs to understand
them and create test cases is tedious, error prone, rework-heavy, not very
cost effective, and doesn't make good business sense.
Machine-readable IGs... what a concept!  I compliment you on your forward
thinking. Look for it very shortly.... I expect it to have a tremendous
effect on the testing initiatives for all CEs. Wish I could share more...

On a side note: I have about three feet of rope I can let you borrow. I
know it's not long enough, but I am sure the other members of the group
might be able to contribute the rest.....

Mark A Lott
President
HIPAA Testing, Inc.

www.hipaatesting.com

Office: 480-946-7200
Cell:   480-580-4415
Fax:  877-825-8309

 -----Original Message-----
From:   Christopher J. Feahr, OD [mailto:[EMAIL PROTECTED]]
Sent:   Monday, July 15, 2002 8:09 PM
To:     '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
Cc:     David Kibbe; William J. Kammerer; [EMAIL PROTECTED]; Kathleen Connor
Subject:        RE: Testing for levels 3, 4, and 6

Dear Group:
(I apologize for the length.. but I think this is one of our most important
issues at the moment)

If I understood Mark Lott's basic premise, it seemed to be oriented toward
*providers* testing their claim generation systems.  Since there is no
standard format for the data before it becomes a "claim", I would think
that each PMS vendor will have to create a test bed of typical encounter
data, recognizable to his system.... perhaps by de-identifying several
thousand sets of real encounter data from a client's working system.

Even so,  the PMS vendor must still somehow determine if he is creating
defective transaction sets out of this data... i.e. he must apply the 7
types of scrutiny being proposed by WEDI testing group.

This leads to a core issue that continues to pop up and was a theme of a
mini (un-official) "white paper" circulated recently on the testing
listserv.  Ultimately, CEs want to compare their test transactions
directly to the *IG*.  But since the logic of the IG is only represented in
human-readable form, we are forced to build validation engines that (to the
best of each builder's knowledge) faithfully represent the logic of the
IG.  Lots of people are building these engines... some offered as
independent testing services and others simply as a component of the local
"translator" application.  THE PROBLEM THAT WON'T GO AWAY, however, is that
none of these transaction-validation engines can be ABSOLUTELY determined
or "certified" to be working correctly.... for several reasons, including:

1. You cannot reliably map human *understanding* of a complex, narrative
description of a methodology into a hard computer application.  By its very
nature, such a mapping is ALWAYS going to be a little fuzzy, and will
become clearer and more accurate with each revision... much like mapping a
collection of human-understandable business requirements into a business
application.
2. Inconsistencies and ambiguities certainly do exist within and among the
present IGs.  In fact, TG3 has a committee combing through the IGs,
documenting these as we speak.
3. The situational elements, which depend simply on whether or not a payor
wants the data, will require thousands of "companion guides" that would be
impractical to try to program into standard/independent testing engines.
These will still require testing by each trading pair.
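
Point 3 can be sketched in a few lines (hypothetical rule format and payer names, not any real companion-guide syntax): the base IG leaves elements "situational", each payer's companion guide pins them down differently, so even a single shared validation engine needs a per-payer overlay... and each trading pair still has something unique to test.

```python
# Hypothetical sketch -- the loop/element names and payers are invented for
# illustration. Usage codes: "R" = required, "S" = situational.
BASE_IG = {"2100C/NM109": "S", "2100C/DMG02": "S"}

# Each payer's companion guide resolves the situational elements its own way.
COMPANION_GUIDES = {
    "PAYER_A": {"2100C/NM109": "R"},  # Payer A insists on the member ID
    "PAYER_B": {"2100C/DMG02": "R"},  # Payer B insists on the birth date
}

def effective_rules(payer: str) -> dict[str, str]:
    """Base IG usage codes with one payer's companion-guide overrides applied."""
    rules = dict(BASE_IG)
    rules.update(COMPANION_GUIDES.get(payer, {}))
    return rules
```

The same transaction can therefore be clean against one payer's effective rules and defective against another's, which is why no single pre-certification can substitute for trading-pair testing.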

All of this seems to cry out for machine-readable IG formats.  Until those
exist, however, it would seem to be "every man for himself" with regard to
"testing"... with the "buck stopping" ultimately at the level of each
individual set of trading partners.  Several folks have "done the math" on
testing each trading pair and determined (correctly) that there aren't
enough testing-hours available to do that... assuming that at least some of
our resources have to be reserved for delivering health care!

Even if we DO agree on ONE common validation engine... perhaps based on
some open-source logic-model that neutralized all the "anti-competitive"
issues... it could still have a logic flaw that was not discovered until
thousands of CEs had already been "certified" with it.  Given the
complexity of the logic and the almost infinite variety of data conditions,
I would imagine that flaws are more or less guaranteed.

I can't visualize a short-term (i.e., before H-day) solution for this, but
one thing has become very clear to me:

Standards CONCEPTS for a machine-based information system MUST be
accompanied by a STANDARD, MACHINE-READABLE methodology for REPRESENTING
those concepts.  As long as the primary representation of the standard
remains in an exclusively human-readable form (e.g., a .pdf document), then
the standards concepts will necessarily have to be filtered through a
SECOND committee of human brains, before they are incorporated into EACH
business application.  This second pass through a brain-committee (the SDO,
being the initial pass) introduces unnecessary interpretive error and
should be eliminated.  Wherever possible, the SDO committee should deliver
the initial standard/IG in a machine readable form that has been examined
for internal consistency and checked for conflict with other standards.

Besides being horribly expensive to understand and implement, our present
paper-based IGs will be a NIGHTMARE to maintain and keep harmonized with
other IGs.  Maintenance and IG-harmonization could be vastly streamlined
with machine-readable IGs.  I think this needs to have a very high priority
assigned to it.  In fact, a methodology capable of eliminating this much
basic implementation and maintenance expense across an entire industry,
deserves a professional (i.e., paid) development team and serious funding.

Folks, we have the domain experts... we have the IT experts... we have the
excellent (albeit, human readable) IGs that have been carefully hammered
out in over a decade of discussion... we at least have the money that we
are poised to spend on implementing and maintaining this the "hard
way".  All we need is the will to spend that money today on a fast-track
implementation of the BETTER WAY to represent standards.  I don't see how
we can lose with such an investment... unless the task of making IGs
machine readable is a lot harder than I think it is.  But when you think
about it, we are ultimately doing that anyway...  although in a
non-standard way, one user-application at a time.  Essentially, each
translator vendor is making his/her best guess at a "machine readable
representation of the IG".

I realize that, to some extent, this suggestion pushes the "standards
committee" toward something that looks more like a "software development
house", but I think this is an unavoidable course for standards
development/maintenance... if we expect standards to keep up with the state
of the art in business applications, or (perhaps more importantly) if we do
not want standards to become a burden that slows down the development of
innovative business applications.  The present volunteer-driven SDO model
will continue to be the optimal "incubator" environment for standards.  To
sustain what I'm suggesting here, however, we will need to add a
non-profit, revenue-based wing, in which license fees from sophisticated,
machine-readable standards fuel a more organized and efficient SDO
model.  This entity can also handle registries and many other industry-wide
components.

(What was that? Did somebody say, "Get a rope!"?)

Best regards,
Chris





======================================================
The WEDI SNIP listserv to which you are subscribed is not moderated.  The discussions 
on this listserv therefore represent the views of the individual participants, and do 
not necessarily represent the views of the WEDI Board of Directors nor WEDI SNIP.  If 
you wish to receive an official opinion, post your question to the WEDI SNIP Issues 
Database at http://snip.wedi.org/tracking/.
Posting of advertisements or other commercial use of this listserv is specifically 
prohibited.
