LTK Team,

The following is a short summary to inform you of the progress being
made on the LLRP-Java API.  We had a very productive call that included
Christian and Matthias (from Auto-ID) and Prasith, Kyle, and Matt (from
Pramari).  Putting so many Java people together is usually a recipe for
disaster, but this time it turned out very well.  We came to a
consensus on a lot of items, and in general we feel that we came up
with a good design and roadmap for building a robust and reusable Java
Toolkit API.

The following is a short summary of the discussion items and the
decisions that were made, as well as some of the reasoning behind them.
A few issues also came up which will be handled with further offline
research and another call.

1)  Design/Philosophical High Level Items
      - We all agreed that it would be beneficial to use the common
LLRP abstract and binary descriptions (XML and/or XSD; see point 3
below) as the base framework for building out the new framework
classes.
      - We also agreed that in order for the toolkit to be maintainable
it should fully implement either a code generation approach or a full
API approach.  A hybrid approach is not beneficial, as it causes
ambiguity about where and how the code should be changed.  This
excludes some base classes which hopefully will only be built once,
such as the types and parent classes.
      - The role of each message class is first to encode/decode the
binary form and then to encode/decode XML (based on llrpdef); see the
sketch after this list.
      - We all agreed that Joe Hoag's initial contribution is of great
value; rather than re-invent the wheel with this new design, we will
build upon what he started and incorporate the knowledge and best
practices we have gained from the past few months of being knee-deep
in LLRP.
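
To make the "binary first, then XML" role a bit more concrete, here is
a rough sketch of what a hand-written parent class for the generated
message classes could look like.  All of the names here (LLRPMessage,
encodeBinary, toXMLString, and so on) are placeholders for
illustration, not the agreed API:

    // Sketch only -- class and method names are placeholders, not the
    // agreed LTK-Java API.
    public abstract class LLRPMessage {

        // The bit-level wire format comes first: every generated
        // message class must be able to serialize itself to LLRP binary
        // and reconstruct itself from a received frame.
        public abstract byte[] encodeBinary();

        public abstract void decodeBinary(byte[] data);

        // XML encode/decode is layered on top and driven by llrpdef, so
        // the same message object can also round-trip through XML.
        public abstract String toXMLString();

        public abstract void fromXMLString(String xml);
    }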

2)  Lower level Design Items
      - We agreed that the toolkit will strictly be a message API,
which means that for now we will not have a connection or device
communication profile.  The role of the API is to provide a
well-designed Java library for generating and understanding bit-level
LLRP messages.  However, the design will be open enough that, as
readers start hitting the market, anyone can contribute connection
modules and different interfaces and make the toolkit more robust.
      - After some discussion, we favor having each message class carry
its own encode/decode methods, as opposed to one monolithic
encoder/decoder class.  We felt that this is better for modularity and
understandability, and it also works better with our code generation
approach, where we can lay out the encode/decode structures for each
class.
      - The type structures are a necessity; we generally agree that
this is an area that was not complete earlier.
      - We also mutually agreed on strongly typed choice parameters
(interfaces) for items such as Specs; see the sketch after this list.
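
As an illustration of the strongly typed choice idea, the generator
could emit a marker interface per choice and accept only that interface
in the add/set methods, so an illegal parameter is caught by the
compiler instead of at runtime.  The names below (SpecParameter,
AISpec, RFSurveySpec, ROSpec) are borrowed from the LLRP spec for
flavor, but the sketch itself is only an assumption about the eventual
API:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of a strongly typed choice -- the real shape will come out
    // of the code generation work, so treat this as illustrative only.
    interface SpecParameter { /* marker interface for the Spec choice */ }

    class AISpec implements SpecParameter { /* fields omitted */ }

    class RFSurveySpec implements SpecParameter { /* fields omitted */ }

    class ROSpec {
        private final List<SpecParameter> specParameterList =
                new ArrayList<SpecParameter>();

        // Only types implementing SpecParameter can be passed in, so
        // the choice constraint is enforced at compile time.
        void addToSpecParameterList(SpecParameter spec) {
            specParameterList.add(spec);
        }
    }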

3)  XSD/XML Discussion (warning: near-religious topics ahead)
     - One of the more interesting topics, and something the group
dived into in depth, was the necessity of having both LLRP.xsd (the
abstract LLRP description) and LLRPDef.xml (the LLRP binary encoding)
in the LLRP common package.  Even though we understand the intent of
each of them, the redundancy in what they describe and their somewhat
similar roles left us wondering whether both are absolutely necessary.
Although we had differing views on which is more useful (XSD or XML),
we both agreed that certain elements and information are duplicated
between the two.
      * This is an item that Christian and his group will research
further and provide a detailed analysis of; we would like to defer
further discussion to that email thread.

4)  Next Steps
   - Pramari team to look into Apache Velocity and compare it with the
initial approach of using XSLT, listing benefits/drawbacks.
   - Auto-ID team to look into the benefits of XSLT and compare them
with their existing Velocity knowledge.
   - Make a decision on code generation in the next call and come up
with reference code generation description templates.
   - Share design source files and make some minor tweaks to some
message methods and convenience methods (ongoing and as needed).
   - First map out the binary encode/decode process and validate it
(see the round-trip test sketch after this list).
   - Then create the XML encode/decode process, validate it, and
provide reference samples for community involvement.
   - Create a testing strategy for the generated classes.
   - Pramari to validate the new library by integrating it into Rifidi
and doing an old/new comparison.
   - And many more that we have not foreseen :)
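
For the binary validation step above, one simple kind of test we could
generate or write by hand is a round-trip check: decode a known-good
frame, re-encode it, and compare the bytes.  This is only a sketch
under the assumption of a JUnit setup; LLRPMessageFactory,
decodeBinary, encodeBinary, and the resource path are placeholders:

    import static org.junit.Assert.assertArrayEquals;

    import java.io.DataInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import org.junit.Test;

    // Illustrative round-trip test only; the factory and method names
    // are placeholders, not the agreed LTK-Java API.
    public class BinaryRoundTripTest {

        @Test
        public void decodeThenEncodeYieldsOriginalBytes() throws Exception {
            // A known-good frame captured from a reader or taken from
            // the reference samples (placeholder path).
            File frame = new File(
                    "src/test/resources/GET_READER_CAPABILITIES.bin");
            byte[] original = readFrame(frame);

            LLRPMessage message = LLRPMessageFactory.decodeBinary(original);
            byte[] reencoded = message.encodeBinary();

            // If both directions are correct, the bytes must match
            // exactly.
            assertArrayEquals(original, reencoded);
        }

        private byte[] readFrame(File file) throws Exception {
            DataInputStream in =
                    new DataInputStream(new FileInputStream(file));
            try {
                byte[] data = new byte[(int) file.length()];
                in.readFully(data);
                return data;
            } finally {
                in.close();
            }
        }
    }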

So that is where the LLRP-Java effort stands right now, and I think
that we made great progress this week.  We are almost at a design
consensus, and as soon as we finish some more research and make a
decision we will start on the implementation.  We also have a clear
plan for how to proceed, and we believe that once the initial framework
is complete it will be coherent enough that other people can contribute
encode/decode specs for binary and XML and help execute a coherent
testing strategy.

We are of course open to suggestions, so please feel free to comment on
what you see above.  Also, if anyone else is interested in contributing
to the Java toolkit, please let us know.  As soon as we can nail down
the design and implement some of the vital pieces, we should have
enough examples and documentation for others to jump in and take a
crack at it.

Thanks,

LTK-Java Team

