Re: [higgins-dev] IdAS changes proposal

Paul,

> Do you think it is practical to implement this:
> +----------------------------------------+
> | Upper CP that implements .idas.api2    |
> | SPARQL api but read/writes "raw"       |
> | entities/attributes from lower CP      |
> +----------------------------------------+
> +----------------------------------------+
> | Lower CP implements existing .idas.api |
> +----------------------------------------+

I think we are able to implement the basic aspects of SPARQL that will satisfy 
our requirements. However, it will be difficult to implement many SPARQL 
features, such as FILTER functions in the WHERE clause (moreover, there is no 
equivalent of those functions in idas.api.IFilter). For example, if I want to 
use the regex(..) SPARQL FILTER function in the Upper CP, I will first need to 
select all entities from the old CP and then make an additional pass that keeps 
only the entities matching the regexp, roughly as sketched below.
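
Roughly, that fallback would look like this (the IdAS calls follow the shape of 
the existing .idas.api, but their exact signatures and the helper class are 
assumptions, just for illustration):

    import java.net.URI;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.regex.Pattern;

    // Hypothetical post-filtering step: emulate SPARQL's regex() FILTER on top
    // of the old API by fetching every entity and checking attribute values in memory.
    public class RegexPostFilter {

        public static List<IEntity> filterByRegex(IContext context, URI attrId,
                                                  String regex) throws IdASException {
            Pattern pattern = Pattern.compile(regex);
            List<IEntity> matches = new ArrayList<IEntity>();
            // No IFilter equivalent of regex(), so select everything (assumed signature).
            Iterator<IEntity> all = context.getEntities(null);
            while (all.hasNext()) {
                IEntity entity = all.next();
                IAttribute attr = entity.getAttribute(attrId);
                if (attr == null) {
                    continue;
                }
                for (Iterator<IAttributeValue> vals = attr.getValues(); vals.hasNext();) {
                    if (pattern.matcher(String.valueOf(vals.next().getValue())).matches()) {
                        matches.add(entity);
                        break; // one matching value is enough
                    }
                }
            }
            return matches;
        }
    }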

Thanks,
Sergey Lyakhov
  ----- Original Message ----- 
  From: Paul Trevithick 
  To: higgins-dev 
  Cc: Vadym Synakh ; Paul Trevithick ; Igor Tsinman 
  Sent: Thursday, October 15, 2009 4:31 PM
  Subject: Re: [higgins-dev] IdAS changes proposal


  Sergey,

  Hmmm, this is a tough one. We don't want to lose the investments in the 
existing CPs (the old .idas.api). Yet we don't want to create a burden for new 
CP developers. While we mull this over, I have a question. Do you think it is 
practical to implement this:


    +----------------------------------------+
    | Upper CP that implements .idas.api2    |
    | SPARQL api but read/writes "raw"       |
    | entities/attributes from lower CP      |
    +----------------------------------------+
    +----------------------------------------+
    | Lower CP implements existing .idas.api |
    +----------------------------------------+


  If so, then we could maintain both the lower and the upper APIs. Any CP that 
didn't want to support .api2 (the upper API) wouldn't have to, because it could 
rely on the upper "adapter" CP. The result might be very slow, but at least it 
(might) work. And if good SPARQL performance were required, then the CP would 
be forced to do a native implementation of .idas.api2.
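
  To make the question concrete, here is a rough sketch of how such an upper 
adapter might answer a SPARQL query by dumping the lower CP's "raw" 
entities/attributes into an in-memory Jena model and letting ARQ evaluate it. 
The .idas.api2 shape and the lower-CP accessors are guesses on my part; only 
the Jena calls are real:

    import java.util.Iterator;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Resource;

    // Hypothetical upper adapter: implements the proposed SPARQL-based .idas.api2
    // on top of any existing .idas.api context provider (the lower CP).
    public class SparqlAdapterContext /* implements api2.IContext */ {

        private final IContext lower; // existing .idas.api context

        public SparqlAdapterContext(IContext lower) {
            this.lower = lower;
        }

        // Naive approach: copy every entity/attribute into a Jena model, then query.
        public ResultSet executeSelect(String sparql) throws IdASException {
            Model model = ModelFactory.createDefaultModel();
            for (Iterator<IEntity> it = lower.getEntities(null); it.hasNext();) {
                IEntity entity = it.next();
                Resource subject = model.createResource(String.valueOf(entity.getEntityID()));
                for (Iterator<IAttribute> attrs = entity.getAttributes(); attrs.hasNext();) {
                    IAttribute attr = attrs.next();
                    for (Iterator<IAttributeValue> vals = attr.getValues(); vals.hasNext();) {
                        subject.addProperty(model.createProperty(attr.getAttrID().toString()),
                                            String.valueOf(vals.next().getValue()));
                    }
                }
            }
            QueryExecution qe = QueryExecutionFactory.create(sparql, model);
            return qe.execSelect(); // slow, but it (might) work
        }
    }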

  [One really interesting benefit of implementing SPARQL is that with the above 
adapter plus a web service front end, we can expose any IdAS data source as a 
SPARQL endpoint. Then we'd have both XDI and SPARQL endpoints for the Attribute 
Service. The Linked Open Data (LOD) semweb folks are creating lots of SPARQL 
endpoints, and we'd dovetail with these efforts.]
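
  The web service front end could be little more than a servlet that runs the 
query and streams back SPARQL XML results. Here SparqlAdapterContext is the 
hypothetical adapter sketched above; only the servlet API and Jena's 
ResultSetFormatter are standard:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.query.ResultSetFormatter;

    // Hypothetical front end exposing an IdAS context as a SPARQL endpoint.
    public class SparqlEndpointServlet extends HttpServlet {

        private SparqlAdapterContext adapter; // wraps some lower CP (see sketch above)

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                ResultSet results = adapter.executeSelect(req.getParameter("query"));
                resp.setContentType("application/sparql-results+xml");
                ResultSetFormatter.outputAsXML(resp.getOutputStream(), results);
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, e.getMessage());
            }
        }
    }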

  --Paul



  On 10/15/09 6:23 AM, "Sergey Lyakhov" <[email protected]> wrote:


    Paul,

    Sorry for delay.

    > 3. Jim Sermersheim invented IFilter because we needed something and 
SPARQL wasn't yet established. Now that it is, I wonder if we shouldn't give it 
another look 
     
    It would be very convenient to use SPARQL for RDF-based context providers 
(like the Jena CP), where the query can be handed straight to the underlying 
store, as in the example below. However, it would be hard to implement all 
aspects of SPARQL for context providers that are not based on RDF (JNDI, XML, 
Hibernate, etc.).
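
    For an RDF-based provider nothing needs to be translated: the query, 
FILTERs included, is evaluated natively by ARQ against the model the CP already 
manages. The Jena calls below are real; the prefix and property URIs are 
placeholders, not higgins.owl terms:

    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QuerySolution;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.rdf.model.Model;

    public class JenaDirectQuery {

        // Run a SPARQL query with a regex FILTER directly on the Jena CP's model.
        public static void printMatches(Model model) {
            String sparql =
                "PREFIX ex: <http://example.org/terms#> " +
                "SELECT ?e ?name WHERE { ?e ex:name ?name . " +
                "FILTER regex(str(?name), \"^Ser\") }";
            QueryExecution qe = QueryExecutionFactory.create(sparql, model);
            try {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.nextSolution();
                    System.out.println(row.get("e") + " -> " + row.get("name"));
                }
            } finally {
                qe.close();
            }
        }
    }
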
    > When you go to make these changes, it will be critical to load into your 
workbench every possible context 
    > provider that you can find so that you can fix them so that they don't 
all break.

    It will take a lot of work to implement the new filter/model for all 
providers. So, I think it makes sense to put the new IdAS interfaces into a new 
project (like org.eclipse.higgins.idas.api2) and then fix all providers to 
support these new interfaces. What do you think about this?
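
    The new project could start from something as small as the following 
hypothetical sketch, leaving org.eclipse.higgins.idas.api untouched so that the 
existing providers keep compiling while they are migrated one by one:

    package org.eclipse.higgins.idas.api2;

    // Hypothetical api2 context: queries are expressed in SPARQL instead of
    // idas.api.IFilter. Nothing in the old org.eclipse.higgins.idas.api changes.
    public interface IContext2 {

        // Execute a SPARQL SELECT against this context's entities/attributes.
        // The concrete result type (Jena ResultSet, cursor, iterator...) is still open.
        Object executeSelect(String sparqlQuery) throws Exception;
    }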
     
    Thanks,
    Sergey Lyakhov


      ----- Original Message ----- 
      From: Paul Trevithick 
      To: higgins-dev 
      Cc: Vadym Synakh ; Paul Trevithick ; Igor Tsinman 
      Sent: Monday, September 28, 2009 3:11 AM
      Subject: Re: [higgins-dev] IdAS changes proposal

      Sergey,

      My responses:
       

        1. Agree.
        2. Agree.
        3. Jim Sermersheim invented IFilter because we needed something and 
SPARQL wasn't yet established. Now that it is, I wonder if we shouldn't give it 
another look.
        4. (4.1): Short answer: no. Longer answer: cdm.owl is an attempt to 
approximate in OWL concepts that cannot be directly operationalized in real 
RDF/OWL-based systems. Only higgins.owl should be imported and used. Cdm.owl is 
just an attempt at explanation; it can be ignored. (4.2) A lot of OWL URLs end 
in .owl, but it isn't a firm requirement or convention.


      When you go to make these changes, it will be critical to load into your 
workbench every possible context provider that you can find, so that you can 
fix them all and none of them break.

      --Paul

      On 9/23/09 12:07 PM, "Sergey Lyakhov" <[email protected]> wrote:

       

        Paul,

        I suppose cdm:entityId is redundant and we can use rdf:ID instead. As a 
result (a short sketch of the resulting contract follows this list):

        1.1. In this case, IEntity.getEntityID() will return the rdf:ID.
        1.2. In the case of a blank entity (previously known as a complex 
value), it should return null.
        1.3. The entityId attribute will be eliminated.
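
        In code terms the rule could be checked roughly like this (the package 
and method names follow the existing API as I understand it, and the helper 
class itself is hypothetical):

            import org.eclipse.higgins.idas.api.IEntity;
            import org.eclipse.higgins.idas.api.IdASException;

            // Illustrative helper: with cdm:entityId gone, "is this a blank entity?"
            // reduces to "does getEntityID() return null?" (rules 1.1-1.3).
            public final class EntityIds {

                private EntityIds() {}

                public static boolean isBlank(IEntity entity) throws IdASException {
                    // Non-blank entities return their rdf:ID; blank ones return null.
                    return entity.getEntityID() == null;
                }
            }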

        I suppose we need to make the following changes to the IdAS interfaces 
to be compatible with the CDM:

        2.1. The BlankEntity class has been eliminated from cdm.owl. So, I 
suppose we need to do the same for the IdAS interfaces and replace IBlankEntity 
with IEntity (i.e., eliminate the IBlankEntity interface).

        Because there is no longer any difference between an entity and a 
complex value, we can define the following (a sketch of rules 2.2-2.5 follows 
this list):

        2.2. If an Entity has been created by the IContext.addEntity(entityType, 
entityID) method, it should always have an entityID (it should not be a blank 
entity). In other words, a unique value should be generated by the context and 
used as the entityId if no entityId is passed.
        2.3. If an Entity has been created by the IAttribute.addValue(URI) 
method, it should be a blank entity.
        2.4. If an Entity has been added by IAttribute.addValue(IAttributeValue), 
it should be of the same type as the passed entity. If the passed entity is a 
blank entity, a new blank entity should be created as a copy of it; otherwise, 
a reference to the existing (non-blank) entity should be created.
        2.5. When an Entity is deleted, all of its blank subentities should be 
deleted too.
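
        A rough sketch of how a context provider might honor rules 2.2-2.5 (the 
method names echo the existing IdAS interfaces, but BasicEntity/BasicAttribute 
and all helpers are made up purely for illustration):

            import java.net.URI;
            import java.util.UUID;

            // Illustrative fragment of a context provider honoring rules 2.2-2.5.
            public class RuleSketch {

                // 2.2: generate an entityId when the caller does not supply one.
                public IEntity addEntity(URI entityType, String entityID) {
                    String id = (entityID != null) ? entityID : UUID.randomUUID().toString();
                    return new BasicEntity(entityType, id);            // never blank
                }

                // 2.3: a value created from just a type URI is a blank entity.
                public IEntity addValue(BasicAttribute attr, URI entityType) {
                    BasicEntity blank = new BasicEntity(entityType, null); // getEntityID() == null
                    attr.add(blank);
                    return blank;
                }

                // 2.4: copy blank entities, reference non-blank ones.
                public IEntity addValue(BasicAttribute attr, IEntity value) {
                    IEntity stored = (value.getEntityID() == null)
                            ? BasicEntity.copyOf(value)    // blank -> deep copy
                            : value;                       // non-blank -> reference
                    attr.add(stored);
                    return stored;
                }

                // 2.5: deleting an entity removes its blank subentities as well.
                public void removeEntity(BasicEntity entity) {
                    for (IEntity sub : entity.getBlankSubentities()) {
                        removeEntity((BasicEntity) sub);   // cascade only through blanks
                    }
                    entity.detachFromContext();
                }
            }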
         
        We also need a more flexible IFilter API:

        3.1. IFilter should be able to query both kinds of entities, blank as 
well as regular.
        3.2. IFilter should be able to query an individual value (entity or 
simple value) at any nesting level, not only the direct attributes of an 
Entity. (A SPARQL example of such a nested query follows this list.)
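
        For comparison, a query like 3.2 is natural to express in SPARQL, which 
is another argument for giving it a second look. For example (the ex: property 
URIs below are placeholders, not higgins.owl terms):

            // Entities whose (possibly blank) postal-address subentity has a given
            // postal code: nested and blank-node-friendly, which IFilter cannot
            // express today (rules 3.1 and 3.2).
            public class NestedQueryExample {

                public static final String QUERY =
                    "PREFIX ex: <http://example.org/terms#> " +
                    "SELECT ?person WHERE { " +
                    "  ?person ex:address ?addr . " +      // ?addr may be a blank node (3.1)
                    "  ?addr ex:postalCode ?code . " +     // nested value, not a direct attribute (3.2)
                    "  FILTER (?code = \"02139\") }";
            }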
         
        I also have some notes about the CDM:

        4.1. cdm.owl contains the entityRelation and contextRelation object 
properties. Do we need to reflect them in the IdAS interfaces?
        4.2. The namespace of cdm.owl, 
http://www.eclipse.org/higgins/ontologies/2008/6/cdm.owl, ends with .owl. Is 
that correct?

        Thanks,
        Sergey  Lyakhov




       





_______________________________________________
higgins-dev mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/higgins-dev
