Sergey,

Are you proposing that:
1. We define a new .api2 that replaces the IFilter stuff with SPARQL.
2. We define a subset of SPARQL that could be used with .api2 for the purpose
of creating an adaptor CP, yet would still have acceptable performance.
3. We would implement the full new .api2 in any new CPs that are based on
RDF technology directly (e.g. Jena) or on something like RDF (e.g. XDI). [Of
course, as you know, Jena has an add-on SPARQL processor (ARQ), so if we use
ARQ + Jena we're "done" from a raw functionality point of view -- we just have
to adapt to the IdAS .api2; see the rough sketch below.]
4. We will inform Mary Ruddy ASAP about any new tech (e.g. ARQ or newer
versions of Jena) we want to use for this new CP so we can get the Eclipse
legal process going.
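
Re item 3, here is the kind of adapter I have in mind -- a rough, untested
sketch only. It assumes the lower CP's entities have already been copied into
a Jena Model; the Jena/ARQ calls are real, but the class and method names
around them are made up, not actual .api2 signatures:

import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;

// Hypothetical .api2 adapter: hand the SPARQL text to ARQ and evaluate it
// against a Model that mirrors the context's entities/attributes.
public class SparqlContextSketch {
    private final Model model;   // populated elsewhere from the lower CP

    public SparqlContextSketch(Model model) {
        this.model = model;
    }

    public void select(String sparql) {
        Query query = QueryFactory.create(sparql);
        QueryExecution exec = QueryExecutionFactory.create(query, model);
        try {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                System.out.println(row);  // a real CP would map rows to IdAS objects
            }
        } finally {
            exec.close();
        }
    }
}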

--Paul

On 10/15/09 12:16 PM, "Sergey Lyakhov" <[email protected]> wrote:

> Paul,
>  
> Actually, I did mean the following:
>  
> 1. Main point - implementing the full SPARQL specification in the Upper CP
> is a really difficult task. In other words, we can only
> implement "restricted" SPARQL functionality where some queries will not work.
>  
> 2. (as you understood) some semantics can't be expressed in the .api CP using
> .api.IFilter. For such queries (where regex() is present, for example) the Upper
> CP will work slowly.
>  
> Thanks,
> Sergey Lyakhov
>>  
>> ----- Original Message -----
>>  
>> From: Paul Trevithick <mailto:[email protected]>
>>
>> To: higgins-dev <mailto:[email protected]>
>>
>> Cc: Vadym Synakh <mailto:[email protected]>; Paul Trevithick
>> <mailto:[email protected]>; Igor Tsinman <mailto:[email protected]>
>>
>> Sent: Thursday, October 15, 2009 6:41 PM
>>
>> Subject: Re: [higgins-dev] IdAS changes proposal
>>  
>> 
>> Sergey,
>> 
>> Let me see if I understand what you are saying. Are you saying this:
>> 
>>  
>> * We could implement the .api2 CP as shown below, but it will be difficult
>> to implement many aspects of SPARQL in it because the semantics can't be
>> expressed in the .api CP using .api.IFilter.
>> 
>> If yes, then what I was thinking was different. I was not assuming that
>> .api.IFilter semantics were sufficient to express the SPARQL semantics
>> directly. I was, however, assuming that the upper .api2 CP may in some cases
>> have to read (using the lower .api CP) many, most, and sometimes ALL (!)
>> entities from the lower .api CP and perform the SPARQL WHERE filtering
>> algorithms itself. And this is why I was saying that the performance may be
>> very bad when this two-layer approach is taken.
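>>
>> In other words, the worst case looks roughly like this (a sketch only;
>> everything except the Jena/ARQ calls is a made-up name, and a real adapter
>> would of course be driven by IContext/IEntity/IAttribute from the lower .api):
>>
>> // (Jena imports: com.hp.hpl.jena.query.*, com.hp.hpl.jena.rdf.model.*)
>> ResultSet whereFilterInUpperCp(String sparqlText) {
>>     // worst case: touch EVERY entity in the lower CP and copy it into an
>>     // in-memory model before any SPARQL filtering can happen
>>     Model scratch = ModelFactory.createDefaultModel();
>>     for (IEntity entity : readAllEntitiesFromLowerCp()) {   // hypothetical helper
>>         copyEntityIntoModel(entity, scratch);               // hypothetical helper
>>     }
>>     // only now can ARQ evaluate the query's WHERE clause, over the copy
>>     Query query = QueryFactory.create(sparqlText);
>>     return QueryExecutionFactory.create(query, scratch).execSelect();
>> }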
>> 
>> I'm looking for a solution that allows the old .api to be maintained and
>> lets us reuse these "old" CPs by adapting them with the upper .api2 CP.
>> If the performance is too bad, then the developer can implement a "native"
>> (not two-layered) CP using .api2.
>> 
>> --Paul
>> 
>> On 10/15/09 11:27 AM, "Sergey Lyakhov" <[email protected]> wrote:
>> 
>>  
>>> Paul,
>>> 
>>>> > Do you think it is practical to implement this:
>>>> > +----------------------------------------+
>>>> > | Upper CP that implements .idas.api2    |
>>>> > | SPARQL api but read/writes "raw"       |
>>>> > | entities/attributes from lower CP      |
>>>> > +----------------------------------------+
>>>> > +----------------------------------------+
>>>> > | Lower CP implements existing .idas.api |
>>>> > +----------------------------------------+
>>> 
>>> I think we are able to implement the basic aspects of SPARQL, which will satisfy
>>> our requirements. However, it will be difficult to implement many aspects of
>>> SPARQL such as FILTER functions in the WHERE clause (moreover, there is no
>>> equivalent of those functions in idas.api.IFilter). For example, if I want
>>> to use the regex(..) SPARQL FILTER function in the Upper CP, I'll first need to
>>> select all entities from the old CP and then make an additional check, selecting
>>> the entities that match the regexp.
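>>>
>>> Roughly this kind of loop, in other words (illustration only: IEntity is the
>>> IdAS interface, Pattern/List/ArrayList are standard Java, and the two helper
>>> methods are made-up names, not real IdAS or CP calls):
>>>
>>> // (imports: java.util.regex.Pattern, java.util.List, java.util.ArrayList)
>>> List<IEntity> applyRegexFilter(String regexFromFilter, String attrId) {
>>>     Pattern p = Pattern.compile(regexFromFilter);
>>>     List<IEntity> matching = new ArrayList<IEntity>();
>>>     for (IEntity entity : selectAllEntitiesFromOldCp()) {          // hypothetical helper
>>>         String value = getAttributeValueAsString(entity, attrId);  // hypothetical helper
>>>         if (value != null && p.matcher(value).find()) {
>>>             matching.add(entity);   // only these pass the regex() FILTER
>>>         }
>>>     }
>>>     return matching;
>>> }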
>>> 
>>> Thanks,
>>> Sergey Lyakhov
>>>  
>>>> 
>>>> ----- Original Message -----
>>>>  
>>>> From: Paul Trevithick <mailto:[email protected]>
>>>>
>>>> To: higgins-dev <mailto:[email protected]>
>>>>
>>>> Cc: Vadym Synakh <mailto:[email protected]>; Paul Trevithick
>>>> <mailto:[email protected]>; Igor Tsinman
>>>> <mailto:[email protected]>
>>>>
>>>> Sent: Thursday, October 15, 2009 4:31 PM
>>>>
>>>> Subject: Re: [higgins-dev] IdAS changes proposal
>>>>  
>>>> 
>>>> Sergey,
>>>> 
>>>> Hmmm, this is a tough one. We don't want to lose the investments in the
>>>> existing CPs (the old .idas.api). Yet we don't want to create a burden
>>>> for new CP developers. While we mull this over, I have a question. Do you
>>>> think it is practical to implement this:
>>>> 
>>>>  
>>>>  
>>>>> +----------------------------------------+
>>>>> | Upper CP that implements .idas.api2    |
>>>>> | SPARQL api but read/writes "raw"       |
>>>>> | entities/attributes from lower CP      |
>>>>> +----------------------------------------+
>>>>> +----------------------------------------+
>>>>> | Lower CP implements existing .idas.api |
>>>>> +----------------------------------------+
>>>> 
>>>> If so, then we could maintain both the lower and the upper APIs. Any CP
>>>> that didn't want to support the .api2 (upper api) wouldn't have to, because
>>>> it could use the upper "adapter" CP. The result might be very slow, but at
>>>> least it (might) work. And if good SPARQL performance was required, then
>>>> the CP would be forced to do a native implementation of .idas.api2.
>>>> 
>>>> [One really interesting benefit of implementing SPARQL is that with the
>>>> above adapter plus a web service front end, we can expose any IdAS data
>>>> source as a SPARQL endpoint. Then we'd have XDI and SPARQL endpoints for
>>>> the Attribute Service. The Linked Open Data (LOD) semweb folks are
>>>> creating lots of SPARQL endpoints -- we'd dovetail with these efforts.]
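>>>>
>>>> (The web service front end could be as small as a servlet along these lines.
>>>> A sketch only: the class name is made up, and the Model would have to be
>>>> backed by the IdAS adapter described above; the Jena/ARQ and servlet calls
>>>> themselves are real.)
>>>>
>>>> import java.io.IOException;
>>>> import javax.servlet.http.HttpServlet;
>>>> import javax.servlet.http.HttpServletRequest;
>>>> import javax.servlet.http.HttpServletResponse;
>>>> import com.hp.hpl.jena.query.*;
>>>> import com.hp.hpl.jena.rdf.model.Model;
>>>>
>>>> public class SparqlEndpointSketch extends HttpServlet {
>>>>     private Model model;  // wired up to the IdAS adapter, not shown here
>>>>
>>>>     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>>>>             throws IOException {
>>>>         Query query = QueryFactory.create(req.getParameter("query"));
>>>>         QueryExecution exec = QueryExecutionFactory.create(query, model);
>>>>         try {
>>>>             resp.setContentType("application/sparql-results+xml");
>>>>             ResultSetFormatter.outputAsXML(resp.getOutputStream(), exec.execSelect());
>>>>         } finally {
>>>>             exec.close();
>>>>         }
>>>>     }
>>>> }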
>>>> 
>>>> --Paul
>>>> 
>>>> 
>>>> 
>>>> On 10/15/09 6:23 AM, "Sergey Lyakhov" <[email protected]> wrote:
>>>> 
>>>>  
>>>>  
>>>>> Paul,
>>>>> 
>>>>> Sorry for the delay.
>>>>> 
>>>>>> > 3. Jim Sermersheim invented IFilter because we needed something and
>>>>>> SPARQL wasn't yet established. Now that it is, I wonder if we shouldn't
>>>>>> give it another look
>>>>>  
>>>>> It would be very convenient to use SPARQL for RDF-based context
>>>>> providers (like the Jena CP). However, it would be hard to implement all
>>>>> aspects of SPARQL for context providers that are not based on RDF
>>>>> (JNDI, XML, Hibernate, etc.).
>>>>>> > When you go to make these changes, it will be critical to load into
>>>>>> your workbench every possible context
>>>>>> > provider that you can find so that you can fix them so that they
>>>>>> don't all break.
>>>>> 
>>>>> It will take a lot of work to implement the new filter/model for all
>>>>> providers. So, I suppose it makes sense to put the new IdAS interfaces
>>>>> into a new project (like org.eclipse.higgins.idas.api2) and then fix all
>>>>> providers to support these new interfaces. What do you think about
>>>>> this?
>>>>>  
>>>>> Thanks,
>>>>> Sergey Lyakhov
>>>>>  
>>>>>  
>>>>>> 
>>>>>> ----- Original Message -----
>>>>>>  
>>>>>> From: Paul Trevithick <mailto:[email protected]>
>>>>>>
>>>>>> To: higgins-dev <mailto:[email protected]>
>>>>>>
>>>>>> Cc: Vadym Synakh <mailto:[email protected]>; Paul Trevithick
>>>>>> <mailto:[email protected]>; Igor Tsinman
>>>>>> <mailto:[email protected]>
>>>>>>
>>>>>> Sent: Monday, September 28, 2009 3:11 AM
>>>>>>
>>>>>> Subject: Re: [higgins-dev] IdAS changes proposal
>>>>>>  
>>>>>> 
>>>>>> Sergey,
>>>>>> 
>>>>>> My responses:
>>>>>>  
>>>>>>  
>>>>>>  
>>>>>> 1. agree
>>>>>> 2. agree
>>>>>> 3. Jim Sermersheim invented IFilter because we needed something and
>>>>>> SPARQL wasn't yet established. Now that it is, I wonder if we
>>>>>> shouldn't give it another look
>>>>>> 4. (4.1): short answer: no. Longer answer: cdm.owl is an attempt to
>>>>>> approximate, in OWL, concepts that cannot be directly operationalized in
>>>>>> real RDF/OWL-based systems. Only higgins.owl should be imported and
>>>>>> used. Cdm.owl is just an attempt at explanation. It can be ignored.
>>>>>> (4.2) A lot of OWL URLs end in .owl, but it isn't a firm requirement
>>>>>> or convention.
>>>>>> 
>>>>>> When you go to make these changes, it will be critical to load into
>>>>>> your workbench every possible context provider that you can find so
>>>>>> that you can fix them and they don't all break.
>>>>>> 
>>>>>> --Paul
>>>>>> 
>>>>>> On 9/23/09 12:07 PM, "Sergey Lyakhov" <[email protected]>
>>>>>> wrote:
>>>>>> 
>>>>>>  
>>>>>>  
>>>>>>  
>>>>>>> Paul,
>>>>>>> 
>>>>>>> I suppose cdm:entityId is redundant and we can use rdf:ID instead.
>>>>>>> As a result:
>>>>>>>
>>>>>>> 1.1. In this case IEntity.getEntityID() will return the rdf:ID.
>>>>>>> 1.2. In the case of a blank entity (previously known as a complex value)
>>>>>>> it should return null.
>>>>>>> 1.3. The entityId attribute will be eliminated.
>>>>>>> 
>>>>>>> I suppose we need to make the following changes to the IdAS interfaces
>>>>>>> to be compatible with CDM:
>>>>>>> 
>>>>>>> 2.1. The BlankEntity class has been eliminated from cdm.owl. So, I
>>>>>>> suppose we need to do the same for the IdAS interfaces and replace
>>>>>>> IBlankEntity with IEntity (eliminate the IBlankEntity interface).
>>>>>>>  
>>>>>>> Because there is no difference between an entity and a complex value,
>>>>>>> we can define the following:
>>>>>>> 
>>>>>>> 2.2. If an Entity has been created by the IContext.addEntity(entityType,
>>>>>>> entityID) method, it should always have an entityID (it should not be a
>>>>>>> blank entity). In other words, the context should generate a unique value
>>>>>>> and use it as the entityId if no entityId is passed.
>>>>>>> 2.3. If an Entity has been created by the IAttribute.addValue(URI) method,
>>>>>>> it should be a blank entity.
>>>>>>> 2.4. If an Entity has been added by IAttribute.addValue(IAttributeValue),
>>>>>>> it should be the same type as the passed entity. If the passed entity is a
>>>>>>> blank entity, a new blank entity should be created as a copy of it;
>>>>>>> otherwise a reference to the existing (non-blank) entity should be
>>>>>>> created (see the sketch below).
>>>>>>> 2.5. When an Entity is deleted, all of its sub-entities that are blank
>>>>>>> entities should be deleted too.
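>>>>>>>
>>>>>>> To illustrate 2.4, roughly (a sketch only: IEntity/IAttribute/IAttributeValue
>>>>>>> and getEntityID() come from the points above, but the two helper methods
>>>>>>> are made-up names, not proposed API):
>>>>>>>
>>>>>>> void addValueSketch(IAttribute attr, IAttributeValue value) throws Exception {
>>>>>>>     if (value instanceof IEntity && ((IEntity) value).getEntityID() == null) {
>>>>>>>         // blank entity (per 1.2 getEntityID() returns null): deep-copy it
>>>>>>>         copyAsNewBlankEntity(attr, (IEntity) value);   // hypothetical helper
>>>>>>>     } else {
>>>>>>>         // named entity or simple value: store only a reference to it
>>>>>>>         addReferenceTo(attr, value);                   // hypothetical helper
>>>>>>>     }
>>>>>>> }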
>>>>>>>  
>>>>>>> Also, we need a more flexible IFilter API:
>>>>>>>  
>>>>>>> 3.1. IFilter should be able to query both kinds of entities, blank as
>>>>>>> well as regular.
>>>>>>> 3.2. IFilter should be able to query an individual value (entity or
>>>>>>> simple value) at any nesting level, not only the direct attributes of an
>>>>>>> Entity.
>>>>>>>  
>>>>>>> Also, I have some notes about CDM:
>>>>>>>
>>>>>>> 4.1. cdm.owl contains entityRelation and contextRelation object
>>>>>>> properties. Do we need to reflect them in the IdAS interfaces?
>>>>>>> 4.2. The namespace of cdm.owl,
>>>>>>> http://www.eclipse.org/higgins/ontologies/2008/6/cdm.owl, ends with
>>>>>>> .owl. Is that correct?
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Sergey Lyakhov
>>>>>>> 
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>>>> 
>>  
>> 
>> 

_______________________________________________
higgins-dev mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/higgins-dev
