Thanks for the pointer, I will take a look there.
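
For anyone following the thread: from a first read of the ARQ source, I think the shape of the OpExecutor route is roughly the sketch below. The class and method names (OpExecutor, OpExecutorFactory, QC.setFactory, the OpBGP execute hook) are my reading of org.apache.jena.sparql.engine.main and may not match the current release exactly, so please treat this as an assumption rather than a recipe.

    import org.apache.jena.sparql.algebra.op.OpBGP;
    import org.apache.jena.sparql.engine.ExecutionContext;
    import org.apache.jena.sparql.engine.QueryIterator;
    import org.apache.jena.sparql.engine.main.OpExecutor;
    import org.apache.jena.sparql.engine.main.OpExecutorFactory;
    import org.apache.jena.sparql.engine.main.QC;

    // Executor that intercepts basic graph patterns so they can be answered
    // from the IRI structure instead of from stored triples.
    public class VirtualOpExecutor extends OpExecutor {

        // Factory ARQ uses to create one executor per query execution.
        public static final OpExecutorFactory FACTORY = VirtualOpExecutor::new;

        protected VirtualOpExecutor(ExecutionContext execCxt) {
            super(execCxt);
        }

        @Override
        protected QueryIterator execute(OpBGP opBGP, QueryIterator input) {
            // Handle single triple patterns (or small joins) here by
            // computing bindings directly; anything not handled falls
            // back to the default behaviour.
            return super.execute(opBGP, input);
        }
    }

    // Registration, e.g. when setting up the dataset for the endpoint:
    //   QC.setFactory(dataset.getContext(), VirtualOpExecutor.FACTORY);
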

> On May 17, 2025, at 10:21 AM, Andy Seaborne <a...@apache.org> wrote:
> 
> Jim,
> 
> If you want to extend the execution of SPARQL, then look at OpExecutor. This 
> is the place where SPARQL algebra becomes a stream of results.
> 
> TDB2 extends this for its join support - TDB execution works on internal 
> 64-bit ids, not the full RDF term representation.
> 
> TDB2 also delays turning ids back into RDF terms. For an operation like 
> "count", all that is needed is the row (BindingTDB), not the values of each 
> cell of the row.
> 
>    Andy
> 
> On 16/05/2025 21:41, Jim Balhoff wrote:
>> Hi,
>> I would like to provide a SPARQL endpoint over a virtual dataset. For the 
>> IRIs in the dataset, I can compute anything known about them based on the 
>> IRI itself. So rather than materializing the whole thing, I want to just 
>> respond to triple pattern queries and generate the data on the fly. I can 
>> see how I might implement the Model or Graph interfaces to support this, but 
>> I am wondering whether there is another API that would be more efficient 
>> for backing a SPARQL endpoint. For example, I could calculate counts for 
>> particular triple patterns without iterating through all the results. Or, as 
>> another example, I could imagine implementing certain joins involving two 
>> triple patterns more efficiently than by generating the data for both.
>> Thank you,
>> Jim
> 
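
For context, here is the minimal sketch of the Graph-based baseline I had in mind when asking, using Jena's GraphBase. The exact signature of graphBaseFind has varied across Jena versions, and generateFor is a hypothetical placeholder for the on-the-fly computation, so this is only a sketch of the idea.

    import java.util.stream.Stream;

    import org.apache.jena.graph.Node;
    import org.apache.jena.graph.Triple;
    import org.apache.jena.graph.impl.GraphBase;
    import org.apache.jena.util.iterator.ExtendedIterator;
    import org.apache.jena.util.iterator.WrappedIterator;

    // A read-only graph whose triples are computed on demand from the IRIs
    // in the requested pattern, rather than loaded from storage.
    public class VirtualGraph extends GraphBase {

        @Override
        protected ExtendedIterator<Triple> graphBaseFind(Triple pattern) {
            Node s = pattern.getSubject();
            Node p = pattern.getPredicate();
            Node o = pattern.getObject();
            // Generate only the triples matching this pattern.
            Stream<Triple> matches = generateFor(s, p, o);
            return WrappedIterator.create(matches.iterator());
        }

        // Hypothetical placeholder for the IRI-driven computation.
        private Stream<Triple> generateFor(Node s, Node p, Node o) {
            return Stream.empty();
        }
    }

Pattern-specific counts and multi-pattern joins (the parts I would most like to optimize) do not have an obvious hook at the Graph level, which is why OpExecutor sounds like the right place to look.
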
