Bob,

Have you tried passing /dev/stdin as the argument for --data? E.g.:
arq --query=example.rq --data=/dev/stdin
That should make arq read stdin until it hits EOF (Ctrl+D when typing
interactively).

This should let you pipe from a file or wherever, e.g.:
cat people.n3 | arq --query=example.rq --data=/dev/stdin
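
In principle that also covers the whole chain you describe below. A
rough, untested sketch (the endpoint URL and the clean.rq/final.rq
query files are placeholders; it assumes the first arq emits Turtle
for the CONSTRUCT, which the second arq can then read):

curl -s -H 'Accept: text/turtle' 'http://example.org/sparql?query=...' \
  | arq --query=clean.rq --data=/dev/stdin \
  | arq --query=final.rq --data=/dev/stdin

One wrinkle: arq tends to guess the serialization of --data from the
file extension, and /dev/stdin doesn't have one, so you may need to
normalize the curl output with riot first for that step to parse.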

If you need to pipe from multiple sources, such as when you need
multiple --data args, you could create a set of named pipes (FIFOs)
and have other processes feed data into them, as in the sketch below.
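
Roughly, and untested (the file names are just examples; giving the
pipes an .n3 extension should let arq guess the serialization):

mkfifo part1.n3 part2.n3
cat people.n3 > part1.n3 &
cat places.n3 > part2.n3 &
arq --query=example.rq --data=part1.n3 --data=part2.n3
rm part1.n3 part2.n3

The writers are backgrounded with & because each one blocks until arq
opens its pipe for reading.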


Regards,
Colin A. Gross


On Sat, Feb 27, 2021 at 9:41 AM Bob DuCharme <[email protected]> wrote:

> This is just an idea. I like how Jena's riot utility accepts data from
> stdin as long as you provide a --syntax parameter to tell it what
> serialization the stdin triples are. When I was at TopQuadrant I liked
> SPARQLMotion, their proprietary system for pipelining RDF through
> various steps to create an automated workflow. I wrote about doing
> something similar with Python in "Pipelining SPARQL queries in memory
> with the rdflib Python library" at
> http://www.bobdc.com/blog/pipelining-sparql-queries-in-m/ .
>
> I was thinking that if arq could accept triples via stdin like riot can,
> I could use curl to pull triples from an endpoint, pipe them to arq
> running a CONSTRUCT query that does some cleanup transformations, and
> then pipe the output of that to arq to run a SELECT query that pulls
> what I ultimately want, all on one command from the command line. (I see
> that arq already has a --syntax switch that is about the query itself,
> so it would need a different switch to specify the serialization of
> input data if it's coming from stdin.)
>
> Of course, I could do all this now from a shell script in which some
> lines save output to temporary files and later lines read those, but
> supporting stdin would add some flexibility. riot's ability to do this
> is inspiring!
>
> Thanks,
>
> Bob
>