reasoner in default, data in other graph - no inferences?

2017-12-11 Thread Andrew U. Frank
I am trying to use the OWL reasoner in Fuseki (via the browser UI) and have 
followed instructions on the web. I can make a reasoner work if the 
reasoner and the data are in the same default graph. If I have the data 
in a different graph (I tend to separate my data into various graphs - 
perhaps this is not a good idea?) I get no reasoning.


I wish I could, at least, include one graph containing data in the 
reasoning. How can I achieve this? Does the reasoner only work on 
the data in the default graph?


I would appreciate any help!

andrew

My TDB assembler configuration is:

@prefix :       <#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:     <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .

:service_tdb_all  a   fuseki:Service ;
    fuseki:dataset    :dataset ;
    fuseki:name   "animals" ;
    fuseki:serviceQuery   "query" , "sparql" ;
    fuseki:serviceReadGraphStore  "get" ;
    fuseki:serviceReadWriteGraphStore "data" ;
    fuseki:serviceUpdate  "update" ;
    fuseki:serviceUpload  "upload" .

:dataset a ja:RDFDataset ;
    ja:defaultGraph   <#model_inf> ;
 .

<#model_inf> a ja:InfModel ;
 ja:baseModel <#graph> ;
 ja:reasoner [
 ja:reasonerURL 
 ] .

<#graph> rdf:type tdb:GraphTDB ;
  tdb:dataset :tdb_dataset_readwrite .

:tdb_dataset_readwrite
    a tdb:DatasetTDB ;
    tdb:location  "/home/frank/corpusLocal/animalsTest"
    .
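As an aside, the same idea can be sketched in plain Jena code, outside Fuseki: an inference model reasons over whatever base model it is given, and that base model can be a named graph in the TDB dataset rather than the default graph. In this sketch the graph name and the choice of the built-in OWL rule reasoner are assumptions, not taken from the config above.

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.tdb.TDBFactory;

public class NamedGraphReasoning {
    public static void main(String[] args) {
        Dataset dataset = TDBFactory.createDataset("/home/frank/corpusLocal/animalsTest");
        dataset.begin(ReadWrite.READ);
        try {
            // The base model is a *named* graph, not the default graph (graph name is hypothetical).
            Model data = dataset.getNamedModel("http://example.org/animals");
            Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
            InfModel inf = ModelFactory.createInfModel(reasoner, data);
            // Both asserted and inferred statements are visible here.
            inf.listStatements().forEachRemaining(System.out::println);
        } finally {
            dataset.end();
        }
    }
}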


--
em.o.Univ.Prof. Dr. sc.techn. Dr. h.c. Andrew U. Frank
 +43 1 58801 12710 direct
Geoinformation, TU Wien  +43 1 58801 12700 office
Gusshausstr. 27-29   +43 1 55801 12799 fax
1040 Wien Austria    +43 676 419 25 72 mobile



Re: Report on loading wikidata

2017-12-11 Thread ajs6f
TDB1: https://jena.apache.org/documentation/tdb/architecture.html

Not sure if we have anything really good up right now for TDB2 (Andy?), but 
keep in mind that it is using MVCC, so the internals are seriously different.

ajs6f

> On Dec 11, 2017, at 1:33 PM, Laura Morales  wrote:
> 
> Did you run your Threadripper test using tdbloader, tdbloader2, or 
> tdb2.tdbloader?
> 
> @Andy where can I find a description of TDB1/2 binary format (how stuff is 
> stored in the files)?
>  
>  
> 
> Sent: Monday, December 11, 2017 at 11:31 AM
> From: "Dick Murray" 
> To: users@jena.apache.org
> Subject: Re: Report on loading wikidata
> Inline...
> 
> On 10 December 2017 at 23:03, Laura Morales  wrote:
> 
>> Thank you a lot Dick! Is this test for tdbloader, tdbloader2, or
>> tdb2.tdbloader?
>> 
>>> 32GB DDR4 quad channel
>> 
>> 2133 or higher?
>> 
> 
> 2133
> 
> 
>>> 3 x M.2 Samsung 960 EVO
>> 
>> Are these PCI-e disks? Or SATA? Also, what size and configuration?
> 
> 
> PCIe Turbo
> 
> 
>>> Is it possible to split the index files into separate folders?
>>> Or sym link the files, if I run the data phase, sym link, then run the
>> index phase?
>> 
>> What would you gain from this?
>> 
> 
> n index files need to be written, so split the load across multiple devices,
> be that cores, controllers, or storage. Potentially use a fast/expensive device
> to perform the load and then copy the files over to a production-grade device.
> The load device would need no redundancy (who cares if it throws a drive?),
> while production devices are redundant to meet a five-nines requirement.
> 
> 
>> 
>>> 172K/sec 3h45m for truthy.
>> 
>> It still feels slow considering that you throw such a powerful machine at
>> it, but it's very interesting nonetheless! What I think after these tests
>> is that the larger impact here comes from the M.2 disks
> 
> 
> It's also got 2 x SATAIII 6G drives, and the load time doesn't increase by
> much using these. There's a fundamental limit at which degradation occurs,
> as eventually stuff has to be swapped or committed, which then cascades into
> stalls. In my ex-DBA days, bulk loads always involved dropping or disabling
> indexes, running overnight while users were asleep, building indexes, updating
> stats, and presenting the DB in TP mode to make users happy! Things have moved
> on, but the same problems exist.
> 
> 
>> , and perhaps to a smaller extent from the DDR4 modules. When I tested with a
>> xeon+ddr3-1600, it didn't seem to make any difference. It would be
>> interesting to test with a more "mid-range setup" (iCore/xeon + DDR3) and
>> M.2 disks. Is this something that you can try as well?
>> 
> 
> IMHO it's not; our SLA equates to 50K quads/sec, or 180M quads an hour, so
> anything over this is a bonus. But we don't work on getting 500M quads into a
> store at 150K/sec, because this will eventually hit a ceiling. We work on
> getting 500M quads into stores concurrently at 75K/sec. Production
> environments are a completely different beast from having fun with a test
> setup.
> 
> Consider the simplified steps involved in getting a single quad into a
> store (please correct me Andy);
> 
> Read quad from source.
> Verify GSPO lexical and type.
> Check GSPO for uniqueness (read and compare) possibly x4 write to node->id
> lookup.
> Write indexes.
> Repeat n times.
> 
> Understand binary format and tweak appropriately for tdbloader2 ;-)
> 
> Broadly speaking, you can affect the overall time and the elapsed time; we
> refer to this as the fast-or-clever problem. Simplistically, reduce the
> overall time by loading more per second, and reduce the elapsed time by loading
> more concurrently. I prefer going after the elapsed time with the
> divide-and-conquer approach because it yields more scalable results. This is why
> we run multiple stores (not just TDB) and query over them. This in itself
> is a trade-off, because we need to use DISTINCT when merging streams, which can
> be RAM-intensive. And we're really tight on the number of quads you can
> return! :-)



Re: Report on loading wikidata

2017-12-11 Thread Laura Morales
Did you run your Threadripper test using tdbloader, tdbloader2, or 
tdb2.tdbloader?

@Andy where can I find a description of TDB1/2 binary format (how stuff is 
stored in the files)?
 
 

Sent: Monday, December 11, 2017 at 11:31 AM
From: "Dick Murray" 
To: users@jena.apache.org
Subject: Re: Report on loading wikidata
Inline...

On 10 December 2017 at 23:03, Laura Morales  wrote:

> Thank you a lot Dick! Is this test for tdbloader, tdbloader2, or
> tdb2.tdbloader?
>
> > 32GB DDR4 quad channel
>
> 2133 or higher?
>

2133


> > 3 x M.2 Samsung 960 EVO
>
> Are these PCI-e disks? Or SATA? Also, what size and configuration?


PCIe Turbo


> > Is it possible to split the index files into separate folders?
> > Or sym link the files, if I run the data phase, sym link, then run the
> index phase?
>
> What would you gain from this?
>

n index files need to be written, so split the load across multiple devices,
be that cores, controllers, or storage. Potentially use a fast/expensive device
to perform the load and then copy the files over to a production-grade device.
The load device would need no redundancy (who cares if it throws a drive?),
while production devices are redundant to meet a five-nines requirement.


>
> > 172K/sec 3h45m for truthy.
>
> It still feels slow considering that you throw such a powerful machine at
> it, but it's very interesting nonetheless! What I think after these tests
> is that the larger impact here comes from the M.2 disks


It's also got 2 x SATAIII 6G drives, and the load time doesn't increase by
much using these. There's a fundamental limit at which degradation occurs,
as eventually stuff has to be swapped or committed, which then cascades into
stalls. In my ex-DBA days, bulk loads always involved dropping or disabling
indexes, running overnight while users were asleep, building indexes, updating
stats, and presenting the DB in TP mode to make users happy! Things have moved
on, but the same problems exist.


> , and perhaps to a smaller extent from the DDR4 modules. When I tested with a
> xeon+ddr3-1600, it didn't seem to make any difference. It would be
> interesting to test with a more "mid-range setup" (iCore/xeon + DDR3) and
> M.2 disks. Is this something that you can try as well?
>

IMHO it's not; our SLA equates to 50K quads/sec, or 180M quads an hour, so
anything over this is a bonus. But we don't work on getting 500M quads into a
store at 150K/sec, because this will eventually hit a ceiling. We work on
getting 500M quads into stores concurrently at 75K/sec. Production
environments are a completely different beast from having fun with a test
setup.

Consider the simplified steps involved in getting a single quad into a
store (please correct me Andy);

Read quad from source.
Verify GSPO lexical and type.
Check GSPO for uniqueness (read and compare) possibly x4 write to node->id
lookup.
Write indexes.
Repeat n times.

Understand binary format and tweak appropriately for tdbloader2 ;-)

Broadly speaking, you can affect the overall time and the elapsed time; we
refer to this as the fast-or-clever problem. Simplistically, reduce the
overall time by loading more per second, and reduce the elapsed time by loading
more concurrently. I prefer going after the elapsed time with the
divide-and-conquer approach because it yields more scalable results. This is why
we run multiple stores (not just TDB) and query over them. This in itself
is a trade-off, because we need to use DISTINCT when merging streams, which can
be RAM-intensive. And we're really tight on the number of quads you can
return! :-)


Re: Report on loading wikidata

2017-12-11 Thread Andy Seaborne

>> "mid-range setup" (iCore/xeon + DDR3)

The distinction here is between app-server-class machines and database-server-class 
machines.


App servers typically have less RAM, less I/O bandwidth, less disk 
optimization, and also may have to share hardware. Any virtualization 
matters: some virtualization slows the I/O; some, like 
para-virtualization, does not.


I've come across this a couple of times now: organisations not running 
a graph store on a database-server-class machine. Graph stores are 
databases.


On 11/12/17 10:31, Dick Murray wrote:

Consider the simplified steps involved in getting a single quad into a
store (please correct me Andy);


That's right.



Read quad from source.
Verify GSPO lexical and type.


For each slot, get the NodeId, creating it, and indexing it, if needed.


Check GSPO for uniqueness (read and compare) possibly x4 write to node->id
lookup.


Doing the indexes at the same time is bad.


Write indexes.


Writing is cached.


Repeat n times.

Understand binary format and tweak appropriately for tdbloader2 ;-)


Re: Report on loading wikidata

2017-12-11 Thread Andy Seaborne

This is for the large amount of temporary space that tdbloader2 uses?

I got "latest-all" to load but I had to do some things with tdbloader2 
to work with a compresses data-triples.tmp.gz and also have sort write 
comprssed temporary files (I messed up a bit and set the gzip 
compression too high so it slowed things down).


There are some small problems with tdbloader2 with complex --sort-args 
(it only handles a single arg/value correctly).  My main trick was to 
put in a wrapper script for "sort" that had the required settings built in. I 
wanted to set --compress, -T, and the buffer size.


On 10/12/17 21:18, Dick Murray wrote:

Ryzen 1920X 3.5GHz, 32GB DDR4 quad channel, 3 x M.2 Samsung 960 EVO,
172K/sec 3h45m for truthy.

Is it possible to split the index files into separate folders?


Not built-in.  Symbolic links will work.

I'm keen on symbolic links here because it would be hard for built-in 
support to keep all cases covered.




Or sym link the files, if I run the data phase, sym link, then run the
index phase?


Symbolic links will work.

"sort" can be configured to use a temporary folder as well.

The only place symbolic links will not work is for data-triples.tmp. It 
must not exist at all - we ought to change that to make it OK to have a 
zero-length file in place so it can be redirected ahead of time.
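The sym-link approach can be scripted in a few lines. Below is a minimal sketch using java.nio.file that pre-creates links so some of the index files land on a second device before the index phase runs; the paths and the particular index file names are illustrative, and it assumes those files do not yet exist in the database directory.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LinkIndexFiles {
    public static void main(String[] args) throws IOException {
        Path dbDir = Paths.get("/data/DB");       // where tdbloader2 is building the database (hypothetical)
        Path fast  = Paths.get("/mnt/nvme1/DB");  // second device to take some of the indexes (hypothetical)
        Files.createDirectories(fast);
        for (String idx : new String[] { "SPO.dat", "SPO.idn", "POS.dat", "POS.idn" }) {
            // A dangling link is fine: the target file is created on the second
            // device when the index phase first writes through the link.
            Files.createSymbolicLink(dbDir.resolve(idx), fast.resolve(idx));
        }
    }
}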


Andy



Point me in the right direction and I'll extend the TDB file open code.

Dick


On 7 Dec 2017 22:21, "Andy Seaborne"  wrote:



On 07/12/17 19:01, Laura Morales wrote:


Thank you a lot Andy, very informative (special thanks for specifying the
hardware).
For anybody reading this, I'd like to highlight the fact that the data
source is "latest-truthy" and not "latest-all".
 From what I understand, truthy leaves out a lot of data (50% ??) and "all"
is more than 4 billion triples.



4,787,194,669 Triples

Dick reported figures for truthy as well.

I used a *16G* machine, and it is a portable with all its memory
architecture tradeoffs.

"all" is running ATM - it will be much slower due to RAM needs of
tdbloader2 for the data phase.  Not sure the figures will mean anything for
you.

I'd need a machine with (guess) 32G RAM which is still a small server these
days.

(A similar tree builder technique could be applied to the node index and
reduce the max RAM needs but - hey, ho - that's free software for you.)

 Andy



Re: Report on loading wikidata

2017-12-11 Thread Dick Murray
Inline...

On 10 December 2017 at 23:03, Laura Morales  wrote:

> Thank you a lot Dick! Is this test for tdbloader, tdbloader2, or
> tdb2.tdbloader?
>
> > 32GB DDR4 quad channel
>
> 2133 or higher?
>

2133


> > 3 x M.2 Samsung 960 EVO
>
> Are these PCI-e disks? Or SATA? Also, what size and configuration?


PCIe Turbo


> > Is it possible to split the index files into separate folders?
> > Or sym link the files, if I run the data phase, sym link, then run the
> index phase?
>
> What would you gain from this?
>

n index files need to be written, so split the load across multiple devices,
be that cores, controllers, or storage. Potentially use a fast/expensive device
to perform the load and then copy the files over to a production-grade device.
The load device would need no redundancy (who cares if it throws a drive?),
while production devices are redundant to meet a five-nines requirement.


>
> > 172K/sec 3h45m for truthy.
>
> It still feels slow considering that you throw such a powerful machine at
> it, but it's very interesting nonetheless! What I think after these tests
> is that the larger impact here comes from the M.2 disks


It's also got 2 x SATAIII 6G drives, and the load time doesn't increase by
much using these. There's a fundamental limit at which degradation occurs,
as eventually stuff has to be swapped or committed, which then cascades into
stalls. In my ex-DBA days, bulk loads always involved dropping or disabling
indexes, running overnight while users were asleep, building indexes, updating
stats, and presenting the DB in TP mode to make users happy! Things have moved
on, but the same problems exist.


> , and perhaps to a smaller extent from the DDR4 modules. When I tested with a
> xeon+ddr3-1600, it didn't seem to make any difference. It would be
> interesting to test with a more "mid-range setup" (iCore/xeon + DDR3) and
> M.2 disks. Is this something that you can try as well?
>

IMHO it's not; our SLA equates to 50K quads/sec, or 180M quads an hour, so
anything over this is a bonus. But we don't work on getting 500M quads into a
store at 150K/sec, because this will eventually hit a ceiling. We work on
getting 500M quads into stores concurrently at 75K/sec. Production
environments are a completely different beast from having fun with a test
setup.

Consider the simplified steps involved in getting a single quad into a
store (please correct me Andy);

Read quad from source.
Verify GSPO lexical and type.
Check GSPO for uniqueness (read and compare) possibly x4 write to node->id
lookup.
Write indexes.
Repeat n times.

Understand binary format and tweak appropriately for tdbloader2 ;-)

Broadly speaking, you can affect the overall time and the elapsed time; we
refer to this as the fast-or-clever problem. Simplistically, reduce the
overall time by loading more per second, and reduce the elapsed time by loading
more concurrently. I prefer going after the elapsed time with the
divide-and-conquer approach because it yields more scalable results. This is why
we run multiple stores (not just TDB) and query over them. This in itself
is a trade-off, because we need to use DISTINCT when merging streams, which can
be RAM-intensive. And we're really tight on the number of quads you can
return! :-)


Re: How to install Jena tools system-wide

2017-12-11 Thread Laura Morales
Thanks.
 
 

Sent: Monday, December 11, 2017 at 9:58 AM
From: "Jean-Marc Vanel" 
To: "Jena users" 
Subject: Re: How to install Jena tools system-wide
This is a classic shell issue.
Here is what I have done for many different tools over many years:

echo "export PATH=$PATH:~/apps/apache-jena-3.3.0/bin" >> ~/.bashrc
export PATH=$PATH:~/apps/apache-jena-3.3.0/bin

which riot
/home/jmv/apps/apache-jena-3.3.0/bin/riot
riot --help
riot [--time] [--check|--noCheck] [--sink] [--base=IRI] [--out=FORMAT]
[--compress] file ...
Parser control
--sink Parse but throw away output
--syntax=NAME Set syntax (otherwise syntax guessed from file
extension)
--base=URI Set the base URI (does not apply to N-triples
and N-Quads)
--check Addition checking of RDF terms
...

NOTES:

- because ~/.bashrc is changed, the next time you start a shell the
commands are there
- I don't do that currently, because my tool semantic_forms includes
Jena, and the Scala console has completion


2017-12-11 9:40 GMT+01:00 Laura Morales :

> As per title, I'd like to have the Jena tools in my PATH, but I don't know
> how. I mean the tools in "jena/bin/". I tried to copy "jena/bin/*" into
> "bin/" but they're not binaries, they are scripts that need some library
> that I think is available only in my "jena/" directory.
>



--
Jean-Marc Vanel
http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me#subject

Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui


Avoid exception In the middle of an alloc-write

2017-12-11 Thread George News
Hi,

I'm facing the exception included below. I guess this is because I'm not 
properly opening a transaction, or something along those lines.

Let me explain the setup a bit, to help work out whether this is the problem:
- I have multiple graphs, which I merge using MultiUnion.
- I generate the MultiUnion in one transaction, but the use of the joined graph 
is done in another transaction.
- I use a single static final Dataset from 
TDBFactory.createDataset(TRIPLE_STORE_PATH);
- Read and write operations are done in different threads, so maybe we have 
started a read over the union and, in parallel, are writing to one of the graphs 
included in the union.
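A minimal sketch of what a fully transactional version of the setup above might look like (class, method, and path names are illustrative): the key points are that the MultiUnion is built and consumed inside a single READ transaction, and that writers use their own WRITE transactions, so no TDB graph is ever touched outside a transaction.

import org.apache.jena.graph.compose.MultiUnion;
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

public class UnionTransactionsSketch {
    static final Dataset DATASET = TDBFactory.createDataset("/path/to/tdb"); // hypothetical location

    // Reader thread: build the union and iterate it in the same READ transaction.
    static void readUnion(String... graphUris) {
        DATASET.begin(ReadWrite.READ);
        try {
            MultiUnion union = new MultiUnion();
            for (String uri : graphUris) {
                union.addGraph(DATASET.getNamedModel(uri).getGraph());
            }
            Model unionModel = ModelFactory.createModelForGraph(union);
            unionModel.listStatements().forEachRemaining(System.out::println);
        } finally {
            DATASET.end();
        }
    }

    // Writer thread: mutate a member graph in its own WRITE transaction.
    static void write(String graphUri, Model newData) {
        DATASET.begin(ReadWrite.WRITE);
        try {
            DATASET.getNamedModel(graphUri).add(newData);
            DATASET.commit();
        } finally {
            DATASET.end();
        }
    }
}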


Any hint is welcome.

Regards,
Jorge


org.apache.jena.tdb.base.file.FileException: In the middle of an alloc-write
at org.apache.jena.tdb.base.objectfile.ObjectFileStorage.read(ObjectFileStorage.java:311) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.base.objectfile.ObjectFileWrapper.read(ObjectFileWrapper.java:57) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:78) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:50) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.lib.TupleLib.quad(TupleLib.java:129) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.lib.TupleLib.quad(TupleLib.java:123) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.tdb.lib.TupleLib.lambda$convertToQuads$3(TupleLib.java:59) ~[jena-tdb-3.5.0.jar:3.5.0]
at org.apache.jena.atlas.iterator.Iter$2.next(Iter.java:270) ~[jena-base-3.5.0.jar:3.5.0]
at org.apache.jena.atlas.iterator.Iter$2.next(Iter.java:270) ~[jena-base-3.5.0.jar:3.5.0]
at org.apache.jena.atlas.iterator.Iter.next(Iter.java:875) ~[jena-base-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.WrappedIterator.next(WrappedIterator.java:94) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.WrappedIterator.next(WrappedIterator.java:94) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.FilterIterator.hasNext(FilterIterator.java:56) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.NiceIterator$1.hasNext(NiceIterator.java:105) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.graph.compose.CompositionBase$1.hasNext(CompositionBase.java:94) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.NiceIterator$1.hasNext(NiceIterator.java:105) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.util.iterator.WrappedIterator.hasNext(WrappedIterator.java:90) ~[jena-core-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterTriplePattern$TripleMapper.hasNextBinding(QueryIterTriplePattern.java:135) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:101) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:65) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:101) ~[jena-arq-3.5.0.jar:3.5.0]
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:65) ~[jena-arq-3.5.0.jar:3.5.0]
at 

Re: How to install Jena tools system-wide

2017-12-11 Thread Jean-Marc Vanel
This is a classic shell issue.
Here is what I have done for many different tools over many years:

 echo "export PATH=$PATH:~/apps/apache-jena-3.3.0/bin"  >> ~/.bashrc
export PATH=$PATH:~/apps/apache-jena-3.3.0/bin

which riot
/home/jmv/apps/apache-jena-3.3.0/bin/riot
riot --help
riot [--time] [--check|--noCheck] [--sink] [--base=IRI] [--out=FORMAT]
[--compress] file ...
  Parser control
  --sink Parse but throw away output
  --syntax=NAME  Set syntax (otherwise syntax guessed from file
extension)
  --base=URI Set the base URI (does not apply to N-triples
and N-Quads)
  --check        Addition checking of RDF terms
...

NOTES:

   - because ~/.bashrc is changed, the next time you start a shell the
   commands are there
   - I don't do that currently, because my tool semantic_forms includes
   Jena, and the Scala console has completion


2017-12-11 9:40 GMT+01:00 Laura Morales :

> As per title, I'd like to have the Jena tools in my PATH, but I don't know
> how. I mean the tools in "jena/bin/". I tried to copy "jena/bin/*" into
> "bin/" but they're not binaries, they are scripts that need some library
> that I think is available only in my "jena/" directory.
>



-- 
Jean-Marc Vanel
http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me#subject

Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui


How to install Jena tools system-wide

2017-12-11 Thread Laura Morales
As per title, I'd like to have the Jena tools in my PATH, but I don't know how. 
I mean the tools in "jena/bin/". I tried to copy "jena/bin/*" into "bin/" but 
they're not binaries, they are scripts that need some library that I think is 
available only in my "jena/" directory.