Hi,

It will depend on usage patterns. 2 × 500 million triples isn't unreasonable, but validating with your expected usage is essential.

The critical factors are the usage patterns and the hardware available. The number of queries, query complexity, and number of updates all matter. Plenty of RAM helps (which is true for any database), as do SSDs if you do a lot of updates or need fast startup from cold.

Multiple requests, whether to the same service or to different services, compete for the same machine resources. Fuseki runs requests independently and in parallel, and there are per-database transactions supporting multiple, truly parallel readers.
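For the "many services under one server" option, a single Fuseki configuration file can declare several services, each backed by its own TDB database. A minimal sketch (the service names "ds1"/"ds2" and the directory paths "DB1"/"DB2" are illustrative, not from your setup):

```ttl
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# First data service, at /ds1, backed by the TDB database in DB1/
<#service1> rdf:type fuseki:Service ;
    fuseki:name         "ds1" ;
    fuseki:serviceQuery "sparql" ;
    fuseki:dataset      <#dataset1> .

<#dataset1> rdf:type tdb:DatasetTDB ;
    tdb:location "DB1" .

# Second data service, at /ds2, backed by a separate TDB database in DB2/
<#service2> rdf:type fuseki:Service ;
    fuseki:name         "ds2" ;
    fuseki:serviceQuery "sparql" ;
    fuseki:dataset      <#dataset2> .

<#dataset2> rdf:type tdb:DatasetTDB ;
    tdb:location "DB2" .
```

Because each service has its own database, each gets its own transactions; they still share the server's heap, CPU, and disk, which is the resource competition mentioned above.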

    Andy

On 18/03/16 09:35, Alexandra Kokkinaki wrote:
Hi,

after researching on TDB performance with Big Data, I would still like to
know:
We have one Fuseki server exposing 2 SPARQL endpoints (2 million triples
each) as data services. We are planning to add one more, but with big
data: 500 million triples.

    - For big data, is it better to use many installations of the Fuseki
    server, or
    - many data services under the same Fuseki server?


Could Fuseki cope with two or more services with more than 500 million
triples each?

How does Fuseki cope when it has to serve concurrent queries to the
different data services?
Many thanks,

Alexandra

