I should do a comparison of a Fuseki-based application with others that use relational or proprietary DBs. I use Fuseki with TDB, stored on an SSD or a hard disk.

Can I simplify the measurements by putting pieces of the dataset into different graphs and then including more or fewer of these graphs per measurement? Say I have 5 named graphs, each with 10 million triples: do queries over 2, 3, 4, and 5 of these graphs give the same (or very similar) results as when I load 20, 30, 40, and 50 million triples into a single named graph?
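To make the idea concrete, a benchmark query restricted to a subset of the graphs might look like the following sketch (the graph IRIs are hypothetical examples; the dataset clauses select which named graphs are visible to the query):

```sparql
# Count triples over only the first two of the five named graphs.
# Adding FROM NAMED clauses widens the benchmark to 3, 4, or 5 graphs.
SELECT (COUNT(*) AS ?n)
FROM NAMED <http://example.org/graph1>
FROM NAMED <http://example.org/graph2>
WHERE {
  GRAPH ?g { ?s ?p ?o }
}
```

Note that this is not guaranteed to cost the same as one merged graph: the union over several GRAPH blocks may plan and iterate differently than a scan of a single graph, which is part of what the measurement would have to check.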

Is performance different for a named graph versus the default graph?

Are there rules for which queries are linear in the amount of data in the graph? Is it correct to assume that searching for triples matching a single pattern (e.g. ?s a :X) is logarithmic in the size of the data collection?
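For example, a single-pattern query like the one below (the class IRI is a hypothetical example) can be answered from one of TDB's triple indexes (SPO/POS/OSP B+trees); my understanding is that locating the matching range is logarithmic in the total number of triples, while enumerating the matches is linear in the number of results:

```sparql
# Single triple pattern: TDB can answer this from the POS index,
# so the lookup is a range scan rather than a full dataset scan.
SELECT ?s WHERE { ?s rdf:type <http://example.org/X> }
```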

Is there a document that gives some insight into the expected performance of queries?

Thank you for any information!

Andrew



On 12/22/2017 05:16 PM, Dick Murray wrote:
How big? How many?

On 22 Dec 2017 8:37 pm, "Dimov, Stefan" <[email protected]> wrote:

Hi all,

We have a project, which we’re trying to productize and we’re facing
certain operational issues with big size files. Especially with copying and
maintaining them on the productive cloud hardware (application nodes).

Did anybody have similar issues? How did you resolve them?

I will appreciate if someone shares their experience/problems/solutions.

Regards,
Stefan


--
em.o.Univ.Prof. Dr. sc.techn. Dr. h.c. Andrew U. Frank
                                 +43 1 58801 12710 direct
Geoinformation, TU Wien          +43 1 58801 12700 office
Gusshausstr. 27-29               +43 1 55801 12799 fax
1040 Wien Austria                +43 676 419 25 72 mobil
