Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-22 Thread Hugh Williams
Hi Thomas,

Looking at your INI file, I note that in the "[Parameters]" section you have 
"MaxClientConnections = 10" set, which I would recommend raising to something 
like 100, as this param sets the number of ServerThreads available to the SQL 
database for processing requests; this covers SQL connections as well as the 
threads used by the server for internal processes/tasks, which are allocated 
on demand. The "MaxClientConnections = 10" setting in the "[HTTP Server]" 
section can be left as is, as it sizes the HTTP web server thread pool, whose 
threads are pre-allocated on server startup.
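For illustration, the relevant fragments of the INI file would then look 
roughly like this (a sketch; keep your other settings as they are, and note 
that the stock INI file names the HTTP section [HTTPServer]):

[Parameters]
; SQL server threads, allocated on demand (raised from 10)
MaxClientConnections = 100

[HTTPServer]
; HTTP web server thread pool, pre-allocated at startup (unchanged)
MaxClientConnections = 10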

Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/ 

Weblog   -- http://www.openlinksw.com/blogs/ 
LinkedIn -- http://www.linkedin.com/company/openlink-software/ 

Twitter  -- http://twitter.com/OpenLink 
Google+  -- http://plus.google.com/100570109519069333827/ 

Facebook -- http://www.facebook.com/OpenLinkSoftware 

Universal Data Access, Integration, and Management Technology Providers



> On 20 Mar 2017, at 11:02, Thomas Michaux wrote:
> 
> ...sorry, sent the INDEX information tab instead of "SPACE" info tab :
> 
> Table                          Index name                     Rows      Pages   Row pages  Blob pages  Size
> DB.DBA.RDF_OBJ                 DB.DBA.RDF_OBJ                 30443064  255445  255445     0           1833.25 MB
> DB.DBA.RDF_IRI                 DB.DBA.RDF_IRI                 59063426  206412  206412     0           1247.19 MB
> DB.DBA.RDF_IRI                 DB_DBA_RDF_IRI_UNQC_RI_ID      59063426  183541  183541     0           1282.95 MB
> DB.DBA.RDF_OBJ_RO_FLAGS_WORDS  DB.DBA.RDF_OBJ_RO_FLAGS_WORDS  8341023   179875  179875     0           1332.22 MB
> DB.DBA.RDF_OBJ                 RO_VAL                         30443064  117863  117863     0           773.81 MB
> DB.DBA.RO_START                DB.DBA.RO_START                30443064  85319   85319      0           475.31 MB
> On 20/03/2017 at 11:34, Thomas Michaux wrote:
>> 
>> 
>> On 19/03/2017 at 16:15, Hugh Williams wrote:
>>> Hi Thomas,
>> Hi,
>>> Is the loading of the dataset now complete, or is it still in progress? Your 
>>> opening statement is not clear.
>>> You should not need 40GB RAM for inserting and hosting 240 million triples, 
>>> which should require less than 10GB depending on how well they can be 
>>> compressed for storage in the database.
>> Loading is complete; we finished at 243 188 427 triples. Hosting now 
>> requires 25 GB RAM and 15 GB disk. Details:
>> 
>> void:triples 243188427 ; 
>>  void:classes 13 ; 
>>  void:entities 58523487 ; 
>>  void:distinctSubjects 58523514 ; 
>>  void:properties 32 ; 
>>  void:distinctObjects 73171603 . 
>> 
>> Total pages         1925120
>> Free pages          607377
>> Buffers             272
>> Buffers used        244554
>> Dirty buffers       3
>> Wired down buffers  0
>> Table            Index name     Touches     Reads  Read %
>> DB.DBA.RDF_QUAD  RDF_QUAD       1562356553  36371  0
>> DB.DBA.RDF_QUAD  RDF_QUAD_POGS  609423455   16989  0
>> DB.DBA.RDF_QUAD  RDF_QUAD_SP    378769255   35822  0
>> DB.DBA.RDF_QUAD  RDF_QUAD_GS    340377017   16340
>>> I assume you have set the swappiness as suggested previously ?
>> yes, done, $ sysctl vm.swappiness
>> vm.swappiness = 10
>> 
>>> When you recompiled your Virtuoso, was this done from the git stable/7 or 
>>> develop/7 branch? The latter has a number of memory consumption fixes 
>>> that are not in stable/7, so I would suggest building from develop/7.
>> will investigate.
>> 
>> The two main problems we encountered while loading were :
>> 
>> - log messages indicating "Flushing at 5.7 MB/s while application is making 
>> dirty pages at 1.7 MB/s.", which we interpreted as not enough write speed 
>> while receiving lots of JDBC INSERTs (disk issue? buffer issue? ...)
>> 
>> - high memory consumption (40GB RAM), virtuoso process never releasing 
>> memory while loading, free RAM always going down...
>> 
>>> Have you provided a copy of your INI file previously? If not, can you 
>>> provide a copy?
>> see attached (FYI QueryLog= was not active while loading)
>>> Do ensure the following params are set to 1 in order to clean up unused 
>>> threads/resources and reduce memory consumption of the Virtuoso server, 
>>> which can otherwise be construed as memory leaks:
>>> 
>>> ThreadCleanupInterval= 1
>>> ResourcesCleanupInterval = 1
>> we already have these settings in place.
>> 
>> Thanks for your help,
>> 
>> Thomas
>> 
>> if needed, we model the ORCID 2016 dataset using:
>> c1                                       c2
>> http://xmlns.com/foaf/0.1/Person         28021451
>> http://purl.org/ontology/bibo/Document   14283692
>> 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-20 Thread Thomas Michaux

...sorry, sent the INDEX information tab instead of "SPACE" info tab :

Table                          Index name                     Rows      Pages   Row pages  Blob pages  Size
DB.DBA.RDF_OBJ                 DB.DBA.RDF_OBJ                 30443064  255445  255445     0           1833.25 MB
DB.DBA.RDF_IRI                 DB.DBA.RDF_IRI                 59063426  206412  206412     0           1247.19 MB
DB.DBA.RDF_IRI                 DB_DBA_RDF_IRI_UNQC_RI_ID      59063426  183541  183541     0           1282.95 MB
DB.DBA.RDF_OBJ_RO_FLAGS_WORDS  DB.DBA.RDF_OBJ_RO_FLAGS_WORDS  8341023   179875  179875     0           1332.22 MB
DB.DBA.RDF_OBJ                 RO_VAL                         30443064  117863  117863     0           773.81 MB
DB.DBA.RO_START                DB.DBA.RO_START                30443064  85319   85319      0           475.31 MB


On 20/03/2017 at 11:34, Thomas Michaux wrote:




On 19/03/2017 at 16:15, Hugh Williams wrote:

Hi Thomas,

Hi,

Is the loading of the dataset now complete, or is it still in progress? Your 
opening statement is not clear.
You should not need 40GB RAM for inserting and hosting 240 million triples, 
which should require less than 10GB depending on how well they can be 
compressed for storage in the database.

Loading is complete; we finished at 243 188 427 triples. Hosting 
now requires 25 GB RAM and 15 GB disk. Details:


void:triples 243188427 ;
 void:classes 13 ;
 void:entities 58523487 ;
 void:distinctSubjects 58523514 ;
 void:properties 32 ;
 void:distinctObjects 73171603 .

Total pages         1925120
Free pages          607377
Buffers             272
Buffers used        244554
Dirty buffers       3
Wired down buffers  0

Table            Index name     Touches     Reads  Read %
DB.DBA.RDF_QUAD  RDF_QUAD       1562356553  36371  0
DB.DBA.RDF_QUAD  RDF_QUAD_POGS  609423455   16989  0
DB.DBA.RDF_QUAD  RDF_QUAD_SP    378769255   35822  0
DB.DBA.RDF_QUAD  RDF_QUAD_GS    340377017   16340


I assume you have set the swappiness as suggested previously ?

yes, done, $ sysctl vm.swappiness
vm.swappiness = 10


When you recompiled your Virtuoso, was this done from the git stable/7 or 
develop/7 branch? The latter has a number of memory consumption fixes that 
are not in stable/7, so I would suggest building from develop/7.

will investigate.

The two main problems we encountered while loading were :

- log messages indicating "Flushing at 5.7 MB/s while application is 
making dirty pages at 1.7 MB/s.", which we interpreted as not enough 
write speed while receiving lots of JDBC INSERTs (disk issue? buffer 
issue? ...)


- high memory consumption (40GB RAM), virtuoso process never releasing 
memory while loading, free RAM always going down...



Have you provided a copy of your INI file previously? If not, can you provide a 
copy?

see attached (FYI QueryLog= was not active while loading)

Do ensure the following params are set to 1 in order to clean up unused 
threads/resources and reduce memory consumption of the Virtuoso server, which 
can otherwise be construed as memory leaks:

ThreadCleanupInterval= 1
ResourcesCleanupInterval = 1

we already have these settings in place.

Thanks for your help,

Thomas

if needed, we model the ORCID 2016 dataset using:
c1                                                                c2
http://xmlns.com/foaf/0.1/Person                                  28021451
http://purl.org/ontology/bibo/Document                            14283692
http://purl.org/ontology/bibo/Journal                             9104659
http://xmlns.com/foaf/0.1/PersonalProfileDocument                 2527333
http://xmlns.com/foaf/0.1/Article                                 974945
http://www.w3.org/ns/org#Membership                               807465
http://www.w3.org/2006/vcard/ns#Address                           807423
http://www.w3.org/ns/org#Organization                             807418
http://purl.org/ontology/bibo/Conference                          769451
http://www.w3.org/ns/org#OrganizationalUnit                       649291
http://www.w3.org/2004/02/skos/core#Concept                       371731
http://purl.org/ontology/bibo/Book                                205493
http://www.w3.org/ns/org#Role                                     168423
http://www.w3.org/1999/02/22-rdf-syntax-ns#Property               170
http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat           130
http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat  98
http://www.w3.org/2000/01/rdf-schema#Class                        56
http://www.openlinksw.com/schemas/virtrdf#QuadMapValue            8
http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapColumn  8
http://www.openlinksw.com/schemas/virtrdf#QuadMapColumn           8





Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/
Weblog   -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter  -- http://twitter.com/OpenLink
Google+  -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers




On 15 Mar 2017, at 17:08, Thomas Michaux  wrote:

Hello,

FYI, Virtuoso is still loading, but we needed to increase memory resources;
the process now uses almost 40GB of RAM:

[devel@tulipe-test2 ~]$ ./memcheck-virtuoso.sh
2017-03-15T17:54 VmSize: 41273424kB 5883

stats for the 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-20 Thread Thomas Michaux



On 19/03/2017 at 16:15, Hugh Williams wrote:

Hi Thomas,

Hi,


Is the loading of the dataset now complete, or is it still in progress? Your 
opening statement is not clear.

You should not need 40GB RAM for inserting and hosting 240 million triples, 
which should require less than 10GB depending on how well they can be 
compressed for storage in the database.

Loading is complete; we finished at 243 188 427 triples. Hosting now 
requires 25 GB RAM and 15 GB disk. Details:


void:triples 243188427 ;
 void:classes 13 ;
 void:entities 58523487 ;
 void:distinctSubjects 58523514 ;
 void:properties 32 ;
 void:distinctObjects 73171603 .

Total pages         1925120
Free pages          607377
Buffers             272
Buffers used        244554
Dirty buffers       3
Wired down buffers  0

Table            Index name     Touches     Reads  Read %
DB.DBA.RDF_QUAD  RDF_QUAD       1562356553  36371  0
DB.DBA.RDF_QUAD  RDF_QUAD_POGS  609423455   16989  0
DB.DBA.RDF_QUAD  RDF_QUAD_SP    378769255   35822  0
DB.DBA.RDF_QUAD  RDF_QUAD_GS    340377017   16340



I assume you have set the swappiness as suggested previously ?

yes, done, $ sysctl vm.swappiness
vm.swappiness = 10



When you recompiled your Virtuoso, was this done from the git stable/7 or 
develop/7 branch? The latter has a number of memory consumption fixes that 
are not in stable/7, so I would suggest building from develop/7.

will investigate.

The two main problems we encountered while loading were :

- log messages indicating "Flushing at 5.7 MB/s while application is 
making dirty pages at 1.7 MB/s.", which we interpreted as not enough 
write speed while receiving lots of JDBC INSERTs (disk issue? buffer 
issue? ...)


- high memory consumption (40GB RAM), virtuoso process never releasing 
memory while loading, free RAM always going down...




Have you provided a copy of your INI file previously? If not, can you provide a 
copy?

see attached (FYI QueryLog= was not active while loading)


Do ensure the following params are set to 1 in order to clean up unused 
threads/resources and reduce memory consumption of the Virtuoso server, which 
can otherwise be construed as memory leaks:

ThreadCleanupInterval= 1
ResourcesCleanupInterval = 1

we already have these settings in place.

Thanks for your help,

Thomas

if needed, we model the ORCID 2016 dataset using:
c1                                                                c2
http://xmlns.com/foaf/0.1/Person                                  28021451
http://purl.org/ontology/bibo/Document                            14283692
http://purl.org/ontology/bibo/Journal                             9104659
http://xmlns.com/foaf/0.1/PersonalProfileDocument                 2527333
http://xmlns.com/foaf/0.1/Article                                 974945
http://www.w3.org/ns/org#Membership                               807465
http://www.w3.org/2006/vcard/ns#Address                           807423
http://www.w3.org/ns/org#Organization                             807418
http://purl.org/ontology/bibo/Conference                          769451
http://www.w3.org/ns/org#OrganizationalUnit                       649291
http://www.w3.org/2004/02/skos/core#Concept                       371731
http://purl.org/ontology/bibo/Book                                205493
http://www.w3.org/ns/org#Role                                     168423
http://www.w3.org/1999/02/22-rdf-syntax-ns#Property               170
http://www.openlinksw.com/schemas/virtrdf#QuadMapFormat           130
http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapFormat  98
http://www.w3.org/2000/01/rdf-schema#Class                        56
http://www.openlinksw.com/schemas/virtrdf#QuadMapValue            8
http://www.openlinksw.com/schemas/virtrdf#array-of-QuadMapColumn  8
http://www.openlinksw.com/schemas/virtrdf#QuadMapColumn           8






Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/
Weblog   -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter  -- http://twitter.com/OpenLink
Google+  -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers




On 15 Mar 2017, at 17:08, Thomas Michaux  wrote:

Hello,

FYI, Virtuoso is still loading, but we needed to increase memory resources;
the process now uses almost 40GB of RAM:

[devel@tulipe-test2 ~]$ ./memcheck-virtuoso.sh
2017-03-15T17:54 VmSize: 41273424kB 5883

stats for the graph (forgot to mention, it's the only graph in the db):

239 451 028 triples


this:Dataset a void:Dataset ;
rdfs:seeAlso  ;
rdfs:label "" ;
void:sparqlEndpoint  ;
void:triples 239451028 ;
void:classes 13 ;
void:entities 57692917 ;
void:distinctSubjects 57650847 ;
void:properties 32 ;
void:distinctObjects 72219514 .

this:sameAsLinks a void:Linkset ;
void:inDataset this:Dataset ;
void:triples 997389 ;
void:linkPredicate owl:sameAs .


On 14/03/2017 at 10:05, Thomas Michaux wrote:


;
;  virtuoso.ini
;
;  Configuration file for the 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-19 Thread Hugh Williams
Hi Thomas,

Is the loading of the dataset now complete, or is it still in progress? Your 
opening statement is not clear.

You should not need 40GB RAM for inserting and hosting 240 million triples, 
which should require less than 10GB depending on how well they can be 
compressed for storage in the database.

I assume you have set the swappiness as suggested previously ?

When you recompiled your Virtuoso, was this done from the git stable/7 or 
develop/7 branch? The latter has a number of memory consumption fixes that 
are not in stable/7, so I would suggest building from develop/7.
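For reference, a build from that branch would look roughly like this (a 
sketch; adjust the configure prefix and options to your environment):

# get the sources and switch to the develop/7 branch
git clone https://github.com/openlink/virtuoso-opensource.git
cd virtuoso-opensource
git checkout develop/7
# regenerate the build files, then build and install (prefix matches the paths in your INI)
./autogen.sh
./configure --prefix=/usr/local/virtuoso-opensource
make && make install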

Have you provided a copy of your INI file previously? If not, can you provide a 
copy?

Do ensure the following params are set to 1 in order to clean up unused 
threads/resources and reduce memory consumption of the Virtuoso server, which 
can otherwise be construed as memory leaks:

ThreadCleanupInterval= 1
ResourcesCleanupInterval = 1

Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/
Weblog   -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter  -- http://twitter.com/OpenLink
Google+  -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers



> On 15 Mar 2017, at 17:08, Thomas Michaux  wrote:
> 
> Hello,
> 
> FYI, Virtuoso is still loading, but we needed to increase memory resources; 
> the process now uses almost 40GB of RAM:
> 
> [devel@tulipe-test2 ~]$ ./memcheck-virtuoso.sh
> 2017-03-15T17:54 VmSize: 41273424kB 5883
> 
> stats for the graph (forgot to mention, it's the only graph in the db):
> 
> 239 451 028 triples
> 
> 
> this:Dataset a void:Dataset ;
> rdfs:seeAlso  ;
> rdfs:label "" ;
> void:sparqlEndpoint  ;
> void:triples 239451028 ;
> void:classes 13 ;
> void:entities 57692917 ;
> void:distinctSubjects 57650847 ;
> void:properties 32 ;
> void:distinctObjects 72219514 .
> 
> this:sameAsLinks a void:Linkset ;
> void:inDataset this:Dataset ;
> void:triples 997389 ;
> void:linkPredicate owl:sameAs .
> 
> 
> On 14/03/2017 at 10:05, Thomas Michaux wrote:





Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-15 Thread Thomas Michaux
Hello,

FYI, Virtuoso is still loading, but we needed to increase memory resources; 
the process now uses almost 40GB of RAM:

[devel@tulipe-test2 ~]$ ./memcheck-virtuoso.sh
2017-03-15T17:54 VmSize: 41273424kB 5883
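(The script itself is not shown in the thread; an equivalent check, assuming 
the server binary is virtuoso-t and that the trailing number in the output is 
the server PID, could be a one-liner like this:)

#!/bin/sh
# print a timestamp, the virtual memory size and the PID of the running Virtuoso server
PID=$(pidof virtuoso-t)
echo "$(date +%Y-%m-%dT%H:%M) $(grep VmSize /proc/$PID/status | awk '{print $1, $2 $3}') $PID"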

stats for the graph (forgot to mention, it's the only graph in the db):

239 451 028 triples


this:Dataset a void:Dataset ;
  rdfs:seeAlso  ;
  rdfs:label "" ;
  void:sparqlEndpoint  ;
  void:triples 239451028 ;
  void:classes 13 ;
  void:entities 57692917 ;
  void:distinctSubjects 57650847 ;
  void:properties 32 ;
  void:distinctObjects 72219514 .

this:sameAsLinks a void:Linkset ;
  void:inDataset this:Dataset ;
  void:triples 997389 ;
  void:linkPredicate owl:sameAs .


On 14/03/2017 at 10:05, Thomas Michaux wrote:
> Hi Hugh,
>
> On 10/03/2017 at 14:01, Hugh Williams wrote:
>> Hi Thomas,
>>
>> Is the ORCID dataset the only RDF dataset in the Virtuoso RDF Quad Store 
>> currently, or are there others?
>>
>> What is the size of the ORCID dataset, i.e. the triple count?
> I gave you the wrong information because I misunderstood the process.
> Below are the correct details of our INSERT procedure from the Oracle DB:
>
> - dataset is from ORCID 2016 XML download available on this page
> https://orcid.org/content/download-file ("The file contains the public
> information associated with each user's ORCID record. Each record is
> included as a separate file in both JSON and XML. "
> https://figshare.com/articles/ORCID_Public_Data_File_2016/4134027).
>
> They are uploaded inside ORACLE as XML records.
>
> - then, in an Oracle PL/SQL procedure, we apply "on the fly" an XSLT
> stylesheet (using Oracle's efficient XMLTRANSFORM XSLT engine)
> to produce an RDF/XML file for each ORCID XML record in the Oracle table
>
> - next in the process we use Jena tools to generate, also "on the fly",
> TRIPLES from these RDF/XML results
>
> - these are the triples we're finally inserting via a JDBC "SPARQL
> INSERT DATA INTO GRAPH..." call to Virtuoso from the PL/SQL Oracle
> procedure via the Virtuoso JDBC driver (and not the Oracle JDBC driver, my
> mistake, as you guessed)
>
> - checking the release of the JDBC driver: > java -cp virtjdbc3.jar
> virtuoso.jdbc3.Driver
> OpenLink Virtuoso(TM) Driver for JDBC(TM) Version 3.x [Build 3.62]
>
> (the driver is embedded  inside ORACLE java JVM)
>
> Thanks in advance if you have suggestions.
>
> Last "statistics" on the graph size give  : 182 405 784 triples
>
>
> this:Dataset a void:Dataset ;
>rdfs:seeAlso  ;
>rdfs:label "" ;
>void:sparqlEndpoint  ;
>void:triples 182405784 ;
>void:classes 13 ;
>void:entities 43946633 ;
>void:distinctSubjects 43922470 ;
>void:properties 32 ;
>void:distinctObjects 56509541 .
>
> this:sameAsLinks a void:Linkset ;
>void:inDataset this:Dataset ;
>void:triples 759462 ;
>
>
>
>> I would definitely suggest setting swappiness to 10 to reduce swapping to 
>> disk, which should improve insert rates.
> done
>> Looking at your status() command output I see "Clients: 4177045 connects, max 
>> 3 concurrent", indicating more than 4 million SQL connections have been made 
>> to Virtuoso since it was started on 9th Mar. What is making that many 
>> connections? Is it this insertion process
> yes, it is
>> or are there other clients reading from the instance also ?
> none for the moment, instance is private
>> Apart from that the status() output looks fine, with plenty of unused 
>> buffers, so the database working set size could be increased and still fit in 
>> memory,
> I don't really understand the point about buffers, but I also noticed that 
> usage is not "maximized", because there are no other clients reading from 
> the instance, I suppose?
>> no deadlock and only one pending transaction which is one of your inserts.
>>
>> You talk about the Oracle JDBC Driver, but I still don't see its relevance, as 
>> ultimately your insertions to Virtuoso must be done via one of its client 
>> interfaces/services, i.e. either the /sparql endpoint or the Virtuoso JDBC 
>> driver I would presume, so which is it?
> my mistake, as I said driver is > java -cp virtjdbc3.jar
> virtuoso.jdbc3.Driver
> OpenLink Virtuoso(TM) Driver for JDBC(TM) Version 3.x [Build 3.62]
>> The "DEFINE sql:log-enable 2” pragma being passed in the SPARQL insert 
>> queries does set row by row auto-commit and turn off transaction logging, 
>> which is the fastest transaction mode for write operations, see:
>>
>>  http://docs.openlinksw.com/virtuoso/fn_log_enable/
> ok, thanks, a good point
>
> Thomas
>> Best Regards
>> Hugh Williams
>> Professional Services
>> OpenLink Software, Inc.  //  http://www.openlinksw.com/
>> Weblog   -- http://www.openlinksw.com/blogs/
>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>> Twitter  -- 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-14 Thread Thomas Michaux
Hi Hugh,

On 10/03/2017 at 14:01, Hugh Williams wrote:
> Hi Thomas,
>
> Is the ORCID dataset the only RDF dataset in the Virtuoso RDF Quad Store 
> currently, or are there others?
>
> What is the size of the ORCID dataset, i.e. the triple count?

I gave you the wrong information because I misunderstood the process. 
Below are the correct details of our INSERT procedure from the Oracle DB:

- dataset is from ORCID 2016 XML download available on this page 
https://orcid.org/content/download-file ("The file contains the public 
information associated with each user's ORCID record. Each record is 
included as a separate file in both JSON and XML. " 
https://figshare.com/articles/ORCID_Public_Data_File_2016/4134027).

They are uploaded inside ORACLE as XML records.

- then, in an Oracle PL/SQL procedure, we apply "on the fly" an XSLT 
stylesheet (using Oracle's efficient XMLTRANSFORM XSLT engine) 
to produce an RDF/XML file for each ORCID XML record in the Oracle table

- next in the process we use Jena tools to generate, also "on the fly", 
TRIPLES from these RDF/XML results

- these are the triples we're finally inserting via a JDBC "SPARQL 
INSERT DATA INTO GRAPH..." call to Virtuoso from the PL/SQL Oracle 
procedure via the Virtuoso JDBC driver (and not the Oracle JDBC driver, my 
mistake, as you guessed)

- checking the release of the JDBC driver: > java -cp virtjdbc3.jar 
virtuoso.jdbc3.Driver
OpenLink Virtuoso(TM) Driver for JDBC(TM) Version 3.x [Build 3.62]

(the driver is embedded inside the Oracle Java JVM)
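For reference, an insert through the Virtuoso JDBC driver from standalone Java 
looks roughly like the sketch below (hypothetical host, credentials and graph 
URI; in our case the equivalent string is built inside the PL/SQL procedure):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class VirtuosoInsertSketch {
    public static void main(String[] args) throws Exception {
        // hypothetical host/port and credentials
        Class.forName("virtuoso.jdbc3.Driver");
        String url = "jdbc:virtuoso://localhost:1111";
        try (Connection con = DriverManager.getConnection(url, "dba", "dba");
             Statement st = con.createStatement()) {
            // same shape as the PL/SQL-built query: pragma + INSERT DATA INTO GRAPH
            String q = "sparql DEFINE sql:log-enable 2 "
                     + "INSERT DATA INTO GRAPH <http://example.org/orcid> "
                     + "{ <http://example.org/s> <http://example.org/p> \"o\" . }";
            st.execute(q);
        }
    }
}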

Thanks in advance if you have suggestions.

Last "statistics" on the graph size give  : 182 405 784 triples


this:Dataset a void:Dataset ;
  rdfs:seeAlso  ;
  rdfs:label "" ;
  void:sparqlEndpoint  ;
  void:triples 182405784 ;
  void:classes 13 ;
  void:entities 43946633 ;
  void:distinctSubjects 43922470 ;
  void:properties 32 ;
  void:distinctObjects 56509541 .

this:sameAsLinks a void:Linkset ;
  void:inDataset this:Dataset ;
  void:triples 759462 ;



>
> I would definitely suggest setting swappiness to 10 to reduce swapping to 
> disk, which should improve insert rates.
done
>
> Looking at your status() command output I see "Clients: 4177045 connects, max 
> 3 concurrent", indicating more than 4 million SQL connections have been made 
> to Virtuoso since it was started on 9th Mar. What is making that many 
> connections? Is it this insertion process
yes, it is
> or are there other clients reading from the instance also ?
none for the moment, instance is private
> Apart from that the status() output looks fine, with plenty of unused 
> buffers, so the database working set size could be increased and still fit in 
> memory,
I don't really understand the point about buffers, but I also noticed that 
usage is not "maximized", because there are no other clients reading from 
the instance, I suppose?
> no deadlock and only one pending transaction which is one of your inserts.
>
> You talk about the Oracle JDBC Driver, but I still don't see its relevance, as 
> ultimately your insertions to Virtuoso must be done via one of its client 
> interfaces/services, i.e. either the /sparql endpoint or the Virtuoso JDBC 
> driver I would presume, so which is it?
my mistake, as I said driver is > java -cp virtjdbc3.jar 
virtuoso.jdbc3.Driver
OpenLink Virtuoso(TM) Driver for JDBC(TM) Version 3.x [Build 3.62]
>
> The "DEFINE sql:log-enable 2” pragma being passed in the SPARQL insert 
> queries does set row by row auto-commit and turn off transaction logging, 
> which is the fastest transaction mode for write operations, see:
>
>   http://docs.openlinksw.com/virtuoso/fn_log_enable/
ok, thanks, a good point

Thomas
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc.  //  http://www.openlinksw.com/
> Weblog   -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter  -- http://twitter.com/OpenLink
> Google+  -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
>
>
>> On 10 Mar 2017, at 10:54, Thomas Michaux  wrote:
>>
>> Hi,
>>
>> thanks Hugh, we reached 110 932 303 triples loaded from our ORCID dataset 
>> since yesterday, and still loading...
>>
>>
>>
>> The Virtuoso process uses VmSize: 32227664kB 32708 of memory, out of:
>>
>> KiB Mem : 32780296 total,   243972 free, 29985320 used,  2551004 buff/cache
>> KiB Swap:  2097148 total,  1734244 free,   362904 used.  2241196 avail Mem
>>
>> previous 4h logs :
>>
>> ...
>>
>> 06:03:28 Checkpoint started
>> 06:04:11 Checkpoint finished, new log is 
>> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310055817.trx
>> 06:28:41 Checkpoint started
>> 06:28:44 Checkpoint finished, new log is 
>> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310062412.trx
>> 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-10 Thread Hugh Williams
Hi Thomas,

Is the ORCID dataset the only RDF dataset in the Virtuoso RDF Quad Store 
currently, or are there others?

What is the size of the ORCID dataset, i.e. the triple count?

I would definitely suggest setting swappiness to 10 to reduce swapping to disk, 
which should improve insert rates.

Looking at your status() command output I see "Clients: 4177045 connects, max 3 
concurrent", indicating more than 4 million SQL connections have been made to 
Virtuoso since it was started on 9th Mar. What is making that many 
connections? Is it this insertion process, or are there other clients reading 
from the instance also? Apart from that the status() output looks fine, with 
plenty of unused buffers, so the database working set size could be increased 
and still fit in memory, no deadlock, and only one pending transaction, which 
is one of your inserts.

You talk about the Oracle JDBC Driver, but I still don't see its relevance, as 
ultimately your insertions to Virtuoso must be done via one of its client 
interfaces/services, i.e. either the /sparql endpoint or the Virtuoso JDBC 
driver I would presume, so which is it?

The "DEFINE sql:log-enable 2” pragma being passed in the SPARQL insert queries 
does set row by row auto-commit and turn off transaction logging, which is the 
fastest transaction mode for write operations, see:

http://docs.openlinksw.com/virtuoso/fn_log_enable/
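For comparison, the same effect can be set per session from isql rather than 
per query (a sketch, not taken from the thread; the graph and triple are 
placeholders):

-- row-autocommit with transaction logging off, i.e. the equivalent of sql:log-enable 2
log_enable(2);
SPARQL INSERT DATA INTO GRAPH <http://example.org/g>
  { <http://example.org/s> <http://example.org/p> "o" . };
-- restore the default mode (transaction log on, autocommit off)
log_enable(1);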

Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/
Weblog   -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter  -- http://twitter.com/OpenLink
Google+  -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers



> On 10 Mar 2017, at 10:54, Thomas Michaux  wrote:
> 
> Hi,
> 
> thanks Hugh, we reached 110 932 303 triples loaded from our ORCID dataset 
> since yesterday, and still loading...
> 
> 
> 
> The Virtuoso process uses VmSize: 32227664kB 32708 of memory, out of:
> 
> KiB Mem : 32780296 total,   243972 free, 29985320 used,  2551004 buff/cache
> KiB Swap:  2097148 total,  1734244 free,   362904 used.  2241196 avail Mem
> 
> previous 4h logs :
> 
> ...
> 
> 06:03:28 Checkpoint started
> 06:04:11 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310055817.trx
> 06:28:41 Checkpoint started
> 06:28:44 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310062412.trx
> 06:52:58 Checkpoint started
> 06:53:16 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310064844.trx
> 07:17:14 Checkpoint started
> 07:17:18 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310071317.trx
> 07:39:58 Write load high relative to disk write throughput.  Flushing at  
>  5.5 MB/s while application is making dirty pages at   1.5 MB/s. Doing a 
> second flushing pass before checkpoint
> 07:41:10 Checkpoint started
> 07:41:17 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310073719.trx
> 08:04:53 Checkpoint started
> 08:04:56 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310080117.trx
> 08:27:35 Write load high relative to disk write throughput.  Flushing at  
>  5.7 MB/s while application is making dirty pages at   1.7 MB/s. Doing a 
> second flushing pass before checkpoint
> 08:28:45 Checkpoint started
> 08:29:02 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310082457.trx
> 08:51:43 Write load high relative to disk write throughput.  Flushing at  
>  5.4 MB/s while application is making dirty pages at   1.7 MB/s. Doing a 
> second flushing pass before checkpoint
> 08:52:57 Checkpoint started
> 08:53:01 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310084902.trx
> 09:15:40 Write load high relative to disk write throughput.  Flushing at  
>  5.6 MB/s while application is making dirty pages at   1.9 MB/s. Doing a 
> second flushing pass before checkpoint
> 09:16:59 Checkpoint started
> 09:17:13 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310091301.trx
> 09:39:57 Write load high relative to disk write throughput.  Flushing at  
>  5.4 MB/s while application is making dirty pages at   1.7 MB/s. Doing a 
> second flushing pass before checkpoint
> 09:41:13 Checkpoint started
> 09:41:16 Checkpoint finished, new log is 
> /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310093714.trx
> 10:04:13 Write load high relative to disk write throughput.  Flushing at  
>  5.2 MB/s while application is making dirty pages at   

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-10 Thread Thomas Michaux

Hi,

thanks Hugh, we reached 110 932 303 triples loaded from our ORCID 
dataset since yesterday, and still loading...



The Virtuoso process uses VmSize: 32227664kB 32708 of memory, out of:

KiB Mem : 32780296 total,   243972 free, 29985320 used,  2551004 buff/cache
KiB Swap:  2097148 total,  1734244 free,   362904 used.  2241196 avail Mem

previous 4h logs :

...

06:03:28 Checkpoint started

06:04:11 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310055817.trx

06:28:41 Checkpoint started
06:28:44 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310062412.trx

06:52:58 Checkpoint started
06:53:16 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310064844.trx

07:17:14 Checkpoint started
07:17:18 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310071317.trx
07:39:58 Write load high relative to disk write throughput. Flushing 
at   5.5 MB/s while application is making dirty pages at   1.5 
MB/s. Doing a second flushing pass before checkpoint

07:41:10 Checkpoint started
07:41:17 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310073719.trx

08:04:53 Checkpoint started
08:04:56 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310080117.trx
08:27:35 Write load high relative to disk write throughput. Flushing 
at   5.7 MB/s while application is making dirty pages at   1.7 
MB/s. Doing a second flushing pass before checkpoint

08:28:45 Checkpoint started
08:29:02 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310082457.trx
08:51:43 Write load high relative to disk write throughput. Flushing 
at   5.4 MB/s while application is making dirty pages at   1.7 
MB/s. Doing a second flushing pass before checkpoint

08:52:57 Checkpoint started
08:53:01 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310084902.trx
09:15:40 Write load high relative to disk write throughput. Flushing 
at   5.6 MB/s while application is making dirty pages at   1.9 
MB/s. Doing a second flushing pass before checkpoint

09:16:59 Checkpoint started
09:17:13 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310091301.trx
09:39:57 Write load high relative to disk write throughput. Flushing 
at   5.4 MB/s while application is making dirty pages at   1.7 
MB/s. Doing a second flushing pass before checkpoint

09:41:13 Checkpoint started
09:41:16 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310093714.trx
10:04:13 Write load high relative to disk write throughput. Flushing 
at   5.2 MB/s while application is making dirty pages at   1.6 
MB/s. Doing a second flushing pass before checkpoint

10:05:38 Checkpoint started
10:05:52 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310100118.trx
10:28:52 Write load high relative to disk write throughput. Flushing 
at   5.1 MB/s while application is making dirty pages at   1.8 
MB/s. Doing a second flushing pass before checkpoint

10:30:31 Checkpoint started
10:30:34 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310102554.trx
10:53:32 Write load high relative to disk write throughput. Flushing 
at   5.2 MB/s while application is making dirty pages at   1.4 
MB/s. Doing a second flushing pass before checkpoint

10:54:43 Checkpoint started
10:55:03 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310105036.trx

11:19:29 Checkpoint started
11:20:01 Checkpoint finished, new log is 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310111504.trx


here is the output of "status()" :

SQL> status();
REPORT
VARCHAR
___

OpenLink Virtuoso  Server
Version 07.20.3217-pthreads for Linux as of Feb 10 2017
Started on: 2017-03-09 12:33 GMT+1

Database Status:
  File size 0, 1000960 pages, 247031 free.
  272 buffers, 447219 used, 112398 dirty 4 wired down, repl age 
13435443 0 w. io 3 w/crsr.
  Disk Usage: 2212080 reads avg 0 msec, 0% r 0% w last  176 s, 12791013 
writes flush   8.82 MB,
1221 read ahead, batch = 156.  Autocompact 722034 in 631152 out, 
12% saved col ac: 7230338 in 3% saved.

Gate:  5993 2nd in reads, 0 gate write waits, 0 in while read 0 busy scrap.
Log = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170310105036.trx, 
90073727 bytes

558107 pages have been changed since last backup (in checkpoint state)
Current backup timestamp: 0x-0x00-0x00
Last backup date: unknown
Clients: 4177045 connects, max 3 

Re: [Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-09 Thread Hugh Williams
Hi Thomas,

What is this JDBC Connector from Oracle that is being used for the inserts in 
RDF/XML form ?

What is the ORCID dataset being used, as the only one I see is in N-Triple 
format from 2014 at:

https://datahub.io/dataset/orcid_2014_dataset 


Performing inserts with transactions would consume more memory maintaining the 
transaction than with log_enable(2), which auto-commits without keeping a 
transaction log in memory.

The O_DIRECT param set in your INI file is an old param for which no real 
benefit has been seen on current OSes; on a Linux system, setting swappiness 
as detailed at:

https://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtRDFPerformanceTuning#Linux-only%20--%20”swappiness;

would give better results.
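For example (a sketch; requires root, and the persistent setting goes in 
/etc/sysctl.conf):

# check the current value
sysctl vm.swappiness
# apply it to the running system
sysctl -w vm.swappiness=10
# make it persistent across reboots by adding this line to /etc/sysctl.conf
vm.swappiness = 10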

There is also no real need to set ColumnStore = 1, as the RDF_QUAD table is 
column store by default in Virtuoso 7, so that setting would only have an 
effect on default SQL table creation.

If you still have problems, can you provide a copy of your virtuoso.log file 
and the output of the “status();” command for review ...
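Capturing the status output from the command line could look something like 
this (a sketch; 1111 and dba/dba are the defaults, so substitute your own 
port and credentials):

isql 1111 dba dba exec="status();" > status-output.txt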

Best Regards
Hugh Williams
Professional Services
OpenLink Software, Inc.  //  http://www.openlinksw.com/
Weblog   -- http://www.openlinksw.com/blogs/
LinkedIn -- http://www.linkedin.com/company/openlink-software/
Twitter  -- http://twitter.com/OpenLink
Google+  -- http://plus.google.com/100570109519069333827/
Facebook -- http://www.facebook.com/OpenLinkSoftware
Universal Data Access, Integration, and Management Technology Providers



> On 9 Mar 2017, at 17:28, Thomas Michaux  wrote:
> 
> Hello,
> 
> We are loading ORCID 2016 into a V7 instance (Version 07.20.3217-pthreads for 
> Linux as of Feb 10 2017). We DO NOT want to use the bulk loader; instead we 
> are issuing SPARQL inserts of RDF/XML files via a JDBC connector from Oracle.
> 
> Virtuoso is hosted on an 8-core, 32 GB platform.
> 
> We successfully inserted 75 633 079 triples until virtuoso.log signalled 
> performance problems on "disk write throughput"; is there something else to 
> optimize in the virtuoso.ini while we are in this "loading" phase (no SPARQL 
> "read" queries from clients at the moment)?
> 
> We've already done :
> 
> - full text indexation has been delayed ( DB.DBA.VT_BATCH_UPDATE ( 
> 'DB.DBA.RDF_OBJ', 'ON', 8640 ); ) 
> - MaxCheckpointRemap = 505856 ( it's larger than 25% of total pages)
> - UnremapQuota   = 0
> - DefaultIsolation   = 2
> - O_DIRECT = 1 (we are on XFS filesystem)
> - ColumnStore  = 1 (we started from a new, fresh .db, deleted all 
> previous existing .db, .trx)
> 
> Can we do something at the transaction level? We commit each JDBC insert as 
> quickly as possible (1 insert -> 1 commit); the query is:
> 
> "'sparql DEFINE sql:log-enable 2 INSERT DATA INTO GRAPH '||graphe ||' { '|| 
> var_clob_line|| ' }'"
> 
> I can see that free memory slowly decreases, and finally the server hangs.
> 
> Thanks for your help ! (Attached is virtuoso.ini)
> 
> Thomas





[Virtuoso-users] Write load high relative to disk write throughput / intensive JDBC sparql INSERT DATA INTO GRAPH

2017-03-09 Thread Thomas Michaux

Hello,

We are loading ORCID 2016 into a V7 instance (Version 07.20.3217-pthreads 
for Linux as of Feb 10 2017). We DO NOT want to use the bulk loader; 
instead we are issuing SPARQL inserts of RDF/XML files via a JDBC 
connector from Oracle.


Virtuoso is hosted on an 8-core, 32 GB platform.

We successfully inserted 75 633 079 triples until virtuoso.log signalled 
performance problems on "disk write throughput"; is there something 
else to optimize in the virtuoso.ini while we are in this "loading" 
phase (no SPARQL "read" queries from clients at the moment)?


We've already done :

- full text indexation has been delayed ( DB.DBA.VT_BATCH_UPDATE ( 
'DB.DBA.RDF_OBJ', 'ON', 8640 ); )

- MaxCheckpointRemap = 505856 ( it's larger than 25% of total pages)
- UnremapQuota   = 0
- DefaultIsolation   = 2
- O_DIRECT = 1 (we are on XFS filesystem)
- ColumnStore  = 1 (we started from a new, fresh .db, 
deleted all previous existing .db, .trx)


Can we do something at the transaction level? We commit each JDBC insert as 
quickly as possible (1 insert -> 1 commit); the query is:


"'sparql *DEFINE sql:log-enable 2* INSERT DATA INTO GRAPH '||graphe ||' 
{ '|| var_clob_line|| ' }'"


I can see that free memory slowly decreases, and finally the server hangs.

Thanks for your help ! (Attached is virtuoso.ini)

Thomas
;
;  virtuoso.ini
;
;  Configuration file for the OpenLink Virtuoso VDBMS Server
;
;  To learn more about this product, or any other product in our
;  portfolio, please check out our web site at:
;
;  http://virtuoso.openlinksw.com/
;
;  or contact us at:
;
;  general.informat...@openlinksw.com
;
;  If you have any technical questions, please contact our support
;  staff at:
;
;  technical.supp...@openlinksw.com
;
;
;  Database setup
;
[Database]
DatabaseFile   = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.db
ErrorLogFile   = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.log
LockFile   = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.lck
TransactionFile= 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso20170309162914.trx
;TransactionFile= /LN_Hupe/virtuoso20151207171442.trx
xa_persistent_file = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.pxa
ErrorLogLevel  = 7
FileExtend = 200
MaxCheckpointRemap = 505856
UnremapQuota   = 0
DefaultIsolation   = 2
Striping   = 0
TempStorage= TempDatabase

[TempDatabase]
DatabaseFile   = 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso-temp.db
TransactionFile= 
/usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso-temp.trx
MaxCheckpointRemap = 2000
Striping   = 0

;
;  Server parameters
;
[Parameters]
ServerPort   = 
LiteMode = 0
DisableUnixSocket= 1
DisableTcpSocket = 0
;SSLServerPort  = 2111
;SSLCertificate = cert.pem
;SSLPrivateKey  = pk.pem
;X509ClientVerify   = 0
;X509ClientVerifyDepth  = 0
;X509ClientVerifyCAFile = ca.pem
MaxClientConnections = 10
CheckpointInterval   = 20
O_DIRECT = 1
CaseMode = 2
MaxStaticCursorRows  = 5000
CheckpointAuditTrail = 1
AllowOSCalls = 0
SchedulerInterval= 10
;DirsAllowed  = ., 
/usr/local/virtuoso-opensource/share/virtuoso/vad, /home/devel, /LN_Hupe, 
/LN_Hupe/dumpviaf
;production
DirsAllowed  = ., 
/usr/local/virtuoso-opensource/share/virtuoso/vad, /home/devel/logs
ThreadCleanupInterval= 1
ThreadThreshold  = 10
ResourcesCleanupInterval = 1
FreeTextBatchSize= 10
SingleCPU= 0
VADInstallDir= /usr/local/virtuoso-opensource/share/virtuoso/vad/
PrefixResultNames= 0
RdfFreeTextRulesSize = 100
IndexTreeMaps= 256
MaxMemPoolSize   = 2
PrefixResultNames= 0
MacSpotlight = 0
IndexTreeMaps= 64
MaxQueryMem  = 3G   ; memory allocated to query processor
VectorSize   = 1000 ; initial parallel query vector (array of query 
operations) size
MaxVectorSize= 100  ; query vector size threshold.
AdjustVectorSize = 0
ThreadsPerQuery  = 8
AsyncQueueMaxThreads = 10
ColumnStore  = 1
;server side query logging
;At run time, this may be enabled or disabled with prof_enable (), overriding 
the specification of the ini file
;QueryLog = virtuoso.qrl
;;
;; When running with large data sets, one should configure the Virtuoso
;; process to use between 2/3 to 3/5 of free system memory and to stripe
;; storage on all available disks.
;;
;; Uncomment next two lines if there is 2 GB system memory free
;NumberOfBuffers  = 17
;MaxDirtyBuffers  = 13
;; Uncomment next two lines if there is 4 GB system memory free
;NumberOfBuffers  = 34