Regis,

When we encountered this here, we set -XX:MaxDirectMemorySize to 1G (the default is 64K or something similarly small) and had no problems. (we = Epimorphics, here = a production server for a customer).
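For reference, a minimal sketch of raising that limit when launching the server; the JVM_ARGS variable name and paths are assumptions, so check how your fuseki-server script passes JVM options:

```shell
# Raise the direct-memory ceiling before starting Fuseki.
# JVM_ARGS is an assumed hook; your fuseki-server script may use a
# different variable or take the options another way.
export JVM_ARGS="-Xmx2G -XX:MaxDirectMemorySize=1G"
./fuseki-server --loc=/path/to/tdb /ds
```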

There is also "plan B", which is to not use that bit of Jetty at all.

The fuseki server command takes an additional argument of

  --jetty-config=jetty-config.xml

which supplies a different Jetty configuration; in particular, it lets you either switch off the use of direct memory or switch to a different connector implementation altogether.
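Putting that together, a hypothetical invocation (the dataset location and service name are illustrative placeholders, not from this thread):

```shell
# Start Fuseki with a custom Jetty configuration file.
# --loc=/path/to/tdb and /ds are placeholder values for this sketch.
./fuseki-server --jetty-config=jetty-config.xml --loc=/path/to/tdb /ds
```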

"SelectChannelConnector" does not use direct memory at all.

"BlockingChannelConnector" does by default and is used by Fuseki by default. <Set name="useDirectBuffer">false</Set> turns use of direct memory off.

Any differences in performance are not measurable except under high load.

Extract:

  <Arg>
     <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <Set name="port">3030</Set>
        <Set name="maxIdleTime">0</Set>
        <!-- All connectors -->
        <Set name="requestHeaderSize">65536</Set>
        <Set name="requestBufferSize">5242880</Set>
        <Set name="responseBufferSize">5242880</Set>
      </New>
...


Full example:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">

<!--
  Reference: http://wiki.eclipse.org/Jetty/Reference/jetty.xml_syntax
  http://wiki.eclipse.org/Jetty/Reference/jetty.xml
-->

<Configure id="Fuseki" class="org.eclipse.jetty.server.Server">
  <Call name="addConnector">
    <Arg>
      <!-- org.eclipse.jetty.server.nio.BlockingChannelConnector -->
      <!-- org.eclipse.jetty.server.nio.SelectChannelConnector -->
      <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
        <!-- BlockingChannelConnector specific:
             <Set name="useDirectBuffer">false</Set>
        -->
        <Set name="port">3535</Set>
        <Set name="maxIdleTime">0</Set>
        <!-- All connectors -->
        <Set name="requestHeaderSize">65536</Set>       <!-- 64*1024 -->
        <Set name="requestBufferSize">5242880</Set>     <!-- 5*1024*1024 -->
        <Set name="responseBufferSize">5242880</Set>    <!-- 5*1024*1024 -->
      </New>
    </Arg>
  </Call>
</Configure>


        Andy



On 12/04/12 22:00, Regis Pires Magalhães wrote:
Andy and Rob,
Thank you for your help.
I tried setting -XX:MaxDirectMemorySize to a higher value, but I have not been
successful.

Afterwards I ran the same query many times using Virtuoso as the SPARQL endpoint
with the same data, and there was no error. Virtuoso and Fuseki are
installed on the same host (Intel Core i7, 16 GB RAM).
So I think the problem is on the server side and not in the client app that
uses Jena ARQ.

Is there anything else that can be configured in Fuseki in order to get all
the results?
I would really like to use Fuseki.

Regards,
Regis.


On Mon, Apr 9, 2012 at 4:07 PM, Andy Seaborne<a...@apache.org>  wrote:

On 09/04/12 19:47, Rob Vesse wrote:

Hi Regis

Please see this thread where I experienced a similar issue:

https://issues.apache.org/jira/browse/JENA-181

A couple of possible solutions: either insert delays between
queries (not possible in your scenario) or set
-XX:MaxDirectMemorySize to a higher value than the default.

Andy - is it possible that the SERVICE code is not calling close() on
the query iterators in a timely fashion (i.e. keeping too many
connections open, thus exhausting the direct memory buffer)? Re-reading
that thread, that turned out to be the main culprit in my case.


It's possible but not in the simplest way because QueryIterService calls
QueryIter.materialize so it should be reading the results when the results
are first received.  Looking at the code, exhausting the iterator should
call the close operation.  I tried breakpointing a single SERVICE call and
it did hit the close breakpoint.

But in the server, direct memory does not always seem to be returned, or
not returned fast enough.  We have had to set -XX:MaxDirectMemorySize
(plan B: use a separate Jetty config and pick a different connector
implementation).

Having two SERVICE calls means the second is called for each match from
the first.

An experiment is to put a LIMIT on the first call and see what numbers
matter.
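That experiment can be run from the command line using the standard SPARQL protocol over HTTP; the endpoint URL, dataset name, and query text below are illustrative assumptions, not from this thread:

```shell
# Send the outer query with an explicit LIMIT and vary the value to see
# where the direct-memory errors start.
# http://localhost:3030/ds/query and the query are placeholders.
curl -G 'http://localhost:3030/ds/query' \
     --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'
```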

        Andy


