Hi Jurgen,
> Hi there, we've implemented the whole SPARQL endpoint anew.
Thank you for the work!
But unfortunately, I wasn't able to use the code you published, because
it's based on libraries which aren't used in Rya's master branch, such
as OpenRDF Sesame, which was replaced by RDF4J. Is it
> urlConnection.setDoOutput(true);
>
> final OutputStream os = urlConnection.getOutputStream();
>
> int read;
> while ((read = resourceAsStream.read()) >= 0) {
>     os.write(read);
> }
>
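As an aside, the quoted loop is easy to get wrong: every extra `read()` call silently discards a byte, so the stream must be read exactly once per iteration and the byte written back out. A minimal self-contained sketch of the corrected copy, with a `ByteArrayInputStream` standing in for `resourceAsStream` and a `ByteArrayOutputStream` standing in for the URL connection's output stream (names chosen for illustration only):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Corrected shape of the loop quoted above: read once per iteration,
    // and write exactly the byte that was read.
    static void copy(InputStream in, OutputStream out) throws IOException {
        int read;
        while ((read = in.read()) >= 0) {
            out.write(read);
        }
    }

    public static void main(String[] args) throws IOException {
        // A sample SPARQL query body stands in for the real resource stream.
        InputStream in = new ByteArrayInputStream(
                "SELECT * WHERE { ?s ?p ?o }".getBytes("UTF-8"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(in, out);
        System.out.println(out.toString("UTF-8"));
    }
}
```

In real code, `InputStream.transferTo(OutputStream)` (Java 9+) or a buffered `read(byte[])` loop would be preferable to byte-at-a-time copying.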
Hi,
I guess you've already studied the Bulk Loading data section in the wiki [0],
but let's go through it one more time anyway.
1. Did you load the data into your HDFS instance? I see hdfs://$RDF_DATA in
your command, but do you actually set this environment variable?
2. You need to load the rya.mapreduce jar to HADOOP_HOME.
will allow us later to compute the prospects on
INSERT/DELETE queries, so it'll only need to write a mutation with a value
of 1 or -1 for INSERT and DELETE respectively.
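To make the idea concrete, here is a toy sketch of that counting scheme: each INSERT records a +1 delta and each DELETE a -1 delta for the affected key, and summing the deltas yields the current count. A `HashMap` stands in for the Accumulo table, and the key format is hypothetical, not Rya's actual prospects schema:

```java
import java.util.HashMap;
import java.util.Map;

public class ProspectsSketch {
    // In-memory stand-in for the prospects table: key -> summed deltas.
    static final Map<String, Integer> deltas = new HashMap<>();

    // Record a +1 mutation for an INSERT or a -1 mutation for a DELETE.
    static void record(String key, boolean insert) {
        deltas.merge(key, insert ? 1 : -1, Integer::sum);
    }

    public static void main(String[] args) {
        String key = "predicate:<http://xmlns.com/foaf/0.1/knows>";
        record(key, true);   // INSERT: +1
        record(key, true);   // INSERT: +1
        record(key, false);  // DELETE: -1
        System.out.println(deltas.get(key));  // 1
    }
}
```

In Accumulo itself, this aggregation would typically be done server-side with a summing combiner iterator rather than client-side, so writers never need to read the current count first.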
Maxim Kolchin
E-mail: kolchin...@gmail.com
Tel.: +7 (911) 199-55-73
Homepage: http://kolchinmax.ru
to the locally installed hadoop runtime files.
> > Details can be found in the Accumulo manual where it describes running
> > client code:
> >
> > https://accumulo.apache.org/1.7/accumulo_user_manual.html#_writing_accumulo_clients
> >
> > david.
[1]: https://gist.github.com/KMax/687293ce666754ce8eed11c369a0db05
Thank you in advance!