Since you mention the requirement to publish and maintain it on the web,
another (NoSQL) option for your data storage would be a SPARQL graph store
(such as Apache Fuseki). Loading the data would involve transforming each
citation into an RDF graph and storing it as a named graph with an HTTP
PUT,
the rest of the script.
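That loading step could be sketched as follows, assuming a local Fuseki dataset at a hypothetical URL. The script builds a SPARQL 1.1 Graph Store Protocol PUT for one citation's named graph; for illustration it constructs the request without sending it:

```python
# Sketch of loading one citation as a named graph via the SPARQL 1.1
# Graph Store Protocol. The Fuseki dataset URL, graph URI, and Turtle
# payload are hypothetical placeholders.
import urllib.parse
import urllib.request

def build_graph_put(dataset_url: str, graph_uri: str, turtle: str) -> urllib.request.Request:
    """Build an HTTP PUT that replaces the named graph with the given Turtle."""
    # The Graph Store Protocol addresses a named graph indirectly, via the
    # ?graph= query parameter on the dataset's /data endpoint.
    url = dataset_url.rstrip("/") + "/data?" + urllib.parse.urlencode({"graph": graph_uri})
    return urllib.request.Request(
        url,
        data=turtle.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "text/turtle"},
    )

turtle = """@prefix dct: <http://purl.org/dc/terms/> .
<http://example.org/citation/1> dct:title "An example citation" .
"""
req = build_graph_put("http://localhost:3030/citations",
                      "http://example.org/citation/1", turtle)
# urllib.request.urlopen(req) would actually send it
```

Sending the request with `urllib.request.urlopen(req)` would replace that one named graph in place, which is what makes the Graph Store Protocol convenient for record-at-a-time maintenance.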
>
> Does anyone know of a good way to accomplish this? I imagine there's some
> incantation that I can perform, but I'm struggling to find it.
>
> Thanks,
> Ken
>
--
Conal Tuohy
http://conaltuohy.com/
@conal_tuohy
+61-466-324297
The fragment identifier component of a URI is defined by the media type
in which the information is represented, in the case of the PDF media type
it is defined here: https://tools.ietf.org/html/rfc3778#section-3
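For example, that RFC defines fragment parameters such as `page` and `nameddest`, so a link like the following (hypothetical URL) would open the document at page 3:

```
https://example.org/report.pdf#page=3
```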
On 19 Aug 2015 07:15, todd.d.robb...@gmail.com wrote:
Kyle,
So it's a question of merging the two feeds in order of pubDate?
This is the kind of thing that Yahoo Pipes was for, before Yahoo abandoned
it.
https://pipes.yahoo.com/
But there are alternatives around; perhaps the list in this article will
provide some options you haven't seen already:
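A minimal sketch of that merging step, using only the Python standard library; the two sample feeds here are invented for illustration:

```python
# Parse two RSS documents and interleave their items newest-first by
# pubDate. The sample feeds are invented placeholders.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merged_items(*rss_documents):
    """Return all <item> elements across the feeds, sorted newest-first by pubDate."""
    items = []
    for doc in rss_documents:
        items.extend(ET.fromstring(doc).iter("item"))
    return sorted(items,
                  key=lambda i: parsedate_to_datetime(i.findtext("pubDate")),
                  reverse=True)

feed_a = """<rss><channel>
  <item><title>A1</title><pubDate>Mon, 17 Aug 2015 09:00:00 +0000</pubDate></item>
</channel></rss>"""
feed_b = """<rss><channel>
  <item><title>B1</title><pubDate>Tue, 18 Aug 2015 09:00:00 +0000</pubDate></item>
</channel></rss>"""

titles = [i.findtext("title") for i in merged_items(feed_a, feed_b)]
```

A real script would also need to re-serialize the merged items into a single RSS channel, but the sorting shown here is the heart of what Pipes did.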
Assuming your library web server has a front-end proxy (I guess this is
pretty common) or at least runs inside Apache httpd or something, then
rather than use the HTML meta tag, it might be easier to set the referrer
policy via the Content-Security-Policy HTTP header field.
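For example, a hypothetical httpd directive (assuming mod_headers is enabled); note that the CSP `referrer` directive was only ever a draft, and the standardized mechanism is now the dedicated Referrer-Policy header field:

```apache
# Sketch only, assuming mod_headers. The CSP "referrer" directive was a
# draft that browsers later dropped in favour of the separate
# Referrer-Policy header.
Header set Content-Security-Policy "referrer no-referrer"
Header set Referrer-Policy "no-referrer"
```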
I looked at VIVO a few years ago and my memory is that they had a few
ingestion tools, including an OAI-PMH harvester that I believe created RDF
naively from the harvested XML, inserted it into the triple store using a
SPARQL query, and then used a SPARQL update query to reformulate the
harvested
Laura, is it an option to migrate the literary content into a TEI form? You
could consolidate the objects that make up a single text into a single
complex object, with embedded metadata (at whatever level you like), and
then wheel in some existing TEI content management / presentation system.
One thing I've been using a triple store for recently is to model a
lexicographic dataset extracted from a bunch of TEI files. The TEI XML
files are transcriptions of lexicons of various Australian aboriginal
languages; tables of English language words, with their equivalents
supplied by native
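As a toy illustration of that modelling step, one might serialize each English-word/equivalent pair as a triple; the URIs, property, and sample words here are invented placeholders, not the vocabulary actually used:

```python
# Turn (english, equivalent) pairs extracted from the TEI tables into
# N-Triples lines. All URIs and the sample data are hypothetical.
def to_ntriples(language, pairs):
    """Yield one N-Triples line per (english, equivalent) pair."""
    for english, equivalent in pairs:
        subject = "<http://example.org/lexicon/%s/%s>" % (language, english)
        yield '%s <http://example.org/vocab/translation> "%s" .' % (subject, equivalent)

triples = list(to_ntriples("examplelang", [("water", "word-for-water")]))
```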
On the subject of 4Store, I set it up once and was impressed with how easy
it was, but I went back to Apache Fuseki, because it supported the SPARQL
1.1 Graph Store Protocol, which in my opinion is crucial for publishing
linked open data, as it provides a gateway between the fine-grained data of
I am really puzzled by the use of these non-standard inflexions as a
means of qualifying an HTTP request. Why not use the HTTP Accept header,
like everyone else?
On 9 December 2014 at 07:59, John A. Kunze j...@ucop.edu wrote:
Any Apache server (not Tomcat) can handle the '?' and '??' cases
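For comparison, ordinary content negotiation is just a header on the request; a sketch with a hypothetical resource URL (the request is built, not sent):

```python
# Ask for a particular representation via the standard Accept header
# rather than a non-standard URL inflection. URL is hypothetical.
import urllib.request

def negotiated_request(url: str, media_type: str) -> urllib.request.Request:
    """Build a GET request asking for the given media type."""
    return urllib.request.Request(url, headers={"Accept": media_type})

req = negotiated_request("http://example.org/resource/1", "text/turtle")
```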
Kia ora Stuart!
You may be interested in a couple of OAI-PMH providers I wrote not that
long ago.
The code is here: https://github.com/Conal-Tuohy/Retailer and there are a
few posts about it on my blog http://conaltuohy.com/
Note that the providers are not available online publicly, but you can
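For anyone unfamiliar with the protocol: an OAI-PMH provider is just an HTTP endpoint driven by a `verb` query parameter, so a client request can be assembled like this (the endpoint URL is hypothetical):

```python
# Build an OAI-PMH request URL: the protocol is plain HTTP GET with a
# "verb" parameter plus verb-specific arguments. Endpoint is hypothetical.
import urllib.parse

def oai_request_url(endpoint, verb, **params):
    """Return the URL for an OAI-PMH request against the given endpoint."""
    query = urllib.parse.urlencode({"verb": verb, **params})
    return endpoint + "?" + query

url = oai_request_url("http://example.org/oai", "ListRecords", metadataPrefix="oai_dc")
```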
I would recommend learning Linux because it is the key platform for open
source software in general, and librarians need to embrace open source in
order to take control of their library systems and deliver to their users
what they actually need, rather than what can be delivered within
I have recently written a small Java web app called Retailer, which is a
platform for server-side XSLT-based web apps. It's a bit experimental and
I'd appreciate comment and especially feedback from people who want to try
it out.
https://github.com/Conal-Tuohy/Retailer
Then to run on top
than (textual)
summaries but it appears to me now that that's more what you are
interested in anyway(?)
We've had a play with some of this (the LDA algorithm) using an
implementation in MATLAB, and we've got plans to use it in a real
project here over the next few months.
Cheers
Con
Eric Hellman wrote:
We need good global metadata catalog/registries. Which of today's
catalog functions will require a local institutional catalog tomorrow?
I think this is an interesting question.
My opinion is that the libraries of tomorrow will have a distributed catalogue:
some of it