://bmcr.brynmawr.edu/2014/2014-02-18.html | grep schema
I tried a few variations, such as removing the .html from the end of the
URL etc. Nada.
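A grep like the one above can also be done in a few lines of Python. This is only a sketch: the HTML below is a stand-in for a fetched page (e.g. via urllib.request), not the actual BMCR page, and it checks for schema.org Microdata markup specifically.

```python
import re

# Stand-in for HTML fetched with e.g. urllib.request.urlopen(url).read()
html = """
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="name">Some review</span>
</div>
"""

# Look for schema.org itemtype declarations (Microdata) anywhere in the page
types = re.findall(r'itemtype="(http://schema\.org/[^"]+)"', html)
print(types)  # ['http://schema.org/Review'] for the sample above
```

An empty list would mean no Microdata itemtype markup, though schema.org can also be embedded as RDFa or JSON-LD, which this pattern would miss.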
On 03/31/2016 08:39 AM, Brian Kennison wrote:
On Mar 29, 2016, at 12:46 PM, Kevin Ford
<k...@3windmills.com>
versity Press, 2013. Pp. ix, 278. ISBN
9780199657865. $35.00.
This is indeed why I wanted a "before and after" test - to see if schema
did add SEO. Now we don't know.
kc
On 3/29/16 7:48 AM, Kevin Ford wrote:
Hi Karen,
I took a look at those bryn mawr hits and I don't see the schema.org
used in the page. Am I missing it? Perhaps I found the wrong thing.
If indeed it's not there, it just goes to show how using schema is not a
panacea. Loads of factors go into search ranking, relevancy, and
Hi Cindy,
Deduping can happen in any number of ways, but making use of shared
identifiers is the preferred way to address this issue. You could adopt
a shared identifier or you can indicate that your Thing is the same
as this other Thing. In schema.org's vocabulary, you'd use
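The truncated sentence above most likely refers to a same-as style property in schema.org's vocabulary. As a hedged sketch, a JSON-LD description asserting identity with another party's Thing might look like this (all URIs and the title are invented for illustration):

```python
import json

# Illustrative JSON-LD: declare that our local Thing is the same as a Thing
# identified elsewhere. The @id and sameAs URIs are made up for the example.
doc = {
    "@context": "http://schema.org",
    "@type": "Book",
    "@id": "http://example.org/our-catalog/record/123",
    "name": "An Example Title",
    # Shared-identifier route: point at the other party's URI directly
    "sameAs": "http://example.com/their-catalog/item/456",
}
print(json.dumps(doc, indent=2))
```

A consumer that trusts the assertion can then merge the two descriptions, which is the deduping payoff of shared identifiers.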
It's probably not safe to say that "all search is local" but there is
most certainly a strong local component considered for every search.
For me, every hit on the first page of Google's results for a search for
"ice cream parlor" is related to Chicago, which is where I executed the
search. A
I think it is technically permissible, but unwise for a host of reasons,
a number of which have been noted in this thread.
It boils down to this: at the end of the day - and putting aside the
whole SSL/non-SSL tangent - it is a relative reference according to
the RFC and that begs the
Hi Cindy,
This doesn't quite address your issue, but, unless you've hit the 2 GB
Access size limit [1], Access can handle a good deal more than the 250,000
item records (rows, yes?) you cited.
What makes you think you've hit the limit? Slowness, something else?
All the best,
Kevin
[1]
it over the 2GB mark. I've tried extracting to a csv, and that didn't work. Maybe I'll
try a Make table to a separate db.
Or the OpenRefine suggestion sounds good too.
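One way around a balky single-shot export is to stream the table out in batches. The sketch below uses sqlite3 as a stand-in so it is self-contained; against Access you would instead connect over ODBC (e.g. with pyodbc and the Access driver) and keep the same fetchmany loop.

```python
import csv
import sqlite3

# Chunked table-to-CSV export. sqlite3 stands in for an Access/ODBC
# connection so the example runs anywhere; the batching pattern is the point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, title TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"title {i}") for i in range(1000)])

with open("items.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["id", "title"])
    cur = conn.execute("SELECT id, title FROM items")
    while True:
        rows = cur.fetchmany(250)   # stream in modest batches
        if not rows:
            break
        writer.writerows(rows)
```

Because rows are fetched and written in batches, memory use stays flat no matter how large the table is, which matters when the database file is already near the 2 GB ceiling.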
Cindy Harper
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kevin
Ford
Sent
Hi Rodney,
By whom, or how, is the scheduling system being replaced? (Assuming it is
changing.)
Do *you* need to replace the scheduling system (and that's what you
would potentially have to write from scratch)?
OR
Is a scheduling system being procured that will obsolete the current
system
I think this just goes to show, with the advent of the
Internet, centralized authorities are not as necessary/useful
as they once were. —ELM
-- Maybe. I think it is recession-related. The high water mark for
nearly all of the groups on that list is 2007 (2006 for one or two).
The
There is also this:
http://www.loc.gov/z3950/
Yours,
Kevin
On 08/28/2014 06:40 PM, Habing, Thomas Gerald wrote:
Index Data maintains a searchable list: http://irspy.indexdata.com/
Tom
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jay
them, and then the information
is posted publicly so that everyone interested in the opportunity has access to
the same information.
Yours,
Kevin
--
Kevin Ford
Network Development and MARC Standards Office
Library of Congress
Washington, DC
Dear All,
This position - though hard to tell from the below - is chiefly for a
developer position in the Library of Congress's Network Development and
MARC Standards Office, also known as NetDev for short. Our office, as
its name suggests, manages the MARC Format standards, but we also
I fully second Josh's comments. A nice job and a big thanks!
--Kevin
On 08/13/2014 12:59 PM, Joshua Westgard wrote:
A big, public thank you is in order to Laura Wrubel, Dan Chudnov, and their
whole team for organizing and running the C4L regional meeting in DC over the
past two days, to GWU
* BIBFRAME Tools [6] - sports nice ontologies, but
the online tools won’t scale for large operations
-- The code running the transformation at [6] is available here:
https://github.com/lcnetdev/marc2bibframe
We've run several million records through it at one time. As with
Anything that will remodel MARC to (decent) RDF is going to be:
- Non-trivial to install
- Non-trivial to use
- Slow
- Require massive amounts of memory/disk space
Choose any two.
-- I'll second this.
Frankly, I don't see how you can generate RDF that anybody would want to
Though I have some quibbles with Seth's post, I think it's worth
drawing attention to his repeatedly calling out API keys as a very
significant barrier to use, or at least entry. Most of the posts here
have given little attention to the issues API keys present. I can say
that I have quite
not going to defend API keys, but not all APIs are open or free. You
need to have *some* way to track usage.
There may be alternative ways to implement that, but you can't just hand
wave away the rather large use case for API keys.
-Ross.
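The usage-tracking case Ross raises can be sketched in a few lines: each request carries a key, the service validates it and counts calls per key. The keys, status codes, and quota below are invented for illustration, not any particular API's policy.

```python
from collections import Counter

# Minimal sketch of API-key usage tracking: validate the key, count the
# call, throttle past a quota. Keys and the limit are made up.
VALID_KEYS = {"abc123", "def456"}
usage = Counter()
DAILY_LIMIT = 1000

def handle_request(api_key: str) -> int:
    if api_key not in VALID_KEYS:
        return 401          # unknown key: reject
    if usage[api_key] >= DAILY_LIMIT:
        return 429          # over quota: throttle
    usage[api_key] += 1     # this is the "tracking" part
    return 200

print(handle_request("abc123"))  # 200
print(handle_request("nope"))    # 401
```

The per-key counter is also what enables the "conversation with the user" point: you know who is calling and how often, which anonymous access cannot give you.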
On Mon, Dec 2, 2013 at 12:15 PM, Kevin Ford k
A key (haha) thing that keys also provide is an opportunity
to have a conversation with the user of your api: who are they,
how could you get in touch with them, what are they doing with
the API, what would they like to do with the API, what doesn’t
work? These questions are difficult to ask
I'll second Richard on this. 4store is fairly quick to set up and get
going. It comes with command-line tools and an HTTP option.
FWIW, ID.LOC.GOV uses 4store in its stack.
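As a sketch of the HTTP option: 4store's HTTP server conventionally exposes a SPARQL endpoint (the port and /sparql/ path below are assumptions; check your install). The request is only constructed here, not actually sent, so the example stands alone.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build (but do not send) a SPARQL query against an assumed local 4store
# HTTP endpoint. Adjust ENDPOINT to match your 4s-httpd configuration.
ENDPOINT = "http://localhost:8000/sparql/"
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

params = urlencode({"query": query, "output": "json"})
req = Request(ENDPOINT + "?" + params,
              headers={"Accept": "application/sparql-results+json"})
print(req.full_url.split("?")[0])
# Sending would be: urllib.request.urlopen(req).read()
```

The same request shape works against any SPARQL-over-HTTP endpoint, which is part of why triplestores with an HTTP front end are quick to get going.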
Yours,
Kevin
On 11/11/2013 01:17 AM, Richard Wallis wrote:
I've had some success with 4Store: http://4store.org
Dear Karen,
I think that how extensible RDF is would be a very good topic. I'm
not talking about the theoretical extensibility of RDF, but how to do it
in a practical manner. That is, if you have a role, or some other
relationship, for example, and you want to use it. Linked Data provides
as publishing
an extension) is part of that. I could see this extending to best
practices for naming (e.g. URI/IRIs), and perhaps even a bit about
documenting.
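One common practical pattern for the extensibility being discussed: mint a property URI in your own namespace and tie it to an existing vocabulary with rdfs:subPropertyOf. The namespaces and the "illustrator" property below are invented for the example, and the N-Triples serialization is hand-rolled to keep it dependency-free (real work would use an RDF library).

```python
# Practical RDF extension sketch: declare your own property and link it to
# an established one so generic consumers can still make sense of it.
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
DC = "http://purl.org/dc/terms/"
EX = "http://example.org/vocab/"   # your extension namespace (invented)

triples = [
    (EX + "illustrator", RDFS + "subPropertyOf", DC + "contributor"),
    (EX + "illustrator", RDFS + "label", '"illustrator"'),
]

def nt(s, p, o):
    # Serialize one triple as N-Triples; bare-quoted objects are literals.
    obj = o if o.startswith('"') else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

doc = "\n".join(nt(*t) for t in triples)
print(doc)
```

A consumer that knows nothing about ex:illustrator can still fold those statements into dcterms:contributor, which is the payoff of anchoring extensions to existing terms.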
Great topic!
kc
On 9/2/13 1:25 AM, Kevin Ford wrote:
Dear Karen,
I think that how extensible RDF is would be a very good topic. I'm
not talking
My (erroneous) assumption was that if a record did not have a broader
term (i.e. a 550 $wg value) then it would sit at the top of the subject
tree, and that they would be the very general subject headings. As I
found, this is obviously not the case.
-- You're correct - LCSH doesn't work like
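The "no broader term means top of the tree" check itself is easy to express. The sketch below uses toy dicts in place of authority records; real MARC authority parsing (550 with $w = g) would use something like pymarc, and as noted above, LCSH's actual structure won't give you only very general headings this way.

```python
# Sketch: a record with no broader-term reference (no 550 $w=g) has no
# parent in the hierarchy. Headings here are toy data, not real LCSH.
records = {
    "Science": {"broader": []},
    "Physics": {"broader": ["Science"]},
    "Quantum mechanics": {"broader": ["Physics"]},
}

tops = [name for name, rec in records.items() if not rec["broader"]]
print(tops)  # headings with no broader term
```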
to support the
need? And have a place to post various solutions, even ones that are not
OCLC-specific? (Because I am hoping that the use of microformats will
increase in general.)
kc
On 7/10/12 12:12 PM, Kevin Ford wrote:
is there an open search to get one to the desired records in the first
place
enough that one wouldn't want to look up all
of the records by hand.
kc
On 7/10/12 1:43 PM, Kevin Ford wrote:
As for someone who might want to do this programmatically, he/she
should take a look at the Programming languages section of the
second link I sent along:
http://schema.rdfs.org
available for years.
Roy
On Tue, Jul 10, 2012 at 2:08 PM, Kevin Ford k...@3windmills.com wrote:
The use case clarifies perfectly.
Totally feasible. Well, I should say totally feasible with the caveat
that I've never used the Worldcat Search API. Not letting that stop me, so
long as it is what I
,
Kevin
[1] https://listserv.nd.edu/cgi-bin/wa?A2=ind1103&L=CODE4LIB&T=0&F=&S=&P=112728
--
Kevin Ford
Network Development and MARC Standards Office
Library of Congress
Washington, DC
that started in March
2011 [1] (it ends in April if you want to go crawling for the entire
thread).
Rgds,
Kevin
[1]
https://listserv.nd.edu/cgi-bin/wa?A2=ind1103&L=CODE4LIB&T=0&F=&S=&P=112728
--
Kevin Ford
Network Development and MARC Standards Office Library of Congress
Washington, DC
I was
told by the project manager that Apache, Java, and Tomcat were showing
signs of age.
-- Taking this statement at face value, and taking it to its logical end
(that you'll have to migrate your application), I'm extremely doubtful
that Apache, Java, and Tomcat are so near their ends of
(and am
looking into a java triplestore to run in Tomcat)
-- I don't know if the parenthetical was simply a statement or a
solicitation - apologies if it was the former.
Take a look at Mulgara. Drops right into Tomcat.
http://mulgara.org/
--Kevin
On 05/08/2012 02:01 PM, Ethan Gruber
, and its ecosystem of tools and services.
//Ed
[1] http://www.openarchives.org/ore/1.0/atom
On Thu, May 13, 2010 at 4:53 PM, Kevin Ford k...@loc.gov wrote:
The short answer to your question is no, there's no way to query terms based
on last modification date. However (and this feature should be better
publicized on the website), there is an Atom feed that exposes the change
activities for the subject headings:
http://id.loc.gov/authorities/feed/
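Consuming a feed like that takes only the standard library. The canned document below stands in for the live HTTP fetch so the sketch is self-contained; the entry contents are invented, though the element names are standard Atom.

```python
import xml.etree.ElementTree as ET

# Sketch of reading change activity from an Atom feed. A canned feed
# document replaces the live fetch of http://id.loc.gov/authorities/feed/ .
ATOM = "{http://www.w3.org/2005/Atom}"
sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Example Heading</title>
    <updated>2010-05-13T00:00:00Z</updated>
  </entry>
</feed>"""

root = ET.fromstring(sample)
for entry in root.findall(ATOM + "entry"):
    title = entry.findtext(ATOM + "title")
    updated = entry.findtext(ATOM + "updated")
    print(title, updated)
```

Polling the feed and comparing each entry's updated timestamp against your last run is one way to approximate the "query by modification date" feature the question asked for.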
You can