Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Matt Sherman
Well, we've got one volume done, with about 1,250 entries, but there
are 3 other volumes to convert, so at the end of the day probably about
5,000 entries.  The hope is to make it searchable via the web and to let
scholars in the field continue to add to the database once it is online.

On Fri, Apr 15, 2016 at 7:38 PM, Kyle Banerjee 
wrote:

> On Fri, Apr 15, 2016 at 11:53 AM, Roy Tennant 
> wrote:
>
> > In my experience, for a number of use cases, including possibly this one,
> > a database is overkill. Often, flat files in a directory system indexed
> by
> > something like Solr is plenty and you avoid the inevitable headaches of
> > being a database administrator. Backup, for example, is a snap and easily
> > automated.
> >
>
> I'm with Roy -- no need to use a chain saw to cut butter.
>
> Out of curiosity, since the use case is an annotated bibliography, how much
> stuff do you have? If you have only a few thousand entries in delimited
> text, flat files could be easier and more effective than other options.
>
> kyle
>


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Kyle Banerjee
On Fri, Apr 15, 2016 at 11:53 AM, Roy Tennant  wrote:

> In my experience, for a number of use cases, including possibly this one,
> a database is overkill. Often, flat files in a directory system indexed by
> something like Solr is plenty and you avoid the inevitable headaches of
> being a database administrator. Backup, for example, is a snap and easily
> automated.
>

I'm with Roy -- no need to use a chain saw to cut butter.

Out of curiosity, since the use case is an annotated bibliography, how much
stuff do you have? If you have only a few thousand entries in delimited
text, flat files could be easier and more effective than other options.

kyle


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Cary Gordon
We use MySQL and now mostly MariaDB. While I agree that PostgreSQL might be
technically advantageous in some ways, MySQL's ubiquity and the easy
availability of many free and paid support options make it a great choice.

That said, I think you should explore the possibilities offered by Solr or
Elasticsearch. Those would likely be my tools of choice for your scenario.

Of course, I would probably wrap this in Drupal either way ;)

Cary

On Friday, April 15, 2016, Adam Constabaris 
wrote:

> Hi Matt,
>
> It's very hard to provide a responsible recommendation without further
> details, so this is just going to be a quick overview of *relational
> database* options.  It might be that some of the other recommendations fit
> your needs better.  For example, if your users aren't at ease with SQL,
> Solr or ElasticSearch might be better.
>
> Consider SQLite.  It's nearly everywhere (public domain, embedded in tons
> of things).  There's a Firefox extension that will let you work with it
> through the browser if you don't want to do things from the command line.
> SQLite isn't a multi user server, it's more a file format.  The database is
> a single file that you can ship around.  You can build a 'self contained'
> web application on top of it, which can make deployment much easier.
>
> As befits its nature, it's a bit loose with data types (e.g. you can insert
> strings into numeric column types).  But there's a lot to be said in its
> favour.
>
> MySQL (or MariaDB) is a reasonable choice.  It does a lot of things very
> well, it's very easy to get started with, and there's lots of documentation.
> You will need to pay attention if your data is multilingual and/or
> "non-Latin".
>
> I will second the suggestion to look at PostgreSQL: it's almost as widely
> available as MySQL, adheres more closely to the SQL standard than MySQL does
> (e.g. window functions), is fast, and its data storage model makes for some
> nice features (e.g. you can update a table's structure while others are
> querying it, which is great for availability).  It also supports "foreign
> data wrappers", which let you query other data sources from within PostgreSQL
> (https://wiki.postgresql.org/wiki/Foreign_data_wrappers).
>
> It's worth mentioning that recent versions of PostgreSQL have a JSON column
> type (and the most recent versions support functions that let you query
> inside JSON-valued columns).  For some time, it has supported functional
> indexes:
> http://www.postgresql.org/docs/9.1/static/indexes-expressional.html
>
>
> These two features together mean you can index 'into' a JSON-valued column
> to get fast searching over more loosely structured data, so these versions
> of PostgreSQL also give you many of the advantages touted for NoSQL systems
> while still giving you a standardized query language and traditional ACID
> "guarantees."
>
> HTH,
>
> AC
>
>
>
> On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman  >
> wrote:
>
> > Hi all,
> >
> > I am looking to pick the group brain as to what might be the most useful
> > database software for a digital project I am collaborating on.  We are
> > working on converting an annotated bibliography to a searchable database.
> > While I have the data in a few structured formats, we need to figure out
> > now what to actually put it in so that it can be queried.  My default
> line
> > of thinking is to try a MySQL since it is free and used ubiquitously
> > online, but I wanted to see if there were any other database or software
> > systems that we should also consider before investing a lot of time in
> one
> > approach.  Any advice and suggestions would be appreciated.
> >
> > Matt Sherman
> >
>


-- 
Cary Gordon
The Cherry Hill Company
http://chillco.com


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Haitz, Lisa (haitzlm)
What technologies are you comfortable with for CRUD? That may help inform your 
choice of data source. Do you want full text searching? Will this project live 
on a web page when completed?

Also, are there existing database programs available for you to add yours to?

How many records are contained in the data?


Cheers!




On 4/15/16, 2:32 PM, "Code for Libraries on behalf of Matt Sherman" 
 wrote:

>Well, this is a side project with just 2 of us working on it, and I have
>the tech skills so it is more of what I need to learn to make it work.
>
>On Fri, Apr 15, 2016 at 2:22 PM, Ethan Gruber  wrote:
>
>> There are countless ways to approach the problem, but I suggest beginning
>> with tools that are within the area of expertise of your staff. Mapping
>> disparate structured formats into a single Solr instance for fast search
>> and retrieval is one possibility.
>>
>> On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman 
>> wrote:
>>
>> > Hi all,
>> >
>> > I am looking to pick the group brain as to what might be the most useful
>> > database software for a digital project I am collaborating on.  We are
>> > working on converting an annotated bibliography to a searchable database.
>> > While I have the data in a few structured formats, we need to figure out
>> > now what to actually put it in so that it can be queried.  My default
>> line
>> > of thinking is to try a MySQL since it is free and used ubiquitously
>> > online, but I wanted to see if there were any other database or software
>> > systems that we should also consider before investing a lot of time in
>> one
>> > approach.  Any advice and suggestions would be appreciated.
>> >
>> > Matt Sherman
>> >
>>


[CODE4LIB] NYTSL Spring Program: Linked Data Efforts at Cornell, May 4, 2016

2016-04-15 Thread Tam Fultz


NYTSL Spring Program
Linked Data Efforts at Cornell University Library's Technical Services

By Jason Kovari

Wednesday, May 4, 2016
5:00 – 7:45 PM

The New York Public Library - Stephen A. Schwarzman Building
Margaret Liebman Berger Forum, Room 227
476 Fifth Avenue (at 42nd Street), New York, NY 10018

$15 for NYTSL members; $40 for non-members

Register at http://nytsl.org


Since 2014, Cornell University Library (CUL) has been collaborating with
Harvard and Stanford Universities on Linked Data for Libraries (LD4L), an
Andrew W. Mellon Foundation funded initiative to model and produce library
data as linked data. In this talk, Kovari will review the efforts
undertaken in LD4L and discuss on-going work at CUL to further model and
produce library data as linked data, including leading a community effort
for a rare materials ontology extension, testing native RDF original
cataloging and experimenting with cross-collection authority creation and
maintenance using a local triplestore.


About the Presenter

Jason Kovari serves as Head of Metadata Services at Cornell University
Library; in this role, he directs the efforts of the metadata unit and
consults on metadata issues for a wide variety of projects, including those
of discovery, data modeling, preservation and linked data.

-

-- 
Tamara L. Fultz
Associate Museum Librarian
Thomas J. Watson Library
Metropolitan Museum of Art
1000 Fifth Avenue
New York, NY 10028-0198
212-650-2443
tamara.fu...@metmuseum.org


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread William Denton

On 15 April 2016, Roy Tennant wrote:

In my experience, for a number of use cases, including possibly this one, a 
database is overkill. Often, flat files in a directory system indexed by 
something like Solr is plenty and you avoid the inevitable headaches of being 
a database administrator. Backup, for example, is a snap and easily automated.


And if you want to turn the bib into a web site, then it's easy to use a static 
site generator like Jekyll:  put a few lines of metadata at the top of each 
file, set up a template, and bingo, there's your site.
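
A minimal sketch of generating those pages with Python (file names and
front-matter fields here are made up; adjust them to whatever metadata you
actually have):

  # Sketch: write one Jekyll page per bibliography entry from a delimited file.
  import csv
  import os

  os.makedirs("_bibliography", exist_ok=True)
  with open("entries.csv", newline="", encoding="utf-8") as f:
      for i, row in enumerate(csv.DictReader(f), start=1):
          with open("_bibliography/entry-%d.md" % i, "w", encoding="utf-8") as page:
              page.write("---\n")
              page.write('title: "%s"\n' % row["title"])
              page.write('author: "%s"\n' % row["author"])
              page.write("layout: entry\n")
              page.write("---\n\n")
              page.write(row["annotation"] + "\n")

Point the template at the title and author fields and Jekyll does the rest.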


Bill
--
William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/

[CODE4LIB] Job: Cataloging and Metadata Librarian at Bridgewater State University

2016-04-15 Thread jobs
Cataloging and Metadata Librarian
Bridgewater State University
Bridgewater

Bridgewater State University seeks a dynamic, creative, and service-oriented
professional for the position of Cataloging and Metadata Librarian. The
successful candidate will manage the Technical Services unit and provide
access to library resources, in support of the University's missions for
teaching and learning, research and creative endeavors, and student success.
Review of applications will begin on May 15, 2016. Applications will be
accepted until the position is filled, but preference will be given to
applications received by the date review begins.

  
The Cataloging and Metadata Librarian provides leadership for the Technical
Services unit, which includes cataloging, serials processing, catalog
maintenance, and physical processing for all formats and locations.

  
Essential duties include:

• Plan, implement, and evaluate unit operations and workflows.

• Supervise three library assistants (two full-time and one part-time).

• Conduct training, and create and maintain documentation for the unit.

• Provide timely original and complex copy cataloging for library materials in
all formats and languages, including print and electronic monographs and
serials, rare books, media, and digital items.

• Work with an authority vendor to manage authority control.

• Collaborate with various colleagues as needed to improve workflows, identify
projects, and enhance access to print, video, manuscript, and digital
materials.

• Take an active role in Maxwell Library's future move to a new library
management and discovery service.

• Stay current with cataloging and metadata developments and provide expert
advice to the library and campus on providing access to information resources.

• Act as liaison to one or more academic departments.

• Actively engage in professional development activities as required for
promotion and tenure

  
Required qualifications:

• Relevant experience in an academic, large public, or special library, or in
a related field.

• Knowledge of and demonstrated experience in cataloging using national
standards such as RDA, AACR2, LCSH, LCC, and Dewey. Experience cataloging
using RDA is required.

• Demonstrated knowledge of one or more metadata standards and schema such as
Dublin Core, METS, MODS, VRA Core, or others.

• Awareness of current developments and trends in cataloging and metadata, and
the ability to apply such knowledge to local practices.

• Ability to think strategically, set priorities, research tools and best
practices, and manage responsibilities independently.

• Ability to adapt quickly in a dynamic, evolving library and campus
environment.

• Effective communication, interpersonal, and team building skills.

• Supervisory experience.

• Commitment to diversity and social justice.

• Evidence of ability to successfully engage in professional activities and
disciplinary scholarship to satisfy promotion and tenure requirements for the
Massachusetts State College Association (MSCA) collective bargaining
agreement.

  
For a full job description and to apply, visit:
https://jobs.bridgew.edu/postings/1102



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/25264/
To post a new job please visit http://jobs.code4lib.org/


Re: [CODE4LIB] authority work with isni

2016-04-15 Thread Stephen Hearn
A cautionary note.  Linked data works best when the entities identified by
URIs are unambiguous. That's not always the case with VIAF and ISNI.  They
aggregate data from other ID registries algorithmically and with limited
review.  High-performing algorithms still make mistakes, as do the more
manually built descriptions they depend on.

I did a search in VIAF on "morgan, eric" (which retrieves names matching
and associated with "eric morgan") and found matched records ISNI
000116490460 / VIAF 56843669, both of which conflate the physicist
Patricia Lewis, born 1957 per the LCNAF record where she's "Lewis, P. M.",
and lecturer on management Patricia Lewis, born 1963 per the LCNAF record
where she's "Lewis, Patricia, 1963-", and a "Lewis, Patricia, 1954-" who
appears to be a French Canadian author writing in French on emotions from a
self-help perspective, who has been confused with the management lecturer
who also writes on emotions from a management perspective.

I also noted ISNI 000384457106 / VIAF 275988911, both associated with
the management lecturer's LCNAF authority, so she effectively has two ISNI
and VIAF IDs.

And the first title cited in the LCNAF authority for physicist "Lewis, P.
M." is a document on the effects of shift work produced in Washington State
for the US Nuclear Regulatory Commission in 1985. That doesn't match with
the physicist Lewis's biography in Wikipedia.  A bit more poking around in
OCLC suggests this work belongs instead to Paul Michael Lewis, who has
contemporaneous works for the NRC on work schedules in the Pacific
Northwest. He's not properly established at all in LCNAF; nor is the French
Canadian author.

This kind of sifting of data is what takes time in authority work, and
what--when it's done well--makes authority data valuable for the semantic
web. The point of the above is NOT that algorithmic aggregation is a bad
idea--just that it leaves a lot of necessary work still to do. Cases like
the above are too easy to find at present in VIAF and ISNI. ISNI has been
very responsive for me when I've reported problems (and I will work to
resolve the problems noted above), but much remains to be done. Rather than
leaping ahead to more algorithmic matching of VIAF and ISNI IDs to other
identity records, I'd like to see developers work on programs which could
mine ISNI and VIAF to detect discrepancies in the aggregated ID sources for
further review.  That could reduce the proliferation of errors already in
the data records and make all of these data resources ultimately more
valuable for semantic web use.

Stephen





On Fri, Apr 15, 2016 at 10:57 AM, Kyle Banerjee 
wrote:

> On Fri, Apr 15, 2016 at 2:16 AM, Eric Lease Morgan  wrote:
>
> > ...
> > My questions are:
> >
> >   * What remote authority databases are available programmatically? I
> > already know of one from the Library of Congress, VIAF, and probably
> > WorldCat Identities. Does ISNI support some sort of API, and if so, where
> > is some documentation?
> >
>
> Depends on what you have in mind. For databases similar to your example, I
> believe ORCID has an API. GNIS, ULAN, CONA, and TGN might be interesting to
> you, but there are tons more, particularly if you add subject authorities
> (e.g. AAT, MeSH). The Getty stuff is all available as LoD.
>
>   * I believe the Library Of Congress, VIAF, and probably WorldCat
> > Identities all support linked data. Does ISNI, and if so, then how is it
> > implemented and can you point me to documentation?
> >
> >   * When it comes to updating the local (MARC) authority records, how do
> > you suggest the updates happen? More specifically, what types of values
> do
> > you suggest I insert into what specific (MARC) fields/subfields? Some
> > people advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024
> > subfields 2 and a. Inquiring minds would like to know.
> >
>
> Implementation would be specific to your system and those you wish to
> interact with. The MARC record is used to represent/transmit data, but it
> doesn't actually exist in the sense that systems use it internally as is.
>
> Having said that, I think the logical place to put control numbers from
> different schema is in 024 because that field allows you to differentiate
> the source so it doesn't matter if control numbers overlap
>
> kyle
>



-- 
Stephen Hearn, Metadata Strategist
Data Management & Access, University Libraries
University of Minnesota
160 Wilson Library
309 19th Avenue South
Minneapolis, MN 55455
Ph: 612-625-2328
Fx: 612-625-3428
ORCID:  -0002-3590-1242


[CODE4LIB] CfP: Crowdsourcing workshop at DH 2016

2016-04-15 Thread Ben Brumfield
Applications are now open for an expert workshop to be held in Kraków,
Poland, on 12 July 2016, 9:30am - 4:00pm, as part of the Digital Humanities
2016 conference (http://dh2016.adho.org/).

[UPDATE: The Wellcome Library is looking to explore specific questions
around crowdsourcing as part of the DH2016 workshop "*Beyond The Basics:
What Next For Crowdsourcing?*" In order to encourage wide participation in
this event, the Wellcome Library has funds to support travel by scholars
outside of Europe.  Participants applying for funding should note this on
the workshop application form. If you have already applied and would like
funding you can contact Christy Henshaw at c.hens...@wellcome.ac.uk. ]

[UPDATE: Despite the "sold out" message on the DH2016 registration page,
there are currently 15 open positions for the workshop.  We apologize for
the mis-communication.]

We welcome applications from all, but please note that we will aim to balance
expertise, disciplinary backgrounds, experience with different types of
projects, and institutional and project affiliations when finalising our
list of participants. This workshop is not suitable for beginners.
Participants should have some practical knowledge of running crowdsourcing
projects or expertise in human computation, machine learning or other
relevant topics. You can apply to attend at
https://docs.google.com/forms/d/1l05Rba3EqMyy-X4UVmU9z7hQ-jlK2x2kLGvNtJfgtgQ/viewform

Beyond The Basics: What Next For Crowdsourcing?
Crowdsourcing - asking the public to help with inherently rewarding tasks
that contribute to a shared, significant goal or research interest related
to cultural heritage collections or knowledge - is reasonably well
established in the humanities and cultural heritage sector. The success of
projects such as Transcribe Bentham, Old Weather and the Smithsonian
Transcription Center in processing content and engaging participants, and
the subsequent development of crowdsourcing platforms that make launching a
project easier, have increased interest in this area. While emerging best
practices have been documented in a growing body of scholarship, including
a recent report from the Crowd Consortium for Libraries and Archives
symposium, this workshop looks to the next 5 - 10 years of crowdsourcing in
the humanities, the sciences and in cultural heritage. The workshop will
gather international experts and senior project staff to document the
lessons to be learnt from projects to date and to discuss issues we expect
to be important in the future.

Topics for discussion will be grouped by participants in an
unconference-style opening session in which topics are proposed and voted
on by participants. They are likely to include the following:

Public humanities, education and audiences:

   - The use of crowdsourcing in formal education
   - Designing research questions that encourage participation and create
   space for informal education, the social production of knowledge and
   collaborative problem solving
   - The intersection of crowdsourcing, public humanities and engagement
   with cultural heritage and academic goals
   - Resolving tensions between encouraging participants to follow
   opportunities for informal learning and skills development, and focusing
on
   project productivity


Organisational and project management issues:

   - Collaborative partnerships/funding to develop community platform(s)
   based on open source software
   - The state of focused research into interface design, engagement
   methods, and end-user impact studies
   - Design tensions between techniques that can improve the productivity
   of projects (such as handwritten text recognition and algorithmic
   classification) and optimising the user experience
   - Workflows for crowdsourced data and the ingestion of crowdsourced data
   into collections management systems
   - Challenges to institutional and expert authority
   - The compromises, pros and cons involved in specifying and selecting
   crowdsourcing software and platforms
   - The impact of crowdsourcing on organisational structures and resources
   - Inter-institutional cooperation or competition for crowdsourcing
   participants


Future challenges:

   - The integration of machine learning and other computational techniques
   with human computation
   - Lessons to be learnt from the long histories of grassroots and
   community history projects
   - Sharing lessons learnt and planning peer outreach to ensure that
   academics and cultural heritage professionals can benefit from collective
   best practice
   - The ethics of new and emerging forms of crowdsourcing


The timetable will include a brief round of introductions, a shared
agenda-setting exercise, four or so discussion sessions, and a final
session for closing remarks and to agree next steps.

The discussion and emergent guidelines documented during the workshop would
help future projects benefit from the collective experiences of
participants. 

Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Roy Tennant
In my experience, for a number of use cases, including possibly this one, a 
database is overkill. Often, flat files in a directory system indexed by 
something like Solr is plenty and you avoid the inevitable headaches of being a 
database administrator. Backup, for example, is a snap and easily automated.
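For example, a minimal sketch of pushing delimited entries into a local Solr
core with Python (the core name, field names, and update handler path are
assumptions and vary by Solr version and setup):

  import csv, glob
  import requests  # third-party: pip install requests

  SOLR_UPDATE = "http://localhost:8983/solr/bib/update/json/docs?commit=true"

  docs = []
  for path in glob.glob("entries/*.tsv"):           # e.g. one delimited file per volume
      with open(path, newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f, delimiter="\t"):
              docs.append({"id": row["id"],
                           "title": row["title"],
                           "annotation": row["annotation"]})

  resp = requests.post(SOLR_UPDATE, json=docs)      # POST the whole batch as JSON
  resp.raise_for_status()
  print("Indexed %d documents" % len(docs))

The flat files stay the system of record; re-running the script rebuilds the index.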
Roy

> On Apr 15, 2016, at 11:37 AM, Scancella, John  wrote:
> 
> I would definitely pick postgres over mysql. It has all of the same features 
> and more, plus it is easier to use (in my own opinion).
> 
> But before I even pick a database I would consider these:
> What are the speed requirements? 
> How do you plan on doing searching? 
> How much data? 
> Does it need to be redundant? 
> What about clustering? 
> Geographically diverse for faster local retrieval? 
> What languages or other technologies do you plan on interfacing with?
> 
> and then, based on those answers more questions will arise.
> Best of luck!
> 
> From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ben Cail 
> [benjamin_c...@brown.edu]
> Sent: Friday, April 15, 2016 2:23 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Good Database Software for a Digital Project?
> 
> I would suggest looking at postgresql . It
> may not be as widely used as mysql, but it is used a lot, and it's a
> high-quality piece of database software. It's also free.
> 
> -Ben
> 
>> On 04/15/2016 02:18 PM, Matt Sherman wrote:
>> Hi all,
>> 
>> I am looking to pick the group brain as to what might be the most useful
>> database software for a digital project I am collaborating on.  We are
>> working on converting an annotated bibliography to a searchable database.
>> While I have the data in a few structured formats, we need to figure out
>> now what to actually put it in so that it can be queried.  My default line
>> of thinking is to try a MySQL since it is free and used ubiquitously
>> online, but I wanted to see if there were any other database or software
>> systems that we should also consider before investing a lot of time in one
>> approach.  Any advice and suggestions would be appreciated.
>> 
>> Matt Sherman


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Adam Constabaris
Hi Matt,

It's very hard to provide a responsible recommendation without further
details, so this is just going to be a quick overview of *relational
database* options.  It might be that some of the other recommendations fit
your needs better.  For example, if your users aren't at ease with SQL,
Solr or ElasticSearch might be better.

Consider SQLite.  It's nearly everywhere (public domain, embedded in tons
of things).  There's a Firefox extension that will let you work with it
through the browser if you don't want to do things from the command line.
SQLite isn't a multi user server, it's more a file format.  The database is
a single file that you can ship around.  You can build a 'self contained'
web application on top of it, which can make deployment much easier.

As befits its nature, it's a bit loose with data types (e.g. you can insert
strings into numeric column types).  But there's a lot to be said in its
favour.
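
A minimal sketch using Python's built-in sqlite3 module (table and column
names are only illustrative):

  import sqlite3

  con = sqlite3.connect("bibliography.db")   # the whole database is this one file
  con.execute("""CREATE TABLE IF NOT EXISTS entries (
                     id INTEGER PRIMARY KEY,
                     author TEXT,
                     title TEXT,
                     annotation TEXT)""")
  con.execute("INSERT INTO entries (author, title, annotation) VALUES (?, ?, ?)",
              ("Sherman, Matt", "An Example Entry", "A short annotation."))
  con.commit()

  for row in con.execute("SELECT author, title FROM entries WHERE annotation LIKE ?",
                         ("%annotation%",)):
      print(row)
  con.close()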

MySQL (or MariaDB) is a reasonable choice.  It does a lot of things very
well, it's very easy to get started with, and there's lots of documentation.
You will need to pay attention if your data is multilingual and/or
"non-Latin".

I will second the suggestion to look at PostgreSQL: it's almost as widely
available as MySQL, adheres more closely to the SQL standard than MySQL does
(e.g. window functions), is fast, and its data storage model makes for some
nice features (e.g. you can update a table's structure while others are
querying it, which is great for availability).  It also supports "foreign
data wrappers", which let you query other data sources from within PostgreSQL
(https://wiki.postgresql.org/wiki/Foreign_data_wrappers).

It's worth mentioning that recent versions of PostgreSQL have a JSON column
type (and the most recent versions support functions that let you query
inside JSON-valued columns).  For some time, it has supported functional
indexes: http://www.postgresql.org/docs/9.1/static/indexes-expressional.html


These two features together mean you can index 'into' a JSON-valued column
to get fast searching over more loosely structured data, so these versions
of PostgreSQL also give you many of the advantages touted for NoSQL systems
while still giving you a standardized query language and traditional ACID
"guarantees."

HTH,

AC



On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman 
wrote:

> Hi all,
>
> I am looking to pick the group brain as to what might be the most useful
> database software for a digital project I am collaborating on.  We are
> working on converting an annotated bibliography to a searchable database.
> While I have the data in a few structured formats, we need to figure out
> now what to actually put it in so that it can be queried.  My default line
> of thinking is to try a MySQL since it is free and used ubiquitously
> online, but I wanted to see if there were any other database or software
> systems that we should also consider before investing a lot of time in one
> approach.  Any advice and suggestions would be appreciated.
>
> Matt Sherman
>


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Matt Sherman
It is OCRed text that we've forced into a delimited text file format.  So
there are a lot of ways we can spin it; I am just not as familiar with the
storage/query systems we could put it in.
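
For example, here is roughly the kind of thing we can already do with it in
plain Python (a sketch; the delimiter and column names are guesses, and the
real OCR output will need cleanup):

  import csv

  with open("bibliography.tsv", newline="", encoding="utf-8") as f:
      entries = list(csv.DictReader(f, delimiter="\t"))

  hits = [e for e in entries if "printing" in e["annotation"].lower()]
  print("%d of %d entries mention 'printing'" % (len(hits), len(entries)))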

On Fri, Apr 15, 2016 at 2:44 PM, Gregory Murray 
wrote:

> Matt,
>
> If the annotated bibliography is already in XML form, or if the data is
> suited to a hierarchical structure, you may want to consider using a
> native XML database (the most common open-source ones are eXist and BaseX)
> and querying it with XQuery.
>
> Greg
>
>
>
> On 4/15/16, 2:18 PM, "Code for Libraries on behalf of Matt Sherman"
>  wrote:
>
> >Hi all,
> >
> >I am looking to pick the group brain as to what might be the most useful
> >database software for a digital project I am collaborating on.  We are
> >working on converting an annotated bibliography to a searchable database.
> >While I have the data in a few structured formats, we need to figure out
> >now what to actually put it in so that it can be queried.  My default line
> >of thinking is to try a MySQL since it is free and used ubiquitously
> >online, but I wanted to see if there were any other database or software
> >systems that we should also consider before investing a lot of time in one
> >approach.  Any advice and suggestions would be appreciated.
> >
> >Matt Sherman
>


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Gregory Murray
Matt,

If the annotated bibliography is already in XML form, or if the data is
suited to a hierarchical structure, you may want to consider using a
native XML database (the most common open-source ones are eXist and BaseX)
and querying it with XQuery.

Greg



On 4/15/16, 2:18 PM, "Code for Libraries on behalf of Matt Sherman"
 wrote:

>Hi all,
>
>I am looking to pick the group brain as to what might be the most useful
>database software for a digital project I am collaborating on.  We are
>working on converting an annotated bibliography to a searchable database.
>While I have the data in a few structured formats, we need to figure out
>now what to actually put it in so that it can be queried.  My default line
>of thinking is to try a MySQL since it is free and used ubiquitously
>online, but I wanted to see if there were any other database or software
>systems that we should also consider before investing a lot of time in one
>approach.  Any advice and suggestions would be appreciated.
>
>Matt Sherman


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Chris Gray

Have a look at http://grimoire.ca/mysql/choose-something-else:

   "Considering MySQL? Use something else. Already on MySQL? Migrate.
   For every successful project built on MySQL, you could uncover a
   history of time wasted mitigating MySQL's inadequacies, masked by a
   hard-won, but meaningless, sense of accomplishment over the effort
   spent making MySQL behave."

PostgreSQL is a much better option and a sounder investment for the future.

Chris

On 16-04-15 02:23 PM, Ben Cail wrote:
I would suggest looking at postgresql . It 
may not be as widely used as mysql, but it is used a lot, and it's a 
high-quality piece of database software. It's also free.


-Ben

On 04/15/2016 02:18 PM, Matt Sherman wrote:

Hi all,

I am looking to pick the group brain as to what might be the most useful
database software for a digital project I am collaborating on.  We are
working on converting an annotated bibliography to a searchable database.
While I have the data in a few structured formats, we need to figure out
now what to actually put it in so that it can be queried.  My default line
of thinking is to try a MySQL since it is free and used ubiquitously
online, but I wanted to see if there were any other database or software
systems that we should also consider before investing a lot of time in one
approach.  Any advice and suggestions would be appreciated.

Matt Sherman


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Scancella, John
I would definitely pick postgres over mysql. It has all of the same features 
and more, plus it is easier to use (in my own opinion).

But before I even pick a database I would consider these:
What are the speed requirements? 
How do you plan on doing searching? 
How much data? 
Does it need to be redundant? 
What about clustering? 
Geographically diverse for faster local retrieval? 
What languages or other technologies do you plan on interfacing with?

and then, based on those answers more questions will arise.
Best of luck!

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ben Cail 
[benjamin_c...@brown.edu]
Sent: Friday, April 15, 2016 2:23 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Good Database Software for a Digital Project?

I would suggest looking at postgresql . It
may not be as widely used as mysql, but it is used a lot, and it's a
high-quality piece of database software. It's also free.

-Ben

On 04/15/2016 02:18 PM, Matt Sherman wrote:
> Hi all,
>
> I am looking to pick the group brain as to what might be the most useful
> database software for a digital project I am collaborating on.  We are
> working on converting an annotated bibliography to a searchable database.
> While I have the data in a few structured formats, we need to figure out
> now what to actually put it in so that it can be queried.  My default line
> of thinking is to try a MySQL since it is free and used ubiquitously
> online, but I wanted to see if there were any other database or software
> systems that we should also consider before investing a lot of time in one
> approach.  Any advice and suggestions would be appreciated.
>
> Matt Sherman


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Matt Sherman
Well, this is a side project with just 2 of us working on it, and I have
the tech skills so it is more of what I need to learn to make it work.

On Fri, Apr 15, 2016 at 2:22 PM, Ethan Gruber  wrote:

> There are countless ways to approach the problem, but I suggest beginning
> with tools that are within the area of expertise of your staff. Mapping
> disparate structured formats into a single Solr instance for fast search
> and retrieval is one possibility.
>
> On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman 
> wrote:
>
> > Hi all,
> >
> > I am looking to pick the group brain as to what might be the most useful
> > database software for a digital project I am collaborating on.  We are
> > working on converting an annotated bibliography to a searchable database.
> > While I have the data in a few structured formats, we need to figure out
> > now what to actually put it in so that it can be queried.  My default
> line
> > of thinking is to try a MySQL since it is free and used ubiquitously
> > online, but I wanted to see if there were any other database or software
> > systems that we should also consider before investing a lot of time in
> one
> > approach.  Any advice and suggestions would be appreciated.
> >
> > Matt Sherman
> >
>


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Miles Fidelman

It might be worth checking out some existing solutions to the problem.

Zotero is a pretty good tool for collecting, organizing, and sharing 
citation data - open source, works through a browser plug-in, 
collaboration capabilities (though the server code is a bit harder to 
get one's hands on).


For the database, you might also look at NoSQL options.  eXist 
(http://exist-db.org) is an open source XML database that's pretty 
extensively used for cataloguing type applications.  CouchDB is also 
kind of interesting, and easy to use.


Miles Fidelman




On 4/15/16 2:22 PM, Ethan Gruber wrote:

There are countless ways to approach the problem, but I suggest beginning
with tools that are within the area of expertise of your staff. Mapping
disparate structured formats into a single Solr instance for fast search
and retrieval is one possibility.

On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman 
wrote:


Hi all,

I am looking to pick the group brain as to what might be the most useful
database software for a digital project I am collaborating on.  We are
working on converting an annotated bibliography to a searchable database.
While I have the data in a few structured formats, we need to figure out
now what to actually put it in so that it can be queried.  My default line
of thinking is to try a MySQL since it is free and used ubiquitously
online, but I wanted to see if there were any other database or software
systems that we should also consider before investing a lot of time in one
approach.  Any advice and suggestions would be appreciated.

Matt Sherman



--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Ben Cail
I would suggest looking at postgresql . It 
may not be as widely used as mysql, but it is used a lot, and it's a 
high-quality piece of database software. It's also free.


-Ben

On 04/15/2016 02:18 PM, Matt Sherman wrote:

Hi all,

I am looking to pick the group brain as to what might be the most useful
database software for a digital project I am collaborating on.  We are
working on converting an annotated bibliography to a searchable database.
While I have the data in a few structured formats, we need to figure out
now what to actually put it in so that it can be queried.  My default line
of thinking is to try a MySQL since it is free and used ubiquitously
online, but I wanted to see if there were any other database or software
systems that we should also consider before investing a lot of time in one
approach.  Any advice and suggestions would be appreciated.

Matt Sherman


Re: [CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Ethan Gruber
There are countless ways to approach the problem, but I suggest beginning
with tools that are within the area of expertise of your staff. Mapping
disparate structured formats into a single Solr instance for fast search
and retrieval is one possibility.

On Fri, Apr 15, 2016 at 2:18 PM, Matt Sherman 
wrote:

> Hi all,
>
> I am looking to pick the group brain as to what might be the most useful
> database software for a digital project I am collaborating on.  We are
> working on converting an annotated bibliography to a searchable database.
> While I have the data in a few structured formats, we need to figure out
> now what to actually put it in so that it can be queried.  My default line
> of thinking is to try a MySQL since it is free and used ubiquitously
> online, but I wanted to see if there were any other database or software
> systems that we should also consider before investing a lot of time in one
> approach.  Any advice and suggestions would be appreciated.
>
> Matt Sherman
>


[CODE4LIB] Good Database Software for a Digital Project?

2016-04-15 Thread Matt Sherman
Hi all,

I am looking to pick the group brain as to what might be the most useful
database software for a digital project I am collaborating on.  We are
working on converting an annotated bibliography to a searchable database.
While I have the data in a few structured formats, we need to figure out
now what to actually put it in so that it can be queried.  My default line
of thinking is to try a MySQL since it is free and used ubiquitously
online, but I wanted to see if there were any other database or software
systems that we should also consider before investing a lot of time in one
approach.  Any advice and suggestions would be appreciated.

Matt Sherman


Re: [CODE4LIB] authority work with isni

2016-04-15 Thread Stuart A. Yeates
Wikidata has lots of authority control info and crosswalks many of them,
primarily based on en.wiki edits. See the list at
https://en.wikipedia.org/wiki/Template:Authority_control /
https://en.wikipedia.org/wiki/Module:Authority_control

Wikidata can be queried or batch downloaded.
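
A minimal sketch of querying it from Python via the public SPARQL endpoint
(the property numbers, P213 for ISNI and P214 for VIAF, and whether the
example person has an item at all, should be double-checked):

  import requests  # third-party: pip install requests

  QUERY = """
  SELECT ?person ?isni ?viaf WHERE {
    ?person rdfs:label "Eric Lease Morgan"@en .
    OPTIONAL { ?person wdt:P213 ?isni . }   # P213 = ISNI (verify)
    OPTIONAL { ?person wdt:P214 ?viaf . }   # P214 = VIAF ID (verify)
  }
  """

  resp = requests.get("https://query.wikidata.org/sparql",
                      params={"query": QUERY, "format": "json"},
                      headers={"User-Agent": "authority-lookup-sketch/0.1"})
  resp.raise_for_status()
  for row in resp.json()["results"]["bindings"]:
      print(row.get("isni", {}).get("value"), row.get("viaf", {}).get("value"))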

cheers
stuart

--
...let us be heard from red core to black sky

On Sat, Apr 16, 2016 at 3:57 AM, Kyle Banerjee 
wrote:

> On Fri, Apr 15, 2016 at 2:16 AM, Eric Lease Morgan  wrote:
>
> > ...
> > My questions are:
> >
> >   * What remote authority databases are available programmatically? I
> > already know of one from the Library of Congress, VIAF, and probably
> > WorldCat Identities. Does ISNI support some sort of API, and if so, where
> > is some documentation?
> >
>
> Depends on what you have in mind. For databases similar to your example, I
> believe ORCID has an API. GNIS, ULAN, CONA, and TGN might be interesting to
> you, but there are tons more, particularly if you add subject authorities
> (e.g. AAT, MeSH). The Getty stuff is all available as LoD.
>
>   * I believe the Library Of Congress, VIAF, and probably WorldCat
> > Identities all support linked data. Does ISNI, and if so, then how is it
> > implemented and can you point me to documentation?
> >
> >   * When it comes to updating the local (MARC) authority records, how do
> > you suggest the updates happen? More specifically, what types of values
> do
> > you suggest I insert into what specific (MARC) fields/subfields? Some
> > people advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024
> > subfields 2 and a. Inquiring minds would like to know.
> >
>
> Implementation would be specific to your system and those you wish to
> interact with. The MARC record is used to represent/transmit data, but it
> doesn't actually exist in the sense that systems use it internally as is.
>
> Having said that, I think the logical place to put control numbers from
> different schema is in 024 because that field allows you to differentiate
> the source so it doesn't matter if control numbers overlap
>
> kyle
>


[CODE4LIB] Job: Library Technology Officer at Seattle Public Library

2016-04-15 Thread jobs
Library Technology Officer 
Seattle Public Library
Seattle

The Seattle Public Library is Seattle's center of
information and knowledge and one of the most popular and valued services in
the city. Library staff members are highly regarded by the public for their
knowledge, quality of service, and caring. Staff members are committed to the
Library's organizational values of respect, partnership, engagement,
diversity, transparency, and recognition. In particular, they demonstrate
respect, engage in partnerships, and are transparent in their communications
and intentions. A strategic priority of the Library is to foster an internal
culture of innovation which focuses on creativity, engagement, learning, and
staff development. If you share those values and meet the qualifications, the
Library invites you to apply for this position.

  
The Library Technology Officer (LTO) reports to the Director of Library
Programs and Services (LPSD) and plays a lead role in developing and
maintaining a leading-edge technology infrastructure to ensure the success of
the Library's vision and strategies. In consultation with the Director of
LPSD, the LTO leads and participates in short-term and long-range strategic
planning that addresses current and emerging service needs and develops and
implements effective technological responses to those needs. The LTO directs,
supervises and evaluates the activities and performance of Information
Technology Division staff, vendors and project or consultant staff assigned to
information technology projects or activities.

  
  
**Job Responsibilities:**  
  
_Essential Functions_

  * Assist the Director of LPSD in short-term and long-range strategic planning 
for technology, including capital projects, service innovation and implementing 
and managing changes in the Library's technology environment.
  * Assess, develop and maintain a sustainable technological infrastructure 
which ensures successful customer service levels throughout the organization.
  * Evaluate and identify the Library's current and emerging technology needs 
and work closely with the Library Leadership Team, designated committees and 
staff to respond to these needs.
  * Understand and embrace public service goals and strategies and be able to 
determine and articulate how technology can be effectively utilized to achieve 
those goals and strategies.
  * Serve as a technology consultant and liaison with management of functional 
departments on all technology matters to answer questions, evaluate needs, 
monitor service and interdepartmental impacts; advise on problems, support 
technology education and understanding and ensure smooth conversion from 
existing automated systems to new and improved systems.
  * Maintain knowledge of leading edge technological advances in library 
systems and general business operations to assure that new developments are 
incorporated in future systems for the Library.
  * Develop a culture of innovation and a plan and budget to support research 
and development around Library services and integration of various Library 
business platforms in a cross divisional effort throughout the organization.
  * Ensure efficient, cost-effective and timely delivery of technology services 
by planning, organizing, administering and evaluating operations, staff, 
budgets, vendor and consultant contracts and other resources; coordinate the 
design and implementation of network, telecommunications, desktop and library 
systems.
  * Direct, supervise and evaluate the work of Information Technology Division 
staff; promote and ensure that staff perform work effectively in a 
collaborative team environment; manage staff performance to achieve the 
Library's technology vision, goals and strategic directions.
  * Develop annual budget proposals to support information technology needs of 
the Library. Represent the Library to and work collaboratively with the City of 
Seattle Department of Information Technology.
  * Administer and control the information systems expense budget to contribute 
to a cost-effective operation, demonstrating sound financial management skills.
  
  
**Qualifications:**  
  
The ideal candidate will be innovative, flexible, responsive, collaborative;
and self-directed; an honest, open communicator who inspires trust; and one
who seeks and sparks creative contributions from others.

  
Additionally, qualified candidates will possess:

  * A relevant Bachelor's degree and a graduate degree in computer science, 
business, public administration or a related field, or equivalent experience. A 
Master's degree in Library Science or Library Information Science is preferred.
  * Five to seven years of increasingly responsible managerial experience in a 
medium to large sophisticated information system environment is preferred. 
Information Technology experience in a library setting is preferred.
  * Advanced knowledge of information technology, systems, technology products, 
and 

Re: [CODE4LIB] authority work with isni

2016-04-15 Thread Kyle Banerjee
On Fri, Apr 15, 2016 at 2:16 AM, Eric Lease Morgan  wrote:

> ...
> My questions are:
>
>   * What remote authority databases are available programmatically? I
> already know of one from the Library of Congress, VIAF, and probably
> WorldCat Identities. Does ISNI support some sort of API, and if so, where
> is some documentation?
>

Depends on what you have in mind. For databases similar to your example, I
believe ORCID has an API. GNIS, ULAN, CONA, and TGN might be interesting to
you, but there are tons more, particularly if you add subject authorities
(e.g. AAT, MeSH). The Getty stuff is all available as LoD.

  * I believe the Library Of Congress, VIAF, and probably WorldCat
> Identities all support linked data. Does ISNI, and if so, then how is it
> implemented and can you point me to documentation?
>
>   * When it comes to updating the local (MARC) authority records, how do
> you suggest the updates happen? More specifically, what types of values do
> you suggest I insert into what specific (MARC) fields/subfields? Some
> people advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024
> subfields 2 and a. Inquiring minds would like to know.
>

Implementation would be specific to your system and those you wish to
interact with. The MARC record is used to represent/transmit data, but it
doesn't actually exist in the sense that systems use it internally as is.

Having said that, I think the logical place to put control numbers from
different schema is in 024 because that field allows you to differentiate
the source so it doesn't matter if control numbers overlap
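
A rough sketch of what that could look like (this assumes the classic pymarc
flat-subfield API, i.e. pymarc before version 5, and the lookup helper and
file names are placeholders, not working code):

  from pymarc import MARCReader, Field  # third-party: pip install "pymarc<5"

  def lookup_isni(record):
      # Hypothetical stub: query ISNI/VIAF for this heading, return an ISNI or None.
      return None

  with open("authorities.mrc", "rb") as infile, open("enriched.mrc", "wb") as outfile:
      for record in MARCReader(infile):
          isni = lookup_isni(record)
          if isni:
              record.add_field(Field(
                  tag="024",
                  indicators=["7", " "],            # first indicator 7: source given in $2
                  subfields=["a", isni, "2", "isni"]))
          outfile.write(record.as_marc())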

kyle


Re: [CODE4LIB] authority work with isni

2016-04-15 Thread LeVan,Ralph
If you're comfortable using the VIAF API, then you'll find ISNI links in VIAF 
records.
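
For example (a sketch; the justlinks.json view and its response keys are
assumptions drawn from VIAF's public API, so check the current documentation):

  import requests  # third-party: pip install requests

  viaf_id = "56843669"  # a VIAF ID mentioned elsewhere in this thread
  resp = requests.get("https://viaf.org/viaf/%s/justlinks.json" % viaf_id)
  resp.raise_for_status()
  links = resp.json()
  print(links.get("ISNI"))  # usually a list of ISNI identifiers, if any are linked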

Ralph

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Friday, April 15, 2016 5:17 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: authority work with isni

I am thinking about doing some authority work with content from ISNI, and I 
have a few questions about the resource.

As you may or may not know, ISNI is a sort of authority database. [1] One can 
search for an identity in ISNI, identify a person of interest, get a key, 
transform the key into a URI, and use the URI to get back both human-readable 
and machine readable data about the person. For example, the following URIs 
return the same content in different forms:

  * human-readable - http://isni.org/isni/35046923
  * XML - http://isni.org/isni/35046923.xml

I discovered the former URI through a tiny bit of reading. [2] And I discovered 
the latter URI through a simple guess. What other URIs exist?
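
For what it is worth, pulling down and inspecting the machine-readable form is
only a few lines of Python (a sketch; the element names inside ISNI's XML
would still need to be worked out):

  import requests               # third-party: pip install requests
  import xml.etree.ElementTree as ET

  resp = requests.get("http://isni.org/isni/35046923.xml")
  resp.raise_for_status()
  root = ET.fromstring(resp.content)
  for element in root.iter():
      print(element.tag, (element.text or "").strip()[:60])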

When it comes to the authority work, my goal is to enhance authority records; 
to more thoroughly associate global identifiers with named entities in a local 
authority database. Once this goal is accomplished, the library catalog 
experience can be enhanced, and the door is opened for supporting linked data 
initiatives. In order to accomplish the goal, I believe I can:

  1. get a list of authority records
  2. search for name in a global authority database (like VIAF or ISNI)
  3. if found, then update local authority record accordingly
  4. go to Step #2 for all records
  5. done

My questions are:

  * What remote authority databases are available programmatically? I already 
know of one from the Library of Congress, VIAF, and probably WorldCat 
Identities. Does ISNI support some sort of API, and if so, where is some 
documentation?

  * I believe the Library Of Congress, VIAF, and probably WorldCat Identities 
all support linked data. Does ISNI, and if so, then how is it implemented and 
can you point me to documentation?

  * When it comes to updating the local (MARC) authority records, how do you 
suggest the updates happen? More specifically, what types of values do you 
suggest I insert into what specific (MARC) fields/subfields? Some people 
advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024 subfields 2 
and a. Inquiring minds would like to know.

Fun with authorities!? And, “What’s in a name anyway?”

[1] ISNI - http://isni.org
[2] some documentation - http://isni.org/how-isni-works

—
Eric Lease Morgan
Lost In Rome


[CODE4LIB] Idaho State University Oboler Library position opening

2016-04-15 Thread Tania Harden
Library Information Systems Computer Analyst, Oboler Library (5035)
Posting Number: req336
12 Month Full-Time
Library Admin
Pocatello - Main


Primary Purpose

Administer the Library’s supported information systems. Work in a collegial
environment to implement emerging information technologies specific to
library systems and resources.

Key Responsibilities

A. Administers and coordinates all activities related to EZProxy
authentication software for remote access to library’s licensed electronic
resources.  These activities include:
B. Installs and maintains Envisionware’s PC Reservation software for
controlled access to library’s public computers and at remote library
sites.
C. Administers and coordinates all activities related to Windows 2008/12
server and Windows 7/10 public computers.
D. Administers and coordinates all activities related to the LOCKSS server,
which provides perpetual journal access.
E. Administers and coordinates all activities related to DeepFreeze and LPT
One software.
F. Administers and coordinates all activities related to LAMP server which
houses the Library’s Wikis and Blogs.
G. Administers and coordinates all activities related to installation,
maintenance and troubleshooting of computer hardware and software in the
library.
H. Assists in maintaining Voyager, the Library’s Integrated Library System.
I. Performs one-on-one staff user training, as needed.
J. Participates in Reference Desk duty.
K. Participates in professional activities, including membership on library
committees and task forces, membership in professional groups, and
attendance at meetings.
L. Other duties as assigned by supervisor.

Minimum Qualifications

•Bachelor's degree
•Ability and desire to learn academic library functions, routines, and
resources
•Basic understanding of network infrastructure
•Basic understanding of desktop computers, including installation of
software
•Basic understanding of the client/server environment
•Basic understanding of mobile technology
•Basic familiarity with the cloud computing environment
•Good understanding of Microsoft Office Access or SQL
•Proficient in Microsoft Office Word and Excel
•Demonstrated knowledge of current and evolving Web
applications/development
•Demonstrated ability to work independently and collaboratively in a
team based environment
•Demonstrated ability to think critically, analyze problems, develop
and implement creative solutions
•Good organizational, interpersonal, oral and written communication
skills
•A desire to explore new technologies relevant to library services
•A desire to implement new security procedures as new technologies are
incorporated

Preferred Qualifications

•ALA-accredited Master's degree in Library Science or equivalent
•Basic understanding of library routines, functions, and/or library
database structure
•Experience developing and supporting web-based services
•Experience working in a UNIX environment, Solaris preferred
•Experience working with LAMP servers
•Basic understanding of integrated library systems
•Basic understanding of library standards (e.g., MARC, Dublin Core)
•Experience working with XML, HTML, XSL, CSS, JavaScript

Please submit the following documents with your application:
Resume, Cover Letter, and a list of 3 professional references
Priority consideration will be given to applications received by May 16,
2016. However, the position will remain open until filled.
Salary will be $37,250 annually commensurate with education and experience.
Includes a competitive benefits package.

*Tania Harden*
Digital Initiatives Librarian

Eli M. Oboler Library
Idaho State University
850 S. 9th Ave., Stop 8089
Pocatello, ID  83209

hardt...@isu.edu
208-282-1678


Re: [CODE4LIB] authority work with isni

2016-04-15 Thread Jing Wang
Hi, Eric,

ISNI does offer an API; documentation is available at
http://www.isni.org/content/documents-related-database-enquiry

The OCLC Research task group on representing organizations in ISNI will
release its report soon.  Supporting Linked Data is one of the issues we
discussed. Stay tuned for the upcoming report and webinar.



Jing





> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease Morgan
> Sent: Friday, April 15, 2016 5:17 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] authority work with isni
>
> I am thinking about doing some authority work with content from ISNI, and I
> have a few questions about the resource.
>
> As you may or may not know, ISNI is a sort of authority database. [1] One can
> search for an identity in ISNI, identify a person of interest, get a key,
> transform the key into a URI, and use the URI to get back both human-readable
> and machine-readable data about the person. For example, the following URIs
> return the same content in different forms:
>
>   * human-readable - http://isni.org/isni/35046923
>   * XML - http://isni.org/isni/35046923.xml
>
> I discovered the former URI through a tiny bit of reading. [2] And I
> discovered the latter URI through a simple guess. What other URIs exist?
>
> When it comes to the authority work, my goal is to enhance authority records;
> to more thoroughly associate global identifiers with named entities in a
> local authority database. Once this goal is accomplished, the library catalog
> experience can be enhanced, and the door is opened for supporting linked data
> initiatives. In order to accomplish the goal, I believe I can:
>
>   1. get a list of authority records
>   2. search for name in a global authority database (like VIAF or ISNI)
>   3. if found, then update local authority record accordingly
>   4. go to Step #2 for all records
>   5. done
>
> My questions are:
>
>   * What remote authority databases are available programmatically? I already
> know of one from the Library of Congress, VIAF, and probably WorldCat
> Identities. Does ISNI support some sort of API, and if so, where is some
> documentation?
>
>   * I believe the Library Of Congress, VIAF, and probably WorldCat Identities
> all support linked data. Does ISNI, and if so, then how is it implemented and
> can you point me to documentation?
>
>   * When it comes to updating the local (MARC) authority records, how do you
> suggest the updates happen? More specifically, what types of values do you
> suggest I insert into what specific (MARC) fields/subfields? Some people
> advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024 subfields 2
> and a. Inquiring minds would like to know.
>
> Fun with authorities!? And, “What’s in a name anyway?”
>
> [1] ISNI - http://isni.org
> [2] some documentation - http://isni.org/how-isni-works
>
> —
> Eric Lease Morgan
> Lost In Rome


[CODE4LIB] Job: Information Services Archivist, New York State Archives

2016-04-15 Thread Michelle Arpey
The New York State Archives is looking for an Archivist with a commitment to 
improving access to historical records and two years of professional 
experience.  The person in this position will:

  *   Participate in the evaluation, implementation, and integration of 
standards-based public access tools for archival records, including an 
EAD-based finding aid catalog, Digital Collections, and a name index;
  *   Develop web content and features including tools for using historical 
records in the classroom;
  *   Support the development of the State Archives electronic records program;
  *   Support the integration of records management systems with archival 
management systems;
  *   Advise on the technical implementation of professional standards; and
  *   Work with State Archives staff and vendors to identify and implement 
web-based solutions.
This is a provisional appointment leading to a permanent position.  For 
additional information, including minimum and preferred qualifications see: 
http://www.oms.nysed.gov/hr/flyers/OCE_960_26221.htm







Re: [CODE4LIB] authority work with isni

2016-04-15 Thread McDonald, Stephen
I don't have any useful answers for most of your questions.  But you might be 
interested to know that ISNI is working with Library of Congress to get ISNI 
identifiers into LC name authority records.  When this gets implemented, any 
existing ISNI identifiers in the National Authority File will be removed, and 
new ISNI identifiers based on VIAF matching will be inserted.  Thereafter, the 
NAF will be periodically updated with new ISNI identifiers.

It is not clear to me how soon this might be implemented.

Steve McDonald
steve.mcdon...@tufts.edu


> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Eric Lease Morgan
> Sent: Friday, April 15, 2016 5:17 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] authority work with isni
> 
> I am thinking about doing some authority work with content from ISNI, and I
> have a few questions about the resource.
> 
> As you may or may not know, ISNI is a sort of authority database. [1] One can
> search for an identity in ISNI, identify a person of interest, get a key,
> transform the key into a URI, and use the URI to get back both human-
> readable and machine readable data about the person. For example, the
> following URIs return the same content in different forms:
> 
>   * human-readable - http://isni.org/isni/35046923
>   * XML - http://isni.org/isni/35046923.xml
> 
> I discovered the former URI through a tiny bit of reading. [2] And I 
> discovered
> the latter URI through a simple guess. What other URIs exist?
> 
> When it comes to the authority work, my goal is to enhance authority
> records; to more thoroughly associate global identifiers with named entities
> in a local authority database. Once this goal is accomplished, the library
> catalog experience can be enhanced, and the door is opened for supporting
> linked data initiatives. In order to accomplish the goal, I believe I can:
> 
>   1. get a list of authority records
>   2. search for name in a global authority database (like VIAF or ISNI)
>   3. if found, then update local authority record accordingly
>   4. go to Step #2 for all records
>   5. done
> 
> My questions are:
> 
>   * What remote authority databases are available programmatically? I
> already know of one from the Library of Congress, VIAF, and probably
> WorldCat Identities. Does ISNI support some sort of API, and if so, where is
> some documentation?
> 
>   * I believe the Library Of Congress, VIAF, and probably WorldCat Identities
> all support linked data. Does ISNI, and if so, then how is it implemented and
> can you point me to documentation?
> 
>   * When it comes to updating the local (MARC) authority records, how do
> you suggest the updates happen? More specifically, what types of values do
> you suggest I insert into what specific (MARC) fields/subfields? Some people
> advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024 subfields 2
> and a. Inquiring minds would like to know.
> 
> Fun with authorities!? And, “What’s in a name anyway?”
> 
> [1] ISNI - http://isni.org
> [2] some documentation - http://isni.org/how-isni-works
> 
> —
> Eric Lease Morgan
> Lost In Rome


[CODE4LIB] authority work with isni

2016-04-15 Thread Eric Lease Morgan
I am thinking about doing some authority work with content from ISNI, and I 
have a few questions about the resource.

As you may or may not know, ISNI is a sort of authority database. [1] One can 
search for an identity in ISNI, identify a person of interest, get a key, 
transform the key into a URI, and use the URI to get back both human-readable 
and machine readable data about the person. For example, the following URIs 
return the same content in different forms:

  * human-readable - http://isni.org/isni/35046923
  * XML - http://isni.org/isni/35046923.xml

I discovered the former URI through a tiny bit of reading. [2] And I discovered 
the latter URI through a simple guess. What other URIs exist?
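
To make that concrete, here is a minimal sketch of pulling back the XML form of 
a record using nothing more than the URI pattern above. The requests library 
and the element handling are my own illustration; only the two URIs themselves 
come from the documentation, and any other response formats would be guesses.

# Minimal sketch: fetch the XML form of an ISNI record via the public
# URI pattern shown above. Assumes the requests library is installed;
# the element handling below is exploratory, not tied to any schema.
import requests
import xml.etree.ElementTree as ET

isni_key = "35046923"  # the key from the example URIs above
url = "http://isni.org/isni/{}.xml".format(isni_key)

response = requests.get(url, timeout=30)
response.raise_for_status()

# Walk the XML and print every element that carries text, so you can
# see what the record contains before deciding which pieces to keep.
root = ET.fromstring(response.content)
for element in root.iter():
    if element.text and element.text.strip():
        print(element.tag, element.text.strip())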

When it comes to the authority work, my goal is to enhance authority records; 
to more thoroughly associate global identifiers with named entities in a local 
authority database. Once this goal is accomplished, the library catalog 
experience can be enhanced, and the door is opened for supporting linked data 
initiatives. In order to accomplish the goal, I believe I can:

  1. get a list of authority records
  2. search for name in a global authority database (like VIAF or ISNI)
  3. if found, then update local authority record accordingly
  4. go to Step #2 for all records
  5. done
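
In code, that loop might look something like the sketch below. Everything in 
it (the record dictionaries, the fake lookup table, the helper function) is a 
made-up placeholder standing in for whatever the local authority file and the 
remote search API actually look like; the point is only the shape of steps 1-4.

# Rough, hypothetical sketch of steps 1-4. Nothing here is a real API:
# search_isni() stands in for a query against ISNI or VIAF, and the
# record dictionaries stand in for however local headings are stored.
def search_isni(name, lookup):
    """Stand-in for a real remote search; returns an identifier or None."""
    return lookup.get(name)

def reconcile(records, lookup):
    for record in records:                              # 1. walk the authority records
        match = search_isni(record["heading"], lookup)  # 2. search the remote database
        if match:                                       # 3. if found, update the record
            record["isni"] = match
    return records                                      # 4./5. every record visited; done

if __name__ == "__main__":
    # "35046923" is simply the example key from this post, reused as fake data.
    fake_isni_lookup = {"Morgan, Eric Lease": "35046923"}
    local_records = [{"heading": "Morgan, Eric Lease"}, {"heading": "Nobody, A."}]
    for r in reconcile(local_records, fake_isni_lookup):
        print(r)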

My questions are:

  * What remote authority databases are available programmatically? I already 
know of one from the Library of Congress, VIAF, and probably WorldCat 
Identities. Does ISNI support some sort of API, and if so, where is some 
documentation?

  * I believe the Library Of Congress, VIAF, and probably WorldCat Identities 
all support linked data. Does ISNI, and if so, then how is it implemented and 
can you point me to documentation?

  * When it comes to updating the local (MARC) authority records, how do you 
suggest the updates happen? More specifically, what types of values do you 
suggest I insert into what specific (MARC) fields/subfields? Some people 
advocate $0 of 1xx, 6xx, and 7xx fields. Other people suggest 024 subfields 2 
and a. Inquiring minds would like to know.
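
For illustration only, here is how the two options might be written with 
pymarc, assuming the pymarc 4.x convention of passing subfields as a flat list 
of code/value strings (pymarc 5 expects Subfield objects instead). The heading 
and identifier are made up, and which tagging convention is preferable is 
exactly the open question.

# Hypothetical sketch of the two tagging options, not a recommendation.
from pymarc import Record, Field

record = Record()

# Option 1: put the identifier URI in $0 of the 1xx heading field.
record.add_field(Field(
    tag="100",
    indicators=["1", " "],
    subfields=["a", "Morgan, Eric Lease",
               "0", "http://isni.org/isni/35046923"]))

# Option 2: record the bare identifier in an 024 field, with $2 naming
# the identifier scheme.
record.add_field(Field(
    tag="024",
    indicators=["7", " "],
    subfields=["a", "35046923",
               "2", "isni"]))

print(record)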

Fun with authorities!? And, “What’s in a name anyway?”

[1] ISNI - http://isni.org
[2] some documentation - http://isni.org/how-isni-works

—
Eric Lease Morgan
Lost In Rome