[CODE4LIB] Help wanted -- PREMIS in METS toolbox update

2016-06-29 Thread Peter McKinney
My apologies if this hits your mailbox more than once.

PREMIS* needs help! One of our community tools needs an upgrade, and we need 
developers to help us carry out the work.

The PREMIS in METS toolbox [http://pim.fcla.edu/] is a set of open source tools 
that can validate and generate PREMIS in METS documents. The toolbox needs to 
be updated to reflect changes that have been made in Version 3 of PREMIS and we 
need technical help to undertake this work.

PiM uses open source tools to describe objects being preserved, convert between 
PREMIS and METS in both directions, and validate documents against the XML 
schemas and the PREMIS and METS guidelines. Tools/programs used include DROID, 
JHOVE, XSLT, Schematron, and Ruby. The source code is available on GitHub 
[http://github.com/LibraryOfCongress/pimtoolbox].
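
For a rough sense of what the validation step automates, here is a minimal 
sketch (not code from the toolbox itself) that checks a PREMIS-in-METS document 
against the METS XML schema by shelling out to xmllint; the file names are 
placeholders, and the Schematron checks for the PREMIS and METS guidelines are 
omitted:

    // Minimal sketch: XML schema validation of a PREMIS-in-METS document.
    // File names are placeholders; the toolbox itself also applies Schematron
    // rules, which this example does not cover.
    import { execFile } from "node:child_process";

    execFile(
      "xmllint",
      ["--noout", "--schema", "mets.xsd", "premis-in-mets.xml"],
      (err, _stdout, stderr) => {
        console.log(err ? `Invalid:\n${stderr}` : "Valid against mets.xsd");
      },
    );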

If you think you have some time and the skills to help us update the toolbox or 
have any questions, please do get in touch with me 
(peter.mckin...@dia.govt.nz). PREMIS is a 
community-driven activity that only exists thanks to contributions from the 
community.

Best wishes,
Pete

* The PREMIS Data Dictionary for Preservation Metadata is the international 
standard for metadata to support the preservation of digital objects and ensure 
their long-term usability. Developed by an international team of experts, 
PREMIS is implemented in digital preservation projects around the world, and 
support for PREMIS is incorporated into a number of commercial and open-source 
digital preservation tools and systems. See 
http://www.loc.gov/standards/premis/.



Peter McKinney | Digital Preservation Policy Analyst | Information and 
Knowledge Services
National Library of New Zealand Te Puna Mātauranga o Aotearoa
Direct Dial: +64 4 462 3931 | Extn: 3931
Cnr Molesworth and Aitken Streets | PO Box 1467, Wellington 6140 |
http://digitalpreservation.natlib.govt.nz/

I work on Mondays, Wednesdays and Thursdays.

The National Library is part of the Department of Internal Affairs


[CODE4LIB] Job: Sr. Digital Library Software Engineer - LD4L Labs at Harvard University

2016-06-29 Thread jobs
Sr. Digital Library Software Engineer - LD4L Labs
Harvard University
Cambridge, MA

Come join the Harvard team working on pushing the envelope of linked data for
libraries!

  
Harvard Library is a participant in an Andrew W. Mellon Foundation grant -
Linked Data for Libraries: LD4L Labs. The project is a collaboration of
Cornell University, Harvard University, The University of Iowa, and Stanford
University to advance the use and usefulness of linked data in libraries.
Project team members will create and assemble metadata tools, ontologies,
services, and approaches that create and use linked data to improve the
discovery, use, and understanding of scholarly information resources. The goal
is to pilot tools and services and to create solutions that can be implemented
in production at research libraries within the next three to five years.

  
This project is a great opportunity to collaborate with outstanding colleagues
at Harvard and peer institutions on the leading edge of the semantic web,
developing tools and technologies for the next generation of library metadata,
exposing Harvard's library resources as linked open data and linking that data
to related resources on the web. You will be a part of
Harvard University IT Library Technology Services (LTS), which develops
digital library software for the Harvard Library, including a digital
repository service, web delivery applications for images, audio, video, and
books, and software for the management of and public access to visual material,
archival descriptions, and geospatial data sets.

  
The Harvard portion of the project includes deploying a pilot linked data
conversion and triple store hosting environment for BIBFRAME linked data, an
evolving standard being stewarded by the Library of Congress. As part of a
team of LTS developers, Library metadata specialists, and dev/ops staff, you
will work on projects to further LTS goals, and play a lead role in piloting
linked data hosting, conversion, editing, publication, and visualization of
Harvard Geospatial Library and Harvard Film Archive metadata. Work may include
enhancing and deploying triple stores, linked data identity endpoints, the
open source Vitro linked data creation and editing environment, and data
format conversions and visualizations that will leverage linked data. You will
work closely with skilled staff at Harvard as well as collaborate with leading
linked data staff from Cornell, Stanford and Iowa to build metadata management
and linked data services for the library.
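
As a rough illustration of the kind of work involved (a sketch only, not code 
or infrastructure from the project), querying a triple store's SPARQL endpoint 
for BIBFRAME works and titles over HTTP might look like this; the endpoint URL 
is a placeholder:

    // Minimal sketch: SELECT query against a SPARQL endpoint (placeholder URL)
    // using the standard SPARQL Protocol and the JSON results format.
    const ENDPOINT = "https://triplestore.example.edu/sparql";

    const QUERY = `
      PREFIX bf: <http://id.loc.gov/ontologies/bibframe/>
      SELECT ?work ?title WHERE {
        ?work a bf:Work ;
              bf:title/bf:mainTitle ?title .
      }
      LIMIT 10
    `;

    async function listTitles(): Promise<void> {
      const res = await fetch(`${ENDPOINT}?query=${encodeURIComponent(QUERY)}`, {
        headers: { Accept: "application/sparql-results+json" },
      });
      const data = await res.json();
      for (const b of data.results.bindings) {
        console.log(`${b.work.value}  ${b.title.value}`);
      }
    }

    listTitles().catch(console.error);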

  
This is a term position funded through March 31, 2018.

  
Basic Qualifications

- Bachelor's degree with 5+ years of software engineering experience, or 
equivalent  
- Familiarity with linked data principles and technologies, including XML, RDF, 
and ontologies  
- Full-stack web application development experience, including server-side 
development, web services, Java, and JavaScript  
  
Additional Qualifications

- Self-motivated, with evidence of ability to assess, analyze, plan, and solve 
problems creatively and collaboratively in a complex, rapidly changing 
environment  
- Excellent English oral and written communication skills  
- Proven ability to communicate with both technical and non-technical staff and 
to capture user needs  
- Proven ability to accomplish goals in a timely way with minimal supervision  
- Knowledge of academic library organization and information technology goals  
  
Experience with the following is desirable:

- Triple stores and SPARQL  
- Library metadata schemas such as MARC, METS, PREMIS, MODS, BIBFRAME, 
Schema.org  
- XML Schema and/or OWL  
- PHP, Ruby on Rails, or Python (Omeka/PHP or Blacklight/Rails a plus)  
- Graph visualization for non-technical users  
- Developing and consuming RESTful APIs  
- Test-driven development practices  
- Agile development practices  
- User story development and library client engagement  
- Integrating and/or contributing to open source software projects  
  
Please click the following link to view the job description.

  
39789BR - Senior Digital Library Software Engineer

https://jobs.brassring.com/tgwebhost/jobdetails.aspx?jobId=1220396&PartnerId=25240&SiteId=5341&type=mail&JobReqLang=1&recordstart=1&JobSiteId=5341&JobSiteInfo=1220396_5341&gqid=0

  
  



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/26540/
To post a new job please visit http://jobs.code4lib.org/


Re: [CODE4LIB] Implementing Google Tag Manager

2016-06-29 Thread Mann, Paige
Hi Kyle,

I thought it'd be easier to just reply to the list. Thanks to Maria for 
mentioning our poster. 

Glad to hear that you're implementing Google Tag Manager (GTM). If I remember 
correctly, it's a pretty painless process. You basically insert the GTM 
container code just after the opening <body> tag and remove the Google 
Analytics (GA) code from the <head> (or wherever you might have inserted it). 
If that's all you do, you'll still collect the same GA data. Of course, if you 
set up tags, triggers, and variables within GTM, you can capture a lot more 
interesting data. Sanjeet Mann and I created a poster (see 
http://www.slideshare.net/NASIG/collecting-data-with-google-tag-manager) geared 
toward electronic resource librarians.
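
For illustration, here is a minimal sketch (my own, not from the poster) of how 
a page can push a custom event into GTM's dataLayer so a trigger and a Data 
Layer Variable can pick it up; the event name and field below are hypothetical:

    // Minimal sketch: pushing a hypothetical "database_click" event into the
    // GTM dataLayer. In GTM you would create a Custom Event trigger on the
    // event name and a Data Layer Variable for "databaseName", then attach
    // both to (for example) a GA event tag.
    declare global {
      interface Window { dataLayer?: Record<string, unknown>[]; }
    }

    export function trackDatabaseClick(databaseName: string): void {
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({
        event: "database_click",   // the Custom Event trigger fires on this
        databaseName,              // exposed in GTM as a Data Layer Variable
      });
    }

    // e.g. from a click handler on a database link:
    // trackDatabaseClick("JSTOR");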

Like Lynn, I use only one GTM container. Google also recommends this for 
multiple domains (see "Do you manage multiple domains?" in 
https://support.google.com/tagmanager/answer/6103576?hl=en). As Google 
mentions, it makes setting up GTM tags, triggers, and variables a lot easier 
and reduces the need to duplicate them across multiple containers. As Ivan 
mentioned, though, since I use only one container I do have to deal with a long 
list of site-specific tags, variables, and triggers. On the other hand, if you 
were to set up multiple containers, you'd have to insert multiple GTM container 
snippets into your systems, and I'm not sure how you'd prevent GA from counting 
a single hit to a web page multiple times. Ivan, how do you deal with this?

In case it helps, I'll also outline how I've set up Google Analytics at my 
institution. Bear in mind I'm a self-taught, non-programmer librarian, so (1) 
while my settings seem to have worked so far, I can't guarantee that they're 
necessarily the "right" way to set up Google Analytics, and (2) I've found 
Google Tag Manager a lot easier to work with and I feel a lot more confident 
about the ways I've set it up to collect data. I'm curious to know whether 
other libraries have set up their Google Analytics accounts similarly.
-- I have a single Google Analytics property for all library systems
-- I use the Admin > Property > Tracking Info > Referral Exclusion List option 
to list all the domains and subdomains I want Google Analytics to include as 
part of my Google Analytics property. 

Paige

Paige Mann
Physical Sciences Librarian
Armacost Library
University of Redlands

--

Date: Tue, 28 Jun 2016 07:28:15 -0400
From: Kyle Breneman 
Subject: Implementing Google Tag Manager

Has anyone out there implemented Google Tag Manager?  I'd like to get it up and 
running for all of our library's web properties this summer, but I'm a bit 
uncertain how I should be structuring it.  The most immediate question is, "Do 
I need only 1 container, or do I need more than 1 container?"
Right now, we are tracking our main website, our library blog, LibGuides, 
ArchivesSpace, EDS, and several other things.  Most of these are set up as 
separate "properties" of one umbrella GA account for the library.  This 
existing structure leads me to believe that I want to configure Google Tag 
Manager with multiple containers: one container for the main library website, 
one container for our blog, one container for ArchivesSpace, etc.
Does that sound correct?  Is there anything else I should be mindful of as I 
set up GTM?

Regards,
Kyle

--

Date: Tue, 28 Jun 2016 11:56:28 +
From: "Eades, Lynn" 
Subject: Re: Implementing Google Tag Manager

Hi Kyle,

The University Libraries here at UNC-Chapel Hill are currently working with a 
consultant to set up Google Tag Manager.  We also had several GA accounts for 
our many websites.  Within GTM, we have one container/account for all the 
sites.  This way we can see the flow of traffic between our sites.  The 
consulting firm, SearchDiscovery out of Atlanta, has been wonderful to work 
with and has really helped us in setting this up.

We are just about to wrap up our work with the consultant.  Would be happy to 
discuss further if desired.

Lynn
__
B. Lynn Eades
Web Development Librarian
Health Sciences Library
335 South Columbia Street
CB# 7585
University of North Carolina at Chapel Hill Chapel Hill, NC  27599-7585

Phone: (919) 966-8012
Email: bea...@med.unc.edu
Website: http://hsl.lib.unc.edu
__
--

Date: Tue, 28 Jun 2016 09:01:37 -0400
From: Maria Aghazarian 
Subject: Re: Implementing Google Tag Manager

Paige and Sanjeet Mann talked about their experience using GTM at NASIG; you 
may want to reach out to them specifically:
https://nasig2016.sched.org/paige_mann

--

Date: Tue, 28 Jun 2016 13:06:21 +
From: "Goldsmith, Ivan Victor" 
Subject: Re: Implementing Google Tag Manager

Hi Kyle,

At Penn Libraries, we've been working on adopting Google Tag Manager. We 
already use it on many of our sites, with great results thus far