[CODE4LIB] iPRES 2013 Proceedings

2013-09-16 Thread Angela Dappert
iPRES 2013 proceedings are available at
http://purl.pt/24107/1/


Re: [CODE4LIB] Expressing negatives and similar in RDF

2013-09-16 Thread Meehan, Thomas
Don: As I understand it, the open-world view covers knowledge that has simply
not been asserted, for whatever reason, whereas sometimes a negative is a
definite (and ultimately verifiable) fact, such as a painting simply not having
a title. I think you're ultimately right about unknown things.

Esmé's solution does seem to work, although it would perhaps require
redefinition for every element (title, place of publication, presence of clasp,
binding, etc.). I did wonder if a more generic method existed.

Thank you,

Tom


---

Thomas Meehan
Head of Current Cataloguing
Library Services
University College London
Gower Street
London WC1E 6BT

t.mee...@ucl.ac.uk


 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
 Donald Brower
 Sent: 13 September 2013 14:46
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Expressing negatives and similar in RDF
 
 At a theoretical level, doesn't the Open World Assumption in RDF rule out
 outright negations? That is, someone else may know the title, and could
 assert it in a separate RDF document. RDF semantics seem to conflate
 unknown with nonexistent.
 
 Practically, Esme's approach seems better in these cases.
 
 
 -Don
 
 
 --
 Donald Brower, Ph.D.
 Digital Library Infrastructure Lead
 Hesburgh Libraries, University of Notre Dame
 
 
 
 
 On 9/13/13 8:51 AM, Esmé Cowles escow...@ucsd.edu wrote:
 
 Thomas-
 
 This isn't something I've run across yet.  But one thing you could do
 is create some URIs for different kinds of unknown/nonexistent titles:
 
 example:book1 dc:title example:unknownTitle
 example:book2 dc:title example:noTitle
 etc.
 
 You could then describe example:unknownTitle with a label or comment to
 fully describe the states you wanted to capture with the different
 categories.
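 For illustration, a minimal sketch of that approach in Python with rdflib (the
 example: URIs and the label strings are placeholders, as in the snippets above):

 # Sketch only: the example: namespace and the placeholder URIs are made up.
 from rdflib import Graph, Literal, Namespace
 from rdflib.namespace import DC, RDFS

 EX = Namespace("http://example.org/")

 g = Graph()
 g.bind("dc", DC)
 g.bind("example", EX)

 # Describe the placeholder objects once, with labels or comments...
 g.add((EX.unknownTitle, RDFS.label, Literal("Title unknown", lang="en")))
 g.add((EX.noTitle, RDFS.label, Literal("No title", lang="en")))

 # ...then point individual books at whichever state applies.
 g.add((EX.book1, DC.title, EX.unknownTitle))
 g.add((EX.book2, DC.title, EX.noTitle))

 print(g.serialize(format="turtle"))

 Each placeholder only has to be described once, however many books point at it.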
 
 -Esme
 --
 Esme Cowles escow...@ucsd.edu
 
 Necessity is the plea for every infringement of human freedom. It is
 the  argument of tyrants; it is the creed of slaves. -- William Pitt,
 1783
 
 On 09/13/2013, at 7:32 AM, Meehan, Thomas t.mee...@ucl.ac.uk
 wrote:
 
  Hello,
 
  I'm not sure how sensible a question this is (it's certainly
 theoretical), but it cropped up in relation to a rare books
 cataloguing discussion. Is there a standard or accepted way to express
 negatives in RDF? This is best explained by examples, expressed in mock-
 turtle:
 
 If I want to say this book has the title "Cats in RDA", I would do
 something like:

 example:thisbook dc:title "Cats in RDA" .
 
  Normally, if a predicate like dc:title is not relevant to
 example:thisbook I believe I am right in thinking that it would simply
 be missing, i.e. it is not part of a record where a set number of
 fields need to be filled in, so no need to even make the statement.
 However, there are occasions where a positively negative statement
 might be useful. I understand OWL has a way of managing the statement
 "This book does not have the title 'Cats in RDA'" [1]:
 
 []  rdf:type owl:NegativePropertyAssertion ;
     owl:sourceIndividual   example:thisbook ;
     owl:assertionProperty  dc:title ;
     owl:targetIndividual   "Cats in RDA" .
 
 However, it would be more useful, and quite common at least in a
 bibliographic context, to say "This book does not have a title".
 Ideally (?!) there would be an ontology of concepts like "none", "unknown",
 or even "something, but unspecified":
 
  This book has no title:
  example:thisbook dc:title hasobject:false .
 
  It is unknown if this book has a title (sounds undesirable but I can
 think of instances where it might be handy[2]):
  example:thisbook dc:title hasobject:unknown .
 
  This book has a title but it has not been specified:
  example:thisbook dc:title hasobject:true .
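 Purely as an illustration, here are the three options above written out as
 complete Turtle and read back with rdflib; the hasobject: namespace is
 invented for the sake of the example, and I have used three different books
 so the alternatives can sit in one graph:

 # Sketch only: the hasobject: vocabulary is hypothetical.
 from rdflib import Graph

 proposal = """
 @prefix dc:        <http://purl.org/dc/elements/1.1/> .
 @prefix example:   <http://example.org/> .
 @prefix hasobject: <http://example.org/hasobject/> .

 example:thisbook  dc:title hasobject:false .    # has no title
 example:otherbook dc:title hasobject:unknown .  # unknown whether it has one
 example:thatbook  dc:title hasobject:true .     # has one, but unspecified
 """

 g = Graph()
 g.parse(data=proposal, format="turtle")
 print(len(g))  # 3 ordinary triples

 The triples themselves are unremarkable RDF; everything would hinge on how
 hasobject:false, hasobject:unknown and hasobject:true were defined and
 documented.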
 
 In terms of cataloguing, the answer is perhaps to refer to the rules
 (which would normally mandate supplied titles in square brackets and so
 forth) rather than use RDF to express this kind of thing, although the
 rules differ depending on the part of the description and, in the case of
 the kind of thing that prompted the question (the presence of clasps on
 rare books), there are no rules. I wonder if anyone has any more
 wisdom on this.
 
  Many thanks,
 
  Tom
 
  [1] Adapted from
 http://www.w3.org/2007/OWL/wiki/Primer#Object_Properties
 [2] Not many, to be honest, but e.g. a title in an unknown script or an
 indecipherable hand.
 
  ---
 
  Thomas Meehan
  Head of Current Cataloguing
  Library Services
  University College London
  Gower Street
  London WC1E 6BT
 
  t.mee...@ucl.ac.uk


Re: [CODE4LIB] CODE4LIB Digest - 12 Sep 2013 to 13 Sep 2013 (#2013-237)

2013-09-16 Thread aj...@virginia.edu

I'd suggest that perhaps the confusion arises because "This instance is (not)
'valid' according to that ontology" might be inferred from an instance and an
ontology (under certain conditions), and that's the soul of what we're asking
when we define constraints on the data. Perhaps OWL can be used to express
conditions of validity, as long as we represent the quality "valid" for use in
inferences.

- ---
A. Soroka
The University of Virginia Library

On Sep 13, 2013, at 11:00 PM, CODE4LIB automatic digest system wrote:

 Also, remember that OWL does NOT constrain your data, it constrains only the 
 inferences that you can make about your data. OWL operates at the ontology 
 level, not the data level. (The OWL 2 documentation makes this more clear, in 
 my reading of it. I agree that the example you cite sure looks like a 
 constraint on the data... it's very confusing.)
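A small sketch of that point, in Python with rdflib and the owlrl reasoner
(the ex: vocabulary is invented here, and the clasp example is borrowed from
the other thread):

# Sketch: an rdfs:domain axiom does not reject "bad" data; it adds inferences.
from rdflib import Graph, Namespace, RDF
import owlrl

EX = Namespace("http://example.org/")

data = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:hasClasp rdfs:domain ex:RareBook .

# A pamphlet with a clasp -- "invalid" if you read the domain as a constraint.
ex:pamphlet1 ex:hasClasp ex:clasp1 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Apply RDFS semantics: instead of an error, we get a new fact.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

print((EX.pamphlet1, RDF.type, EX.RareBook) in g)  # True: inferred, not flagged

Nothing rejects the data; the axiom only licenses a new inference.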



Re: [CODE4LIB] CODE4LIB Digest - 12 Sep 2013 to 13 Sep 2013 (#2013-237)

2013-09-16 Thread Karen Coyle

On 9/16/13 6:29 AM, aj...@virginia.edu wrote:


I'd suggest that perhaps the confusion arises because "This instance is (not) 'valid'
according to that ontology" might be inferred from an instance and an ontology (under certain
conditions), and that's the soul of what we're asking when we define constraints on the data.
Perhaps OWL can be used to express conditions of validity, as long as we represent the quality
"valid" for use in inferences.


Based on the results of the RDF Validation workshop [1], validation is 
being expressed today as SPARQL rules. If you express the rules in OWL 
then unfortunately you affect downstream re-use of your ontology, and 
that can create a mess for inferencing and can add a burden onto any 
reasoners, which are supposed to apply the OWL declarations.


One participant at the workshop demonstrated a system that used the OWL 
constraints as constraints, but only in a closed system. I think that 
the use of SPARQL is superior because it does not affect the semantics 
of the classes and properties, only the instance data, and that means 
that the same properties can be validated differently for different 
applications or under different contexts. As an example, one community 
may wish to say that their metadata can have one and only one dc:title, 
while others may allow more than one. You do not want to constrain 
dc:title throughout the Web, only your own use of it. (Tom Baker and I 
presented a solution to this on the second day as Application Profiles 
[2], as defined by the DC community).
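
To make that concrete, here is a rough sketch of such a rule as a SPARQL query
run with rdflib; the sample data, the example: URIs and the exact query shape
are illustrative only, not anything from the workshop:

# Sketch of a "one and only one dc:title" check expressed as a SPARQL query.
from rdflib import Graph

data = """
@prefix dc:      <http://purl.org/dc/elements/1.1/> .
@prefix example: <http://example.org/> .

example:book1 dc:title "Cats in RDA" .
example:book2 dc:title "Cats in RDA" , "Katzen in RDA" .
example:book3 a example:Book .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Flag any subject that does not have exactly one dc:title.
check = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?s (COUNT(DISTINCT ?title) AS ?n)
WHERE {
  ?s ?p ?o .
  OPTIONAL { ?s dc:title ?title }
}
GROUP BY ?s
HAVING (COUNT(DISTINCT ?title) != 1)
"""
for row in g.query(check):
    print(row.s, "has", row.n, "dc:title values")

A different application could run a looser query (say, allowing repeated
titles) over the same graph without touching the definition of dc:title.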


kc
[1] https://www.w3.org/2012/12/rdf-val/agenda
[2] 
http://www.w3.org/2001/sw/wiki/images/e/ef/Baker-dc-abstract-model-revised.pdf




- ---
A. Soroka
The University of Virginia Library

On Sep 13, 2013, at 11:00 PM, CODE4LIB automatic digest system wrote:


Also, remember that OWL does NOT constrain your data, it constrains only the 
inferences that you can make about your data. OWL operates at the ontology 
level, not the data level. (The OWL 2 documentation makes this more clear, in 
my reading of it. I agree that the example you cite sure looks like a 
constraint on the data... it's very confusing.)

-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.19 (Darwin)
Comment: GPGTools - http://gpgtools.org

iQEcBAEBAgAGBQJSNwe2AAoJEATpPYSyaoIkwLcIAK+sMzy1XkqLStg94F2I40pe
0DepjqVhdPnaDS1Msg7pd7c7iC0L5NhCWd9BxzdvRgeMRr123zZ3EmKDSy8XZiGf
uQyXlA9cOqpCxdQLj2zXv5VHrOdlsA1UAGprwhYrxOz/v3xQ7b2nXusRoZRfDlts
iadvWx5DhLEb2+uVl9geteeymLIVUTzm8WnUITEE7by2HAQf9VlT9CrQSVQ21wLC
hvmk47Nt8WIGyPwRh1qOhvIXLDLxD9rkBSC1G01RhzwvctDy88Tmt2Ut47ZREScP
YUz/bf/qxITzX2L7tE35s2w+RUIFIFc4nJa3Xhp0wMoTAz5UYMiWIcXZ38qfGlY=
=PJTS
-END PGP SIGNATURE-


--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet


[CODE4LIB] Job: Archivist IV (Electronic Data Coordinator) at South Carolina Department of Archives History

2013-09-16 Thread jobs
Under general supervision, works as part of a project team to develop and
administer policies, procedures and practices for managing and providing
access to the State Historic Preservation Office's data and programs through
electronic records and systems. Programs include the
statewide survey of historic properties, National Register of Historic Places,
historical markers, environmental reviews, Certified Local Governments,
grants, and tax incentives programs. Plans, develops, and
implements electronic records management processes, guidelines, and procedures
for the State Historic Preservation Office, and tests approaches to address
long-term preservation and access issues. Develops ways to
enhance access to information about historic properties and preservation
programs through the internet. Other duties as assigned.

  
Minimum and Additional Requirements:

A Bachelor's degree and related professional experience.

  
Knowledge, Skills, and Abilities:

Knowledge of general archival and records management concepts, and general
knowledge of electronic records issues.

Knowledge of hardware and software used for electronic document management
systems, digital imaging systems and desktop applications, including GIS.

Knowledge of database management, systems analysis, and system development
concepts.

Familiarity with metadata and related standards for information processes and
their application to archival or record materials.

Knowledge of data storage methods, media, and security.

Ability to work cooperatively and effectively with the public, staff, and
other professionals.

Excellent organizational and time management skills.
Ability to juggle multiple projects and deadlines.

Ability to communicate in a clear and effective manner.

  
Preferred Qualifications:

A Master's degree in library and information science or public history, and/or
graduate training in archives administration, records management, and
information management. Experience working in a state or
local government environment.

Additional Comments:

Must have or be able to obtain a valid S.C. driver's license.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/10016/


[CODE4LIB] Looking for VB .NET Developer

2013-09-16 Thread Mark Sullivan

Good morning,
I am looking for recommendations for a VB.NET developer with strong 
skills in Windows and Web Forms.  The contract will be for the 
conversion of about 20K lines of code from Windows forms to web forms.  
Knowledge of MS SQL a plus.  If you know someone that might be 
interested, please have them contact me or pass along their information.


Thank you,

Mark

--

Mark Sullivan
Executive Director
IDS Project
Milne Library
1 College Circle
SUNY Geneseo
Geneseo, NY 14454
(585) 245-5172


[CODE4LIB] HathiTrust Research Center awarded grant from The Andrew W. Mellon Foundation

2013-09-16 Thread Senseney, Megan Finn
The Andrew W. Mellon Foundation has awarded a grant in the amount of $437,000 
to the University of Illinois at Urbana-Champaign in partnership with Indiana 
University for an exciting new project in the HathiTrust Research Center. The 
“Workset Creation for Scholarly Analysis: Prototyping Project” (WCSA) project 
will be directed by HTRC Co-director and GSLIS Associate Dean for Research J. 
Stephen Downie, GSLIS-affiliated faculty member and Professor of Library 
Administration Timothy Cole, and Beth Plale of Indiana University.

Requirements for creating scholarly worksets are becoming increasingly 
sophisticated and complex, both as humanities scholarship has become more 
interdisciplinary and as it has become more digital. Developing the ability to 
slice through the massive HathiTrust corpus and to construct the precise set of 
materials needed for a particular scholarly investigation will open exciting 
new opportunities for conducting research with digital content in the 
humanities and beyond. Given the unprecedented size and scope of the HathiTrust 
corpus—in conjunction with the HTRC’s unique computational access to 
copyrighted materials—this project will engage scholars in designing tools for 
exploring, locating, and analyzing content from the HathiTrust so they can 
conduct computational scholarship at scale, based on meaningful worksets.

In an effort to increase community participation in HTRC and engagement with 
the HathiTrust corpus, the HTRC will release an open, competitive Request for 
Proposals in November 2013 with the intent to fund four prototyping projects 
that will build tools for enriching and augmenting metadata for the HathiTrust 
corpus. Throughout the project, the HTRC will also work closely with the Center 
for Informatics Research in Science and Scholarship (CIRSS) to develop a set of 
formal data models that will be used to capture and integrate the outputs of 
the funded prototyping projects with the larger HathiTrust corpus.

--

Megan Finn Senseney
Project Coordinator, Research Services
Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
501 East Daniel Street
Champaign, Illinois 61820
Phone: (217) 244-5574
Email: mfsen...@illinois.edu
http://www.lis.illinois.edu/research/services/


Re: [CODE4LIB] CODE4LIB Digest - 12 Sep 2013 to 13 Sep 2013 (#2013-237)

2013-09-16 Thread Ethan Gruber
Using SPARQL to validate seems like tremendous overhead.  From the Gerber
abstract: "A total of 55 rules have been defined representing the
constraints and requirements of the OA Specification and Ontology. For each
rule we have defined a SPARQL query to check compliance." I hope this isn't
55 SPARQL queries per RDF resource.

Europeana's review of Schematron indicated what I pointed out earlier: that
it confines one to using RDF/XML, which is "sub-optimal" in their own
words.  One could accept RDF in any serialization and then run it through
an RDF processor, like rapper (http://librdf.org/raptor/rapper.html), into
RDF/XML and then validate.  Eventually, when XPath/XSLT 3 supports JSON and
other non-XML data models, Schematron might theoretically be able to
validate other serializations of RDF.  Ditto for XForms, which we are using
to validate RDF/XML.  Obviously, this is sub-optimal because our workflow
doesn't yet account for non-XML data.  We will probably go with the rapper
intermediary process until XForms 2 is released.
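
For what it's worth, a minimal sketch of that intermediary step with rdflib
rather than rapper (file names are placeholders; on the command line rapper
would be something like rapper -i turtle -o rdfxml submission.ttl):

# Sketch: normalize whatever serialization arrives into RDF/XML for validation.
from rdflib import Graph
from rdflib.util import guess_format

incoming = "submission.ttl"   # placeholder; could be Turtle, N-Triples, etc.

g = Graph()
g.parse(incoming, format=guess_format(incoming))

# Write RDF/XML for the Schematron/XForms stage to validate.
g.serialize(destination="submission.rdf", format="pretty-xml")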

Ethan


On Mon, Sep 16, 2013 at 10:22 AM, Karen Coyle li...@kcoyle.net wrote:

 On 9/16/13 6:29 AM, aj...@virginia.edu wrote:


 I'd suggest that perhaps the confusion arises because "This instance is
 (not) 'valid' according to that ontology" might be inferred from an
 instance and an ontology (under certain conditions), and that's the soul of
 what we're asking when we define constraints on the data. Perhaps OWL can
 be used to express conditions of validity, as long as we represent the
 quality "valid" for use in inferences.


 Based on the results of the RDF Validation workshop [1], validation is
 being expressed today as SPARQL rules. If you express the rules in OWL then
 unfortunately you affect downstream re-use of your ontology, and that can
 create a mess for inferencing and can add a burden onto any reasoners,
 which are supposed to apply the OWL declarations.

 One participant at the workshop demonstrated a system that used the OWL
 constraints as constraints, but only in a closed system. I think that the
 use of SPARQL is superior because it does not affect the semantics of the
 classes and properties, only the instance data, and that means that the
 same properties can be validated differently for different applications or
 under different contexts. As an example, one community may wish to say that
 their metadata can have one and only one dc:title, while others may allow
 more than one. You do not want to constrain dc:title throughout the Web,
 only your own use of it. (Tom Baker and I presented a solution to this on
 the second day as Application Profiles [2], as defined by the DC community).

 kc
 [1] https://www.w3.org/2012/12/rdf-val/agenda
 [2] http://www.w3.org/2001/sw/wiki/images/e/ef/Baker-dc-abstract-model-revised.pdf


  - ---
 A. Soroka
 The University of Virginia Library

 On Sep 13, 2013, at 11:00 PM, CODE4LIB automatic digest system wrote:

  Also, remember that OWL does NOT constrain your data, it constrains only
 the inferences that you can make about your data. OWL operates at the
 ontology level, not the data level. (The OWL 2 documentation makes this
 more clear, in my reading of it. I agree that the example you cite sure
 looks like a constraint on the data... it's very confusing.)



 --
 Karen Coyle
 kco...@kcoyle.net http://kcoyle.net
 m: 1-510-435-8234
 skype: kcoylenet



Re: [CODE4LIB] Expressing negatives and similar in RDF

2013-09-16 Thread Karen Coyle

On 9/16/13 2:05 AM, Meehan, Thomas wrote:

Don: As I understand it, the open-world view covers knowledge that has simply
not been asserted, for whatever reason, whereas sometimes a negative is a
definite (and ultimately verifiable) fact, such as a painting simply not having
a title. I think you're ultimately right about unknown things.

Esmé's solution does seem to work, although it would perhaps require
redefinition for every element (title, place of publication, presence of clasp,
binding, etc.). I did wonder if a more generic method existed.


Can you say more about what you mean by "redefinition for every element"?

kc




Thank you,

Tom


---

Thomas Meehan
Head of Current Cataloguing
Library Services
University College London
Gower Street
London WC1E 6BT

t.mee...@ucl.ac.uk



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Donald Brower
Sent: 13 September 2013 14:46
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Expressing negatives and similar in RDF

At a theoretical level, doesn't the Open World Assumption in RDF rule out
outright negations? That is, someone else may know the title, and could
assert it in a separate RDF document. RDF semantics seem to conflate
unknown with nonexistent.

Practically, Esme's approach seems better in these cases.


-Don


--
Donald Brower, Ph.D.
Digital Library Infrastructure Lead
Hesburgh Libraries, University of Notre Dame




On 9/13/13 8:51 AM, Esmé Cowles escow...@ucsd.edu wrote:


Thomas-

This isn't something I've run across yet.  But one thing you could do
is create some URIs for different kinds of unknown/nonexistent titles:

example:book1 dc:title example:unknownTitle
example:book2 dc:title example:noTitle
etc.

You could then describe example:unknownTitle with a label or comment to
fully describe the states you wanted to capture with the different
categories.

-Esme
--
Esme Cowles escow...@ucsd.edu

Necessity is the plea for every infringement of human freedom. It is
the  argument of tyrants; it is the creed of slaves. -- William Pitt,
1783

On 09/13/2013, at 7:32 AM, Meehan, Thomas t.mee...@ucl.ac.uk

wrote:

Hello,

I'm not sure how sensible a question this is (it's certainly
theoretical), but it cropped up in relation to a rare books
cataloguing discussion. Is there a standard or accepted way to express
negatives in RDF? This is best explained by examples, expressed in mock-

turtle:

If I want to say this book has the title "Cats in RDA", I would do
something like:

example:thisbook dc:title "Cats in RDA" .

Normally, if a predicate like dc:title is not relevant to
example:thisbook I believe I am right in thinking that it would simply
be missing, i.e. it is not part of a record where a set number of
fields need to be filled in, so no need to even make the statement.
However, there are occasions where a positively negative statement
might be useful. I understand OWL has a way of managing the statement
"This book does not have the title 'Cats in RDA'" [1]:

[]  rdf:type owl:NegativePropertyAssertion ;
    owl:sourceIndividual   example:thisbook ;
    owl:assertionProperty  dc:title ;
    owl:targetIndividual   "Cats in RDA" .

However, it would be more useful, and quite common at least in a
bibliographic context, to say "This book does not have a title".
Ideally (?!) there would be an ontology of concepts like "none", "unknown",
or even "something, but unspecified":

This book has no title:
example:thisbook dc:title hasobject:false .

It is unknown if this book has a title (sounds undesirable but I can
think of instances where it might be handy[2]):
example:thisbook dc:title hasobject:unknown .

This book has a title but it has not been specified:
example:thisbook dc:title hasobject:true .

In terms of cataloguing, the answer is perhaps to refer to the rules
(which would normally mandate supplied titles in square brackets and so
forth) rather than use RDF to express this kind of thing, although the
rules differ depending on the part of the description and, in the case of
the kind of thing that prompted the question (the presence of clasps on
rare books), there are no rules. I wonder if anyone has any more
wisdom on this.

Many thanks,

Tom

[1] Adapted from
http://www.w3.org/2007/OWL/wiki/Primer#Object_Properties
[2] Not many, to be honest, but e.g. a title in an unknown script or an
indecipherable hand.

---

Thomas Meehan
Head of Current Cataloguing
Library Services
University College London
Gower Street
London WC1E 6BT

t.mee...@ucl.ac.uk


--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet


[CODE4LIB] Job: Metadata Librarians at University of California, San Diego

2013-09-16 Thread jobs
The UC San Diego Library is looking for two extraordinary and knowledgeable
professionals to join our staff in the role of Metadata Librarians.

  
The UC San Diego Library is committed to supporting academic excellence and
diversity within the faculty, staff, and student body.

  
We have two positions available:

* Temporary, for two years from date of hire  
* Permanent  
  
About the UC San Diego Library

The UC San Diego Library, ranked among the nation's top 25 public academic
libraries, plays a critical role in advancing and supporting the university's
research, teaching, patient care, and public service missions. The world-
renowned research for which UC San Diego is known starts at the UC San Diego
Library, which provides the foundation of knowledge needed to advance cutting-
edge discoveries in a wide range of disciplines, from healthcare and science
to public policy and the arts.

  
The UC San Diego Library is widely recognized as an innovative leader in the
development, management, and delivery of digital resources to support the
university's world-class research and instruction. The Library plays a
leadership role in HathiTrust, an international partnership of 52 academic and
research libraries committed to long-term digital preservation, and also plays
a central role in data curation, a critical part of the University's research
cyberinfrastructure initiative.

  
Program Description

The Research Data Curation and Metadata Services programs are two
collaborative programs within the UC San Diego Libraries. Metadata Services
(MS) incorporates traditional cataloging (including a team that provides
services for the UC system) as well as metadata support for digital objects,
database management, and manuscripts and archives processing. Research Data
Curation (RDC) is one of the Library's newest programs, born of the commitment
of the Library to digital preservation and lifecycle management.

  
RDC supports a core piece of the Library's Strategic Plan, engaging with
partners to make digital scholarly work and data openly discoverable and
accessible for the long term. In response to the growing campus-wide data
management and data preservation challenges, the Library actively supports
open data and open access by collaborating with faculty, researchers, students
and other partners to ensure the long-term curation and accessibility of
scholarly works in all formats.

  
The scale of the challenge regarding the stewardship of digital data requires
that responsibilities be distributed across multiple entities and partnerships
that engage institutions, disciplines, and interdisciplinary domains. For this
reason, the Research Data Curation Program operates in collaboration with UCSD
Research Cyberinfrastructure (RCI) and the University of California Curation
Center (UC3).

  
Research Cyberinfrastructure (RCI) is a UC San Diego-sponsored program that
offers campus researchers facilities, storage, data curation, computing, and
networking to facilitate their research using shared cyberinfrastructure
services across campus.

  
The RCI program is designed to provide cost-effective, reliable services which
can be utilized by UCSD principal investigators in their current research
efforts and incorporated in proposals for future research.

  
RCI has a number of services, including the data curation services. In the
spring of 2011, the UC San Diego RCI Implementation Team invited researchers
and research teams to participate in the Research Curation and Data Management
Pilot program. Twenty applications were received and after due deliberation
the RCI Oversight Committee selected five curation-intensive projects.

  
The pilot participants received assistance with the creation of metadata to
make data discoverable and available for future re-use; with the ingest of
data into the San Diego Supercomputer Center's (SDSC) new Cloud Storage
system, which is accessible via high-speed networks; and with the movement of
data into Chronopolis, a geographically-dispersed preservation system. More
information about the pilot projects can be found at http://rci.ucsd.edu/data-
curation/pilots.html

  
Metadata Services connects our UCSD community to intellectual and creative
content and materials by analyzing, describing and organizing resources using
methods and systems that promote discovery. MS provides, augments and
normalizes metadata for digitized and born-digital objects and for tangible
resources--particularly collections of distinction for which UC San Diego is
known. MS delivers and manages bibliographic access to licensed and open
access electronic resources in all formats. The program includes approximately
37 FTE staff in four core service areas.

  
Within MS, the Metadata Analysis and Specification group establishes and
manages metadata standards and policies for metadata creation and encoding
practices. The group advises on metadata creation, and provides metadata
augmentation, manipulation, 

[CODE4LIB] Lesbian Herstory Archives 2013 Tote Honors Mabel Hampton

2013-09-16 Thread DYV
Hello Everyone,

I'm the librarian/archivist at LHA responsible for cataloging and the OPAC,
and we're doing a fundraiser that I want to share with the community:
we're selling this lovely tote for the fall.

About The Lesbian Herstory Archives

In operation since 1974, The Lesbian Herstory Archives is home to the
world's oldest and largest collection of archival, bibliographic and
multimedia materials by and about the diverse lesbian experience.  LHA is
an all-volunteer-run, 501(c)(3) non-profit educational organization.  We
offer research assistance, tours, exhibits, in-house events and a
semester-long Lesbian Studies course.

About the Tote

The new tote design features the cover of Lesbian Herstory Archives
Newsletter #5 (http://www.lesbianherstoryarchives.org/images/newsl/new_05lg.jpg),
with a beautiful image of Mabel Hampton
(http://herstories.prattsils.org/omeka/collections/show/29), one of the first
contributors to LHA and a founding member of the LHA community.

Each 100% cotton, eggshell tote costs $20.00 (including shipping).

Click HERE (http://tinyurl.com/mabeltotes) to order your tote.

Thanks in advance for your support of  LHA.
Peace,
Desiree Yael Vester
Caretaker, Librarian, Archivist
Lesbian Herstory Archives ( LHEF, Inc.)


[CODE4LIB] Job: Associate Archivist at American University

2013-09-16 Thread jobs
The Associate Archivist manages the digital assets of the University Archives
and Special Collections department according to professional best practices.
The position facilitates the transfer of permanent records to the archives and
coordinates digitization projects. Manages the preservation of American
University (AU) and Special Collections digital assets. Supervises and
evaluates personnel, and provides reference service. Makes recommendations on
policies and procedures regarding digital assets to ensure adherence to
appropriate standards. The position requires minimal supervision after initial
training.

  
Educational Requirements:

  
A Master's degree in Library or Archival Science from an ALA accredited
program is required.

  
Minimum Requirements:

- Minimum of 5 years archives or special collections experience in an academic, 
research or special library  
- Experience with digitization on demand and digitization projects is required, 
including familiarity with a variety of standards such as Dublin Core, METS, 
MARC, etc  
- Must have well-developed research skills and excellent writing and 
communication skills  
- Ability to work collaboratively and to set and manage priorities essential  
- Project management skills and customer service experience  
- Familiarity with managing digital assets, outreach, and acquisitions  
- Knowledge of federal and state copyright and privacy regulations  
- Ability to work with a number of different information formats such as books, 
manuscripts, electronic records, audio-visual materials, and foreign language 
materials essential  
- Intermediate to advanced level computer skills  
- Familiarity with Windows based programs including database and spreadsheet 
software packages  
- Must be able to lift items up to 50 pounds and retrieve materials from 
shelves that are 7 feet high  
  
Preferred Requirements:

- 2 years of supervisory experience  
- Experience with metadata standards  
- Experience planning, coordinating, and implementing effective programs, and 
services, ensuring that objectives are met within time, budget, and quality 
targets  
- Experience managing digital assets  
- Experience and previous success in expanding a user base or collections  
Additional Information:

Library technology is an evolving infrastructure with many collaborators and
opportunities for synergy. This position will have an opportunity to influence
the development and implementation of the library's digital asset strategy.

  
American University is an equal opportunity, affirmative action employer
committed to a diverse faculty, staff, and student body. The American
University campus is tobacco and smoke free.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/10028/


[CODE4LIB] Job: Electronic Records Information Management Specialist at Federal Trade Commission

2013-09-16 Thread jobs
OUR MISSION: The Federal Trade Commission (FTC) enforces a variety of federal
antitrust and consumer protection laws. The Commission seeks to ensure that
the Nation's markets function competitively and are vigorous, efficient, and
free of undue restrictions. The Commission also works to enhance the smooth
operation of the marketplace by eliminating acts or practices that are unfair
or deceptive. Finally, the Commission undertakes economic analysis to support
its law enforcement efforts and to contribute to the policy deliberations of
the Congress, the Executive Branch, other independent agencies, and state and
local governments when requested.

  
The incumbent serves as an Electronic Records and Information Management
Specialist in the Office of the Executive Director (OED), Records and Filings
Office (RFO). The Electronic Records and Information
Management Specialist will support the records program. The
records program is responsible for planning, developing, implementing, and
managing the FTC's records management program for both core mission and
administrative records.

  
KEY REQUIREMENTS

  
This position requires U.S. citizenship.

You will be required to provide proof of U.S. citizenship.

Relocation costs will not be paid.

This position is included in the bargaining unit.

  
DUTIES:

  
The incumbent provides analytical and operational support for the electronic
and physical records program, including training personnel; assisting with the
management of the Agency's records storage program; coordinating records
destruction process for temporary records, and the transfer process for
permanent records; monitoring, evaluating and reporting on various aspects of
the records program.

  
Develops and updates records and other management policies, procedures, and
guidance.

  
Serves as the Webmaster for all intranet pages regarding records management.

  
Works with the Office of the Chief Information Officer (OCIO), system owners
and/or others to incorporate Records and Information Management (RIM)
governance and requirements into new systems, applications and other related
Information Technology planning.

  
Conducts research analyses, studies, and reviews on a wide variety of records
and management topics and issues, and makes recommendations for process
improvements.

  
Reviews existing electronic content requirements, performs analysis, and conducts
any additional research and development.

  
Supports the planning and designing of a new electronic content and records
management system to meet the needs of the FTC.

  
Develops cost estimates for RIM related acquisitions.

  
QUALIFICATIONS REQUIRED:

  
To qualify for this position at the GS-09 level, applicants must have at least
1 full year of specialized experience equivalent to the GS- 07 level or above,
which equipped them with the particular knowledge, skills, and abilities to
perform successfully the duties of the position as described OR master's or
equivalent graduate degree or 2 full years of progressively higher level
graduate education leading to such a degree or LL.B. or J.D., if related. To
qualify for this position at the GS-11 level, applicants must have at least 1
full year of specialized experience equivalent to the GS- 09 level or above,
which equipped them with the particular knowledge, skills, and abilities to
perform successfully the duties of the position as described OR Ph.D. or
equivalent doctoral degree or 3 full years of progressively higher level
graduate education leading to such a degree or LL.M., if related. Equivalent
combinations of education and experience are qualifying for both grade levels
for which both education and experience are acceptable. Specialized experience
is defined as progressively responsible administrative, professional,
technical, or other similar work that demonstrates knowledge of archival
principles and techniques as they relate to electronic records management and
the technical applications, uses, and limitations of archival and records
management automation systems.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/10026/


[CODE4LIB] Information for attending the CARL Webinar “So you’re thinking of upgrading your ILS”.

2013-09-16 Thread Salazar, Christina
Please excuse any cross posting.

* Note that there’s a cap of 300 phone lines (no cap on the number of VoIP 
connections); first come, first served and this does promise to be a popular 
couple of webinars. We will be archiving these sessions. You do NOT need to be 
a CARL member to attend.

Information for attending the CARL Webinar “So you’re thinking of upgrading 
your ILS”.

The webinars will take place on Wednesday, October 9 from 11:00 a.m.-12:30 p.m. 
Pacific Time and Wednesday, October 16, from 11:00 a.m.-12:30 p.m. Pacific 
Time. Both sessions will be recorded so if you are unable to make either or 
both, you will be able to watch them at a later date. Information on how to 
access the archived sessions will be sent out after the webinars have been 
recorded.

No prior registration is required for the webinar. Information on how to log 
into the sessions via CCC Confer is listed below. Please note that there are 
two different codes for dialing in, one code is for October 9th the other code 
is for October 16th. Please note that the system can only handle 300 telephone 
lines. If multiple people from your institution are planning on participating, 
please consider watching and dialing in together if you have a speaker phone. 
You can also use VOIP; this option does not have a limit on how many people can 
participate.

Once again the three Panelists for Wednesday October 9 are
•  Pearl Ly, Interim Assistant Dean, Library Services, Pasadena City College. 
PCC switched from ExLibris’s Voyager to OCLC Worldshare
•  Dana M. Miller, Head of Metadata and Cataloging, Mathewson-IGT Knowledge 
Center, University of Nevada Reno,  UNR upgraded from Innovative’s Millennium 
to Sierra
•  Jennifer D. Ware, Acquisitions Librarian, California State University, 
Sacramento, CSUS switched from Innovative’s Millennium to ExLibris’s Alma

The four Panelists for Wednesday October 16 are
•  Rogan Hamby, Manager of Headquarters Library and Reference Services, York 
County Library Systems, South Carolina; Operations Manager, SCLENDS, a 
19-library consortium; migration project manager. Most libraries switched from 
Horizon, TLC and Unicorn to Evergreen
•  Janel Kinlaw, Broadcast Librarian, National Public Radio, NPR’s Library 
upgraded from Techlib to Collective Access
•  George Williams, Access Services Manager, Latah County Library District, 
Idaho, their 52 library consortium switched from ExLibris’s Voyager to Koha.
•  Merrillene Wood, Interim Library Director, Western Nebraska Community 
College, WNCC switched from Follett Destiny as an individual entity to a 
statewide KOHA consortium (Pioneer)

Login information for the October 9th and 16th Sessions
PRIOR TO YOUR FIRST CCC CONFER MEETING
Test Your Computer Readiness

PARTICIPANT DETAILS
 Dial your telephone conference line: 913-312-3202 or (888) 886-3951
 Cell phone users dial: 913-312-3202
 Enter passcode: 169219 for Wednesday, October 9. Enter passcode: 969692 for 
 Wednesday, October 16.
 Go to www.onfer.org
 Click the Participant Log In button under the Webinars logo
 Locate your meeting and click Go (CARL Webinar-So you’re thinking of 
 upgrading your ILS)
 Fill out the form and click connect

PARTICIPANT CONFERENCE FEATURES
*0 - Contact the operator for audio assistance
*6 - Mute/unmute your individual line


Christina Salazar
Systems Librarian
John Spoor Broome Library
California State University, Channel Islands
805/437-3198

Re: [CODE4LIB] CODE4LIB Digest - 12 Sep 2013 to 13 Sep 2013 (#2013-237)

2013-09-16 Thread Karen Coyle
Ethan, if you are interested in dialoguing about this topic, I suspect 
this isn't the forum for it. I don't think that W3C has yet set up a 
public list on rdf validation (the meeting participants need to form an 
actual W3C group for that to happen, and I hope that won't take too 
long), but there should be one soon. It's really rather useless to keep 
telling *me* this, since I'm not arguing for any particular technology, 
just reporting what I've learned in the last few weeks about what others 
are doing.


That is, if you are interested in having an exchange of ideas about this 
topic rather than repeatedly trying to convince me that what I'm saying 
is wrong. It's like you're trying to convince me that I really did not 
hear what I did. But I did hear it. Maybe all of those people are wrong; 
maybe you could explain to them that they are wrong. But if you care 
about this, then you need to be talking to them.


kc


On 9/16/13 7:40 AM, Ethan Gruber wrote:

Using SPARQL to validate seems like tremendous overhead.  From the Gerber
abstract: "A total of 55 rules have been defined representing the
constraints and requirements of the OA Specification and Ontology. For each
rule we have defined a SPARQL query to check compliance." I hope this isn't
55 SPARQL queries per RDF resource.

Europeana's review of Schematron indicated what I pointed out earlier: that
it confines one to using RDF/XML, which is "sub-optimal" in their own
words.  One could accept RDF in any serialization and then run it through
an RDF processor, like rapper (http://librdf.org/raptor/rapper.html), into
RDF/XML and then validate.  Eventually, when XPath/XSLT 3 supports JSON and
other non-XML data models, Schematron might theoretically be able to
validate other serializations of RDF.  Ditto for XForms, which we are using
to validate RDF/XML.  Obviously, this is sub-optimal because our workflow
doesn't yet account for non-XML data.  We will probably go with the rapper
intermediary process until XForms 2 is released.

Ethan


On Mon, Sep 16, 2013 at 10:22 AM, Karen Coyle li...@kcoyle.net wrote:


On 9/16/13 6:29 AM, aj...@virginia.edu wrote:



I'd suggest that perhaps the confusion arises because "This instance is
(not) 'valid' according to that ontology" might be inferred from an
instance and an ontology (under certain conditions), and that's the soul of
what we're asking when we define constraints on the data. Perhaps OWL can
be used to express conditions of validity, as long as we represent the
quality "valid" for use in inferences.


Based on the results of the RDF Validation workshop [1], validation is
being expressed today as SPARQL rules. If you express the rules in OWL then
unfortunately you affect downstream re-use of your ontology, and that can
create a mess for inferencing and can add a burden onto any reasoners,
which are supposed to apply the OWL declarations.

One participant at the workshop demonstrated a system that used the OWL
constraints as constraints, but only in a closed system. I think that the
use of SPARQL is superior because it does not affect the semantics of the
classes and properties, only the instance data, and that means that the
same properties can be validated differently for different applications or
under different contexts. As an example, one community may wish to say that
their metadata can have one and only one dc:title, while others may allow
more than one. You do not want to constrain dc:title throughout the Web,
only your own use of it. (Tom Baker and I presented a solution to this on
the second day as Application Profiles [2], as defined by the DC community).

kc
[1] https://www.w3.org/2012/12/rdf-val/agenda
[2] http://www.w3.org/2001/sw/wiki/images/e/ef/Baker-dc-abstract-model-revised.pdf


  - ---

A. Soroka
The University of Virginia Library

On Sep 13, 2013, at 11:00 PM, CODE4LIB automatic digest system wrote:

  Also, remember that OWL does NOT constrain your data, it constrains only

the inferences that you can make about your data. OWL operates at the
ontology level, not the data level. (The OWL 2 documentation makes this
more clear, in my reading of it. I agree that the example you cite sure
looks like a constraint on the data... it's very confusing.)




--
Karen Coyle

[CODE4LIB] Job: Assistant Prof: Pratt Institute School of Information and Library Science, New York City.

2013-09-16 Thread Anthony Cocciolo
Assistant Professor:

*Digital media and emerging technologies*



Pratt Institute, School of Information and Library Science, has an opening
for full time tenure-track position at the level of assistant professor in
the area of *Digital media and emerging technologies*



With expertise in several of the following areas:

Application  and web development

Computer programming

Content management systems

Open source programming

API and systems integration



We will be conducting interviews at the ASIST conference in Montreal Nov.
2-5.



To schedule an interview please contact Debbie Rabina (drab...@pratt.edu)



To learn more about Pratt SILS full-time faculty visit
http://research.prattsils.org/



A detailed job description is forthcoming



Sincerely,

Debbie Rabina

on behalf of the Pratt SILS search committee 2014
--
Debbie Rabina, Ph.D.
Associate Professor
Pratt Institute, School of Information and Library Science
144 West 14th Street, 6th fl.
New York, NY 10011-7301
drab...@pratt.edu
http://mysite.pratt.edu/~drabina/index.htm

Un modéré par habitude, un libéral par instinct. – Henri Bergson


[CODE4LIB] Job: Assistant Professor: Digital media and emerging technologies at Pratt Institute

2013-09-16 Thread jobs
Assistant Professor: Digital media and emerging technologies

  
  
  
Pratt Institute, School of Information and Library Science, has an opening for
full time tenure-track position at the level of assistant professor in the
area of Digital media and emerging technologies

  
  
  
With expertise in several of the following areas:

Application and web development

Computer programming

Content management systems

Open source programming

API and systems integration

  
  
  
We will be conducting interviews at the ASIST conference in Montreal Nov. 2-5.

  
  
  
To schedule an interview please contact Debbie Rabina (drab...@pratt.edu)

  
  
  
To learn more about Pratt SILS full-time faculty visit
http://research.prattsils.org/

  
  
  
A detailed job description is forthcoming

  
  
  
Sincerely,

  
Debbie Rabina

  
on behalf of the Pratt SILS search committee 2014

--  
Debbie Rabina, Ph.D.

Associate Professor

Pratt Institute, School of Information and Library Science

144 West 14th Street, 6th fl.

New York, NY 10011-7301

drab...@pratt.edu

http://mysite.pratt.edu/~drabina/index.htm

  
Un modéré par habitude, un libéral par instinct. – Henri Bergson



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/10032/