Re: [CODE4LIB] usability testing software

2013-02-01 Thread Matthew Mikitka
Although I have not used it, Zurb's Verify looks promising:
http://verifyapp.com/

I have been very pleased with Foundation so far.

matt
 
 Nate Hill nathanielh...@gmail.com 1/31/2013 10:35 AM 
Hi all,
Years ago I had the opportunity to use Morae to do some usability
testing.
http://www.techsmith.com/morae.html
I may have an opportunity to put together a little bit of a usability
testing lab at my library, and I wonder if anyone can suggest a
similar
product but...
I'd like it to run on Macs.
Suggestions?
thanks

--
Nate Hill
nathanielh...@gmail.com
http://4thfloor.chattlibrary.org/
http://www.natehill.net


Re: [CODE4LIB] Why we need multiple discovery services engine?

2013-02-01 Thread Birkin Diana
Wayne,

 many of them would have their own based discovery services... and... they 
 will have vendor based discovery services... why...

I agree with Jonathan, and would add that, as with so many things, why things 
are the way they are has a lot to do with history. Forgive me if you already know this...

For a long while vendors bundled a catalog, then a web-catalog, with their 
integrated library system. Many vendors made it very difficult for developers 
to change the bundled web catalog in any standard way, and charged significant 
fees for the ability to do so. When NCState & Endeca took faceted browsing on 
the road, the closed-data model floodgates broke, and many institutions 
essentially began exporting the necessary data from their ILS, indexing it in 
Solr and displaying it via easily tweaked homegrown or VuFind et al. systems, 
which had the added advantage of easily allowing other collections to be 
indexed & displayed. And the world was better.

Most vendors responded to this by improving their systems: making them easier 
to configure in standard ways, providing APIs, and allowing other 
data-sources to be exposed, which is why you sometimes now see both systems in 
place.

-b

---
Birkin James Diana
Programmer, Digital Technologies
Brown University Library
birkin_di...@brown.edu


On Jan 31, 2013, at 11:21 PM, Jonathan Rochkind rochk...@jhu.edu wrote:

 So, there are two categories of solutions here -- 1) local indexes, where you 
 create the index yourself, like blacklight or vufind (both based on a local 
 Solr).  2) vendor-hosted indexes, where the vendor includes all sorts of 
 things in their index that you the customer don't have local metadata for, 
 mostly including lots and lots of scholarly article citations. 
 
 If you want to include scholarly article citations, you probably can't do 
 that with a local index solution. Although some consortiums have done some 
 interesting stuff in that area, let's just say it takes a lot of resources to 
 do. For most people, if you want to include article search in your index, 
 it's not feasible to do so with a local index. So VuFind/Blacklight 
 with only a local Solr is out, if you want article search. 
 
 You _can_ load local content in a vendor-hosted index like EDS/Primo/Summon. 
 So plenty of people do choose a vendor-hosted index product as their only 
 discovery tool, including both local metadata and vendor-provided metadata. 
 As you suggest. 
 
 But some people want the increased control that a locally controlled Solr 
 index gives you, for the local metadata where it's feasible. So use a local 
 index product. But still want the article search you can get with a 
 vendor-hosted index product. So they use both.  
 
 There are also at least some reasons to believe that our users don't mind and 
 may even prefer having local results and hosted metadata results presented 
 separately (though preferably in a consistent UI), rather than 
 merged. 
 
 A bunch more discussion of these issues is included in my blog post at: 
 http://bibwild.wordpress.com/2012/10/02/article-search-improvement-strategy/
 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Wayne Lam 
 [wing...@gmail.com]
 Sent: Thursday, January 31, 2013 9:31 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] Why we need multiple discovery services engine?
 
 Hi all,
 
I've seen that numerous libraries have their own locally based discovery
service (e.g. Blacklight / VuFind) and at the same time a vendor-based
discovery service (e.g. EDS / Primo / Summon).
Instead of having to maintain 2 separate systems, why not put everything
into just one? Any special reason or concern?
 
 Best
 
 Wayne


Re: [CODE4LIB] usability testing software

2013-02-01 Thread Matthew L. Zimmerman
My library used Silverback recently, and it was just right for what we
needed to do. We did not experience the issues that Jason mentioned.

Matt Zimmerman

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jason Michel
Sent: Thursday, January 31, 2013 11:10 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] usability testing software

I've been using Silverback but lately have had problems: crashing during
export, exported file corrupted, and also crashing during the preview
playback within the app itself. Anyone else have these issues?

Sent from my iPhone

On Jan 31, 2013, at 11:01 AM, Eric Lease Morgan emor...@nd.edu wrote:

 On Jan 31, 2013, at 10:56 AM, Julia Bauder julia.bau...@gmail.com
wrote:

 I've used this in the past: http://silverbackapp.com/. It's Mac-only 
 (which was actually a drawback for the project I was working on!), 
 it's cheap, and did what we needed. It doesn't do nearly as much as 
 Morae, though, so it might not have specific features you need?

 I liked Silverback as well. BTW, you might also ask this question of 
 Usability4Lib -- http://bit.ly/VxGls9

 --
 Eric Morgan


Re: [CODE4LIB] Answer to your question Re: [CODE4LIB] Group Decision Making (was Zoia)

2013-02-01 Thread Karen Coyle

Deborah,

I'm not sure what you mean about something for the offender, so some 
examples would be good. My big concern is that we not create a new group 
of outsiders -- folks who've been told they've offended someone and 
therefore are made to feel uncomfortable. I fully understand the 
pushback from people who feel that all of this will have a chilling 
effect and that some folks will be made to feel guilty. How do we 
avoid that?


I'd recommend big group hugs at the closing of the conference to show we 
all still love each other, but, damn, the 70's are long gone! :-)


kc


On 1/31/13 7:26 PM, Fitchett, Deborah wrote:

Thank you Becky, Karen and Gary for your answers (and excuse the delay 
replying; have been attempting to clear my head despite the heat and an achy 
ankle combining against me).

The backup buttons are a good idea, and I definitely support both Becky and 
Karen's suggestions for additions to the policy. I think it's helpful to break it down 
into separate parts. It's especially helpful to have expectations for the community, 
since the more the community can be trusted, the safer people will feel about mentioning 
when something's an issue.

Would it be useful to have something (whether as part of the CoC or just some 
discussion) for the 'offender' as well? Not so much for the person who intends 
to offend, because they're going to do that wherever they think they can; but 
for the person who didn't intend to offend (and/or doesn't think they did) or 
the person who wants to avoid offending (while still actually enjoying the 
party)? I recall some stuff on that angle from a recent discussion of sf 
conventions, and should be able to dig up links if it's of interest to anyone 
here.

Deborah

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Becky 
Yoose
Sent: Wednesday, 30 January 2013 1:59 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Answer to your question Re: [CODE4LIB] Group Decision 
Making (was Zoia)

On Mon, Jan 28, 2013 at 9:55 PM, Fitchett, Deborah 
deborah.fitch...@lincoln.ac.nz wrote:


So, given that we're all nice people who wouldn't intentionally harass or make 
spurious claims of harassment against each other, nevertheless sometimes 
someone will unintentionally say or do something that (especially given the 
concept of microaggressions that Karen and I have alluded to and Kathryn named) 
really hurts someone else.  This is, whatever else you want to call it, a 
problem because it decreases the feeling of community.

So, how as a community should we respond when this happens?

That's my question.

Different people will have different answers, but here's mine to answer your 
question:

I'm breaking this into two parts: the Incident and the Community Response

1. Incident happens. Inform the offender that he/she has affected you 
negatively. Oftentimes, as you pointed out, stuff like this is unintentional, 
and the accidental offender and offended will resolve the incident by having 
that initial discussion. I would predict that most incidents will be resolved 
here.

2. If offender insists that he/she did not offend, or if offender is actively 
harassing you, then you will need a third party to step in.
These people have either been designated by the CoC or by the listserv as those 
you should go to for help.

If you are at a conference, find the conference organizer or staff person. For 
#c4l13, that would be Francis. If you can't find Francis, there will be other 
conference staff that would be available to help if the situation calls for 
immediate action.

If you are in the #code4lib IRC, the zoia command to list people designated as 
channel helpers is @helpers. I'd assume that there is at least one helper in 
the channel at most times.

For the listserv, you have a free-for-all for public messages; however, this 
listserv does have a maintainer, Eric Lease Morgan.


3. Wider community response to Incident:

If the incident doesn't get past the first step (discussion reveals the offense was 
unintentional, apologies are made, a public note is posted or the community is 
informed of the resolution), then there's not much the community can do at this point, 
since the incident was resolved without outside intervention.

If incident results in corrective action, the community should support the 
decision made by the Help in Step 2 if they choose corrective action, like 
ending a talk early or banning from the listserv, as well as support those 
harmed by the incident, either publicly or privately (whatever individuals are 
comfortable with).

If the Help in Step 2 runs into issues implementing the CoC, then the Help 
should come to the community with these issues and the community should revise 
the CoC as they see fit.


So that's my answer. In Real Life people will have opinions about how the CoC 
is enforced. People will argue that a particular decision was unfair, and 
others will say that it didn't go far enough. We really can't stop people 

[CODE4LIB] Reminder: ACM/IEEE JCDL2013 Paper Submissions Due 02.04.13

2013-02-01 Thread McDonald, Robert H.
This is just a reminder that full paper submissions for the Joint Conference on 
Digital Libraries @JCDL2013 are due on Monday Feb 4, 2013 by 11:59 UTC.

For more on submissions - http://www.jcdl2013.org/call-for-papers 
 
@JCDL2013 is a major international forum focusing on digital libraries and 
associated technical, practical and social issues. This year's conference will 
be held in Indianapolis, IN from July 22-26.  We welcome submissions on the 
wide range of topics of interest in Digital Libraries worldwide.
 
On behalf of the JCDL 2013 Planning Committee,

Best,

Robert

**
Robert H. McDonald
Associate Dean for Library Technologies
Deputy Director-Data to Insight Center, Pervasive Technology Institute
Indiana University
1320 East 10th Street
Herman B Wells Library 234
Bloomington, IN 47405
Phone: 812-856-4834
Email: rhmcd...@indiana.edu
Skype: rhmcdonald
AIM: rhmcdonald1


Re: [CODE4LIB] usability testing software

2013-02-01 Thread Taylor, Nicholas A.
Hi Nate (et al), on the server side, you might also take a look at the open 
source ClickHeat [1], which creates a visual heatmap of clicks on an HTML page.

~Nicholas
__

Nicholas Taylor
Data Specialist
Library of Congress Web Archiving
http://www.loc.gov/webarchiving/
n...@loc.gov

[1] http://www.labsmedia.com/clickheat/index.html


[CODE4LIB] Shameless Plug: The Web for Libraries Weekly

2013-02-01 Thread Michael Schofield
Hey C4Libbers,

I'm just shamelessly plugging my first crack at a weekly newsletter curating 
what's new in the web community for practical application in libraries. Called 
what?! Well, it's called The Web for Libraries Weekly. Here's the browser 
version of the first campaign* that mailed this morning.

http://www.eepurl.com/utWcH

Have a good one : ),

Michael Schofield(@nova.edu) | Web Services Librarian | (954) 262-4536
Alvin Sherman Library, Research, and Information Technology Center

Hi! Hit me up any time, but I'd really appreciate it if you report broken 
links, bugs, your meeting minutes, or request an awesome web app over on the 
Library Web Services site: http://staff.library.nova.edu/pm

* Can you tell I'm using Mail Chimp? :)


[CODE4LIB] digital collections sitemaps

2013-02-01 Thread Jason Ronallo
Hi,

I've seen registries for digital collections that make their metadata
available through OAI-PMH, but I have yet to see a listing of digital
collections that just make their resources available on the Web the
way the Web works [1]. Sitemaps are the main mechanism for listing Web
resources for automated crawlers [2]. Knowing about all of these
various sitemaps could have many uses for research and improving the
discoverability of digital collections on the open Web [3].
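
Sketched in Python, generating a minimal sitemap might look like this (a
sketch only; example.edu and the item URLs are made-up placeholders):

    # Minimal sitemap generator -- example.edu and the item URLs are
    # hypothetical placeholders.
    from xml.sax.saxutils import escape

    SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    def build_sitemap(urls):
        """Return sitemap XML listing the given absolute URLs."""
        entries = "\n".join("  <url><loc>%s</loc></url>" % escape(u)
                            for u in urls)
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="%s">\n%s\n</urlset>\n' % (SITEMAP_NS, entries))

    print(build_sitemap(["http://example.edu/collections/item/%d" % i
                         for i in range(1, 4)]))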

So I thought I'd put up a quick form to start collecting digital
collections sitemaps. One required field for the sitemap itself.
Please take a few seconds to add any digital collections sitemaps you
know about--they don't necessarily have to be yours.

https://docs.google.com/spreadsheet/viewform?formkey=dE1JMDRIcXJMSzJ0YVlRaWdtVnhLcmc6MQ#gid=0

At this point I'll make the data available to anyone that asks for it.

Thank you,

Jason

[1] At least I don't recall seeing such a sitemap registry site or
service. If you know of an existing registry of digital collections
sitemaps, please let me know about it!
[2] http://www.sitemaps.org/ For more information on robots see
http://wiki.code4lib.org/index.php/Robots_Are_Our_Friends
[3] For instance you can see how I've started to investigate whether
digital collections are being crawled by the Common Crawl:
http://jronallo.github.com/blog/common-crawl-url-index/


Re: [CODE4LIB] Code4Lib 2013 in Layar

2013-02-01 Thread Peter Murray
Sweet!  I had deleted Layar last year because I didn't see any use of keeping 
it on the phone after toying with it a bit at Access a couple years ago.  This 
sounds like a quite promising use.  Thanks for setting it up, Bill.


Peter

On Jan 31, 2013, at 9:58 PM, William Denton w...@pobox.com wrote:
 I've set up a Code4Lib 2013 layer in the Android/iOS augmented reality 
 application Layar [1] to do something that I think---I hope---will add an 
 interesting and fun element to the conference.
 
 You can use it to scan around the city to see two kinds of things: 1) 
 tweets using the #c4l13 or #code4lib hashtag (if the tweets are geolocated 
 so they can be nailed to a point) and 2) points of interest from the 
 shared Google Maps that have been set up [2].
 
 During the day all of the tweets will be coming from everyone at the UIC 
 Forum, so that's not too interesting ... but I hope that outside the 
 conference times, when people are all over Chicago, they'll be tweeting, 
 and that's when you might wonder, "Where's everyone at?" and you can hold 
 up your phone, look around, and see that a bunch of folks are two blocks 
 over there at a blues club and another bunch are up over there trying 
 obscure beers and someone else posted a picture of an LP she just bought 
 down the block, and that a comic book store someone recommended is a half 
 mile that way.
 
 It's a Code4Lib-augmented view of Chicago: you look around and see what 
 we're all doing and where we're hanging out, and all the places we're 
 interested in or recommend.
 
 To try it out, install Layar on your phone, then run it, click to go into 
 Geo Layers mode, and search for "code4lib 2013".  Launch the layer and 
 look around. You probably won't see anything around you, but next time you 
 tweet something with #c4l13 (and the tweet is geolocated so you're sharing 
 your latitude and longitude) it will show up.
 
 So, if you want to try it, add points to the Google Maps, and when 
 you're in Chicago, tweet!
 
 I don't know how well it will work, but please test it and try it, because 
 I think if it does turn out it will be a lot of fun.
 
 It can work for any conference or event. The program driving this is 
 Laertes [3], and the code is here:
 
   https://github.com/wdenton/laertes
 
 It's pretty straightforward, and if you're comfortable running a modern 
 Ruby web app then to make your own layer it's just a matter of some basic 
 configuration at Layar's web site and customizing Laertes by editing a 
 hash tag in a config file.  Or maybe I could host it for you, for a while 
 at least.
 
 See you soon,
 
 Bill
 
 [1] http://www.layar.com/
 [2] 
 https://maps.google.com/maps/ms?msid=213549257652679418473.0004ce6c25e6cdeb0319d&msa=0 
 and 
 https://maps.google.com/maps/ms?msid=208580427660303662074.0004d00a3e083f4d160a4&msa=0 
 [3] As in Odysseus's father, who was one of the Argonauts and did a fair 
 bit of travelling, and because his name has "layer" in it. 



-- 
Peter Murray
Assistant Director, Technology Services Development
LYRASIS
peter.mur...@lyrasis.org
+1 678-235-2955
 
1438 West Peachtree Street NW
Suite 200
Atlanta, GA 30309
Toll Free: 800.999.8558
Fax: 404.892.7879 
www.lyrasis.org
 
LYRASIS: Great Libraries. Strong Communities. Innovative Answers.


Re: [CODE4LIB] digital collections sitemaps

2013-02-01 Thread Sullivan, Mark V
Jason,

You may want to allow people just to give you the robots.txt file which 
references the sitemap.  I also register the sitemaps individually with the big 
search engines for our site, but I found that very large sitemaps aren't 
processed very well.  So, for our site I think I limited the number of items 
per sitemap to 40,000, which results in ten sitemaps for the digital objects 
and an additional sitemap for all the collections.

http://ufdc.ufl.edu/robots.txt

Or else perhaps add more boxes to the form, so we can include all the sitemaps 
utilized in our systems.
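
Sketched in Python, that chunking might look like this (the 40,000 cap is
from above; example.edu and the file names are hypothetical):

    # Split a large URL list into sitemaps of at most 40,000 entries each
    # and emit robots.txt "Sitemap:" lines pointing at them.
    # example.edu and the file names are hypothetical.
    from xml.sax.saxutils import escape

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    def write_sitemaps(urls, base="http://example.edu", cap=40000):
        names = []
        for start in range(0, len(urls), cap):
            name = "sitemap%d.xml" % (start // cap + 1)
            body = "\n".join("  <url><loc>%s</loc></url>" % escape(u)
                             for u in urls[start:start + cap])
            with open(name, "w") as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                        '<urlset xmlns="%s">\n%s\n</urlset>\n' % (NS, body))
            names.append(name)
        # robots.txt can carry one Sitemap: line per file
        with open("robots.txt", "w") as f:
            f.writelines("Sitemap: %s/%s\n" % (base, n) for n in names)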

Cheers!

Mark


Mark V Sullivan
Digital Development and Web Coordinator
Technology and Support Services
University of Florida Libraries
352-273-2907 (office)
352-682-9692 (mobile)
mars...@uflib.ufl.edu




From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Jason Ronallo 
[jrona...@gmail.com]
Sent: Friday, February 01, 2013 11:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] digital collections sitemaps

Hi,

I've seen registries for digital collections that make their metadata
available through OAI-PMH, but I have yet to see a listing of digital
collections that just make their resources available on the Web the
way the Web works [1]. Sitemaps are the main mechanism for listing Web
resources for automated crawlers [2]. Knowing about all of these
various sitemaps could have many uses for research and improving the
discoverability of digital collections on the open Web [3].

So I thought I'd put up a quick form to start collecting digital
collections sitemaps. One required field for the sitemap itself.
Please take a few seconds to add any digital collections sitemaps you
know about--they don't necessarily have to be yours.

https://docs.google.com/spreadsheet/viewform?formkey=dE1JMDRIcXJMSzJ0YVlRaWdtVnhLcmc6MQ#gid=0

At this point I'll make the data available to anyone that asks for it.

Thank you,

Jason

[1] At least I don't recall seeing such a sitemap registry site or
service. If you know of an existing registry of digital collections
sitemaps, please let me know about it!
[2] http://www.sitemaps.org/ For more information on robots see
http://wiki.code4lib.org/index.php/Robots_Are_Our_Friends
[3] For instance you can see how I've started to investigate whether
digital collections are being crawled by the Common Crawl:
http://jronallo.github.com/blog/common-crawl-url-index/


[CODE4LIB] Job: Web and Mobile Application Developer at Hennepin County Library

2013-02-01 Thread jobs
Position re-opened. Applications accepted through Friday,
Feb. 15.

  
The Hennepin County Library system (MN) is seeking a full-time Web and Mobile
Application Developer. This person will provide software planning, web and
mobile application analysis and programming, and support for the maintenance
and development of the library's growing online services and resources.

  
Hennepin County Library is recognized as one of the top public libraries in
the United States and serves more than one million residents of the city of
Minneapolis and suburban Hennepin County. The 41-library system offers more
than 5 million books, CDs and DVDs, materials in more than 40 languages, 1600
public computers and extensive online services.

  
The primary duties and responsibilities of this position include:

  
* Build and integrate interactive and static websites, mobile applications, and
  services for internal and external audiences; configure, code, deploy and
  implement websites and mobile applications

* Develop and adhere to all applicable standards (web, development, and security)

* Evaluate new software; test on various platforms, browsers and applications

* Work with teams of programmers, content managers, and content providers on
  large and complex projects

* Troubleshoot and debug applications

* Work with vendors that provide key products and services to develop and
  maintain web interfaces

  
  
For more information and to apply for this position, please visit
[www.hennepin.jobs](http://www.hennepin.jobs), click on "View Current Positions"
and look for **Web Developer - Hennepin County Library**.
Applications will be accepted for this position until Friday, February 15.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/5985/


[CODE4LIB] Volunteer during C4L13!

2013-02-01 Thread Cynthia Ng
Hi All,

Another friendly reminder to sign up to volunteer. You're going to be
there anyway! Who wants to sit in the _same_ spot the _entire_ day? So
move around a little bit by joining the volunteers!

Just think, minimal effort, and you can add it to your resume too!
Plus lots of gratitude from various people.

In particular, we're looking for:
* Tuesday registration helpers
* microphone runners
* session timers

Sign up on the wiki:
http://wiki.code4lib.org/index.php/2013_During_the_Conference_Volunteers

Thanks,
Cynthia
TheRealArty / Arty-chan


[CODE4LIB] Opt out of video capture at Code4lib 2013 pre-conference

2013-02-01 Thread Francis Kayiwa
For those attending the Code4lib Pre-Conference and, likely by extension,
the Conference.

If you are uncomfortable being recorded do let me know by sending a note
with the subject [code4lib video opt-out] to francis.kayiwa@gmail

We will have Daniel Jones from the Berkman Center for Internet & Society
recording some of the pre-conferences for a short video he is putting up
for the Boston DPLA launch.

We also will be recording the presentations (this is trickier to avoid
capturing you) for live streaming. 


Cheers,
./fxk
-- 
"Whom are you?" said he, for he had been to night school.
-- George Ade


Re: [CODE4LIB] digital collections sitemaps

2013-02-01 Thread Chad Nelson
Hi Mark,

Actually, the sitemaps.org protocol allows a sitemap index file to reference
multiple child sitemaps: http://www.sitemaps.org/protocol.html#index

Which is what we did at my former employer:
http://digitalcollections.library.gsu.edu/sitemap/sitemap.xml

And thus the robots.txt only includes a single sitemap:
http://digitalcollections.library.gsu.edu/robots.txt
When we add extra collections, they just go into the sitemap.xml, so we are
not continuously updating the robots.txt.
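
Sketched in Python, generating the index might look like this (the child
sitemap URLs are placeholders):

    # Build a sitemap index pointing at child sitemaps, per the
    # sitemaps.org index format. The child URLs are placeholders.
    from xml.sax.saxutils import escape

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

    def build_index(sitemap_urls):
        entries = "\n".join("  <sitemap><loc>%s</loc></sitemap>" % escape(u)
                            for u in sitemap_urls)
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<sitemapindex xmlns="%s">\n%s\n</sitemapindex>\n'
                % (NS, entries))

    print(build_index(["http://example.edu/sitemap1.xml",
                       "http://example.edu/sitemap2.xml"]))

With that in place, robots.txt only ever needs its single Sitemap: line.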

Chad



On Fri, Feb 1, 2013 at 11:33 AM, Sullivan, Mark V mars...@uflib.ufl.eduwrote:

 Jason,

 You may want to allow people just to give you the robots.txt file which
 references the sitemap.  I also register the sitemaps individually with the
 big search engines for our site, but I found that very large sitemaps
 aren't processed very well.  So, for our site I think I limited the number
 of items per sitemap to 40,000.  Which results in ten sitemaps for the
 digital objects and an additional sitemap for all the collections.

 http://ufdc.ufl.edu/robots.txt

 Or else perhaps give more boxes, so we can include all the sitemaps
 utilized in our systems.

 Cheers!

 Mark


 Mark V Sullivan
 Digital Development and Web Coordinator
 Technology and Support Services
 University of Florida Libraries
 352-273-2907 (office)
 352-682-9692 (mobile)
 mars...@uflib.ufl.edu



 
 From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Jason
 Ronallo [jrona...@gmail.com]
 Sent: Friday, February 01, 2013 11:14 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: [CODE4LIB] digital collections sitemaps

 Hi,

 I've seen registries for digital collections that make their metadata
 available through OAI-PMH, but I have yet to see a listing of digital
 collections that just make their resources available on the Web the
 way the Web works [1]. Sitemaps are the main mechanism for listing Web
 resources for automated crawlers [2]. Knowing about all of these
 various sitemaps could have many uses for research and improving the
 discoverability of digital collections on the open Web [3].

 So I thought I'd put up a quick form to start collecting digital
 collections sitemaps. One required field for the sitemap itself.
 Please take a few seconds to add any digital collections sitemaps you
 know about--they don't necessarily have to be yours.


 https://docs.google.com/spreadsheet/viewform?formkey=dE1JMDRIcXJMSzJ0YVlRaWdtVnhLcmc6MQ#gid=0

 At this point I'll make the data available to anyone that asks for it.

 Thank you,

 Jason

 [1] At least I don't recall seeing such a sitemap registry site or
 service. If you know of an existing registry of digital collections
 sitemaps, please let me know about it!
 [2] http://www.sitemaps.org/ For more information on robots see
 http://wiki.code4lib.org/index.php/Robots_Are_Our_Friends
 [3] For instance you can see how I've started to investigate whether
 digital collections are being crawled by the Common Crawl:
 http://jronallo.github.com/blog/common-crawl-url-index/



Re: [CODE4LIB] Adding authority control to IR's that don't have it built in

2013-02-01 Thread Jason Ronallo
Ed,

Thank you for the detailed response. That was very helpful. Yes, it
seems like good Web architecture is the API. Sounds like it would be
easy enough to start somewhere and add features over time.

I could see how exposing this data in a crawlable way could provide
some nice indexed landing pages to help improve discoverability of
related collections. I wonder though if this begs the question of who
other than my own institution would use such local authorities? Would
there really be other consumers? What's the likelihood that other
institutions will need to reuse my local name authorities?

Is the idea that if enough of us publish our local data in this way
that there could be aggregators or other means to make it easier to
reuse from a single source?

I can see the use case for a local authorities app. While I think it
would be cool to expose our local data to the world in this way, I'm
still trying to grasp at the larger value proposition.

Jason

On Thu, Jan 31, 2013 at 5:59 AM, Ed Summers e...@pobox.com wrote:
 Hi Jason,

 Heh, sorry for the long response below. You always ask interesting questions 
 :-D

 I would highly recommend that vocabulary management apps like this
 assign an identifier to each entity, that can be expressed as a URL.
 If there is any kind of database backing the app you will get the
 identifier for free (primary key, etc). So for example let's say you
 have a record for John Chapman, who is on the faculty at OSU, which
 has a primary key of 123 in the database, you would have a
 corresponding URL for that record:

   http://id.library.osu.edu/person/123

 When someone points their browser at that URL they get back a nice
 HTML page describing John Chapman. I would strongly recommend that
 schema.org microdata and/or opengraph protocol RDFa be layered into
 the page for SEO purposes, as well as anyone who happens to be doing
 scraping.  I would also highly recommend adding a sitemap to enable
 discovery, and synchronization.

 Having that URL is handy because you could add different machine
 readable formats that hang off of it, which you can express as links
 in your HTML, for example lets say you want to have JSON, RDF and XML
 representations:

   http://id.library.osu.edu/person/123.json
   http://id.library.osu.edu/person/123.xml
   http://id.library.osu.edu/person/123.rdf

 If you want to get fancy you can content negotiate between the generic
 url and the format specific URLs, e.g.

   curl -i --header "Accept: application/json" http://id.library.osu.edu/person/123
   HTTP/1.1 303 See Other
   date: Thu, 31 Jan 2013 10:47:44 GMT
   server: Apache/2.2.14 (Ubuntu)
   location: http://id.library.osu.edu/person/123.json
   vary: Accept-Encoding

 But that's gravy.
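
Sketched as a small Flask app, that negotiation might look like this
(a sketch only; Flask, the routes, and the stub record are assumptions
for illustration, not part of the original message):

    # 303-style content negotiation between a generic record URL and
    # format-specific URLs. Routes and stub data are hypothetical.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    # media type -> URL extension for the format-specific representations
    FORMATS = {"application/json": "json",
               "application/rdf+xml": "rdf",
               "application/xml": "xml"}

    @app.route("/person/<int:pid>")
    def person(pid):
        # Pick the best match for the client's Accept header; fall back to HTML.
        best = request.accept_mimetypes.best_match(list(FORMATS) + ["text/html"])
        if best in FORMATS:
            # 303 See Other sends the client to the format-specific URL
            return redirect("/person/%d.%s" % (pid, FORMATS[best]), code=303)
        return "<html><body>Record for person %d</body></html>" % pid

    @app.route("/person/<int:pid>.json")
    def person_json(pid):
        # Stub JSON representation; real data would come from the database.
        return {"id": pid, "name": "Chapman, John"}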

 What exactly you put in these representations is a somewhat open
 question I think. I'm a bit biased towards SKOS for the RDF because
 it's lightweight, this is exactly its use case, it is flexible (you
 can layer other assertions in easily), and (full disclosure) I helped
 with the standardization of it. If you did do this you could use
 JSON-LD for the JSON, or just come up with something that works.
 Likewise for the XML. You might want to consider supporting JSON-P for
 the JSON representation, so that it can be used from JavaScript in
 other people's applications.
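
A SKOS-flavored JSON-LD record for the example above might look something
like this (a sketch; the labels and fields are illustrative only):

    # A SKOS concept serialized as JSON-LD -- the URL and labels are
    # illustrative, not real data.
    import json

    record = {
        "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
        "@id": "http://id.library.osu.edu/person/123",
        "skos:prefLabel": "Chapman, John",
        "skos:altLabel": ["Chapman, J."],
        "skos:note": "Faculty, OSU",
    }

    print(json.dumps(record, indent=2))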

 It might be interesting to come up with some norms here for
 interoperability on a Wiki somewhere, or maybe a prototype of some
 kind. But the focus should be on what you need to actually use it in 
 some app that needs vocabulary management. Focusing on reusing work
 that has already been done helps a lot too. I think that helps ground
 things significantly. I would be happy to discuss this further if you
 want.

 Whatever the format, I highly recommend you try to have the data link
 out to other places on the Web that are useful. So for example the
 record for John Chapman could link to his department page, blog, VIAF,
 Wikipedia, Google Scholar Profile, etc. This work tends to require
 human eyes, even if helped by a tool (Autosuggest, etc), so what you
 do may have to be limited, or at least an ongoing effort. Managing
 them (link scrubbing) is an ongoing effort too. But fitting your stuff
 into the larger context of the Web will mean that other people will
 want to use your identifiers. It's the dream of Linked Data I guess.

 Lastly I recommend you have an OpenSearch API, which is pretty easy,
 almost trivial, to put together. This would allow people to write
 software to search for "John Chapman" and get back results (there 
 might be more than one) in Atom, RSS or JSON.  OpenSearch also has a
 handy AutoSuggest format, which some JavaScript libraries work with.
 The nice thing about OpenSearch is that Browsers search boxes support
 it too.
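
The description document that wires this up is a short piece of XML; a
sketch with a placeholder search endpoint:

    # Roughly what an OpenSearch 1.1 description document looks like; the
    # search endpoint URL is a placeholder. Serving this file is most of
    # the work -- browsers and clients discover it via a <link> element.
    DESCRIPTION = """<?xml version="1.0" encoding="UTF-8"?>
    <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
      <ShortName>Local Authorities</ShortName>
      <Description>Search our local name authority records</Description>
      <Url type="application/atom+xml"
           template="http://id.library.osu.edu/search?q={searchTerms}"/>
    </OpenSearchDescription>
    """

    print(DESCRIPTION)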

 I guess this might sound like an information architecture more than an
 API. Hopefully it makes sense. Having a page that documents all this,
 with API written across the top, that hopefully includes terms of

Re: [CODE4LIB] Adding authority control to IR's that don't have it built in

2013-02-01 Thread McAulay, Elizabeth
Hi,

I've been following this thread carefully, and am very interested. At UCLA, we 
have the Frontera collection (http://frontera.library.ucla.edu/) and we have a 
local set of authorities because the performers and publishers are more 
ephemeral than what's usually in LCNAF. So, we're thinking of providing these 
values to others via API or something to help share what we know and get input 
from others. So, that's our use case for publishing out. Curious about 
everyone's thoughts.

Best,
Lisa

On Feb 1, 2013, at 9:44 AM, Jason Ronallo jrona...@gmail.com wrote:

Ed,

Thank you for the detailed response. That was very helpful. Yes, it
seems like good Web architecture is the API. Sounds like it would be
easy enough to start somewhere and add features over time.

I could see how exposing this data in a crawlable way could provide
some nice indexed landing pages to help improve discoverability of
related collections. I wonder though if this begs the question of who
other than my own institution would use such local authorities? Would
there really be other consumers? What's the likelihood that other
institutions will need to reuse my local name authorities?

Is the idea that if enough of us publish our local data in this way
that there could be aggregators or other means to make it easier to
reuse from a single source?

I can see the use case for a local authorities app. While I think it
would be cool to expose our local data to the world in this way, I'm
still trying to grasp at the larger value proposition.

Jason

On Thu, Jan 31, 2013 at 5:59 AM, Ed Summers e...@pobox.com wrote:
Hi Jason,

Heh, sorry for the long response below. You always ask interesting questions :-D

I would highly recommend that vocabulary management apps like this
assign an identifier to each entity, that can be expressed as a URL.
If there is any kind of database backing the app you will get the
identifier for free (primary key, etc). So for example let's say you
have a record for John Chapman, who is on the faculty at OSU, which
has a primary key of 123 in the database, you would have a
corresponding URL for that record:

 http://id.library.osu.edu/person/123

When someone points their browser at that URL they get back a nice
HTML page describing John Chapman. I would strongly recommend that
schema.org microdata and/or opengraph protocol RDFa be layered into
the page for SEO purposes, as well as anyone who happens to be doing
scraping.  I would also highly recommend adding a sitemap to enable
discovery, and synchronization.

Having that URL is handy because you could add different machine
readable formats that hang off of it, which you can express as links
in your HTML, for example lets say you want to have JSON, RDF and XML
representations:

 http://id.library.osu.edu/person/123.json
 http://id.library.osu.edu/person/123.xml
 http://id.library.osu.edu/person/123.rdf

If you want to get fancy you can content negotiate between the generic
url and the format specific URLs, e.g.

 curl -i --header "Accept: application/json" http://id.library.osu.edu/person/123
 HTTP/1.1 303 See Other
 date: Thu, 31 Jan 2013 10:47:44 GMT
 server: Apache/2.2.14 (Ubuntu)
 location: http://id.library.osu.edu/person/123.json
 vary: Accept-Encoding

But that's gravy.

What exactly you put in these representations is a somewhat open
question I think. I'm a bit biased towards SKOS for the RDF because
it's lightweight, this is exactly its use case, it is flexible (you
can layer other assertions in easily), and (full disclosure) I helped
with the standardization of it. If you did do this you could use
JSON-LD for the JSON, or just come up with something that works.
Likewise for the XML. You might want to consider supporting JSON-P for
the JSON representation, so that it can be used from JavaScript in
other people's applications.

It might be interesting to come up with some norms here for
interoperability on a Wiki somewhere, or maybe a prototype of some
kind. But the focus should be on what you need to actually use it in
some app that needs vocabulary management. Focusing on reusing work
that has already been done helps a lot too. I think that helps ground
things significantly. I would be happy to discuss this further if you
want.

Whatever the format, I highly recommend you try to have the data link
out to other places on the Web that are useful. So for example the
record for John Chapman could link to his department page, blog, VIAF,
Wikipedia, Google Scholar Profile, etc. This work tends to require
human eyes, even if helped by a tool (Autosuggest, etc), so what you
do may have to be limited, or at least an ongoing effort. Managing
them (link scrubbing) is an ongoing effort too. But fitting your stuff
into the larger context of the Web will mean that other people will
want to use your identifiers. It's the dream of Linked Data I guess.

Lastly I recommend you have an OpenSearch API, which is pretty 

[CODE4LIB] Job: Production Systems Architect and Administrator at California Digital Library

2013-02-01 Thread jobs
Development and Production Architect and Administrator with operational and
architectural responsibilities for the growing suite of digital curation and
preservation services offered by the University of California Curation Center
(UC3, http://www.cdlib.org/uc3) at the California Digital Library (CDL), an
administrative unit of the University of California Office of the President
(UCOP). UC3, one of the world's premier digital curation programs, is a
creative partnership between the CDL, the ten UC campuses, and the
international curation community, providing innovative services and solutions
to ensure the long-term usability of the University's digital content.

  
UC3 currently supports six major online curation services (DataUp, DMPTool,
EZID, Merritt Repository, UDFR, Web Archiving Service), each with three
parallel instances (dev, stage, production), running on over 30 servers in two
data centers, with 150 TB of dedicated SAN storage. Reporting to a UC3
associate director, the incumbent will be responsible for the high-
availability architectural design, operational support, and application-level
administration of this infrastructure and will liaise with the UC3 Service
Managers and Technical Leads for application monitoring and machine
deployment, and the central CDL Infrastructure and Application Services (IAS)
group for package administration and backup, disaster recovery, and business
continuity planning, and will represent UC3 during weekly IAS/ITS planning and
review meetings.

  
More details are available at
http://jobs.ucop.edu/applicants/Central?quickFind=56008



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/6000/


[CODE4LIB] Job: Associate Director for Creation and Curation Services at Case Western Reserve University

2013-02-01 Thread jobs
The Kelvin Smith Library (KSL) seeks imaginative, collaborative and dynamic
candidates to provide visionary and strategic leadership to advance our
cutting-edge initiatives in digital and scholarly
resources. The Associate Director for Creation and Curation Services (ADCC) leads two teams: the
Digital Learning and Scholarship Team designs and manages all of KSL's
technology-related services, and the Scholarly Resources and Special
Collections Team (SRSC) manages the library's rare book, manuscript, and
archival collections, and analog and digital preservation.
Strategic opportunities include reimagining the library's services for
e-research and digital scholarship, and developing an exciting new vision for
special collections in the 21st century. As a member of the
senior leadership team, and working with the two team leaders, the ADCC
develops strategies and policies, ensures and assesses service quality,
allocates and manages human and financial resources, participates in fund and
collection development, and serves as the primary administrative liaison to
University Information Technology Services (ITS). The ADCC
must be able to work effectively with external partners and cultivate
potential donors. The ADCC is encouraged to engage in
professional endeavors nationally and internationally. The
ADCC reports to the Associate Provost and University Librarian and manages a
staff of 13 fte and a budget of $1.4 million.

  
QUALIFICATIONS. Required: ability to inspire and mobilize
teams in a way that promotes high performance, staff engagement, diversity and
accountability; a master's degree in a relevant discipline; a record of
success in one or more of the areas of responsibility; an articulate and clear
thinker, good problem solver and solid strategist; successful record of
project and operations management; able to work creatively and collaboratively
in a complex and trans-functional environment; CV must warrant placement as a
Librarian 3 or Librarian 4. Preferred: experience managing
budgets, human resources, and technology infrastructures; an imaginative
strategic planner; knowledge of current best practices; commitment to user-
centered services; experience cultivating external funds and collections;
success in faculty, student and donor relationship
management; superior analytical, problem-solving,
interpersonal and communication skills; success mentoring
and coaching library staff; a respected national or international reputation
in the profession.



Brought to you by code4lib jobs: http://jobs.code4lib.org/job/5938/


[CODE4LIB] I'm on a Code4lib Bus/Shuttle

2013-02-01 Thread Francis Kayiwa
We are thrilled to announce that there will be a Shuttle from the
Conference Hotel to UIC Forum. The Shuttle will run for 2 hours in the
AM and 2 hours at the end of the conference. 

All things being equal we still encourage you to use the reliable
-especially at Rush Hour- CTA #8 Bus which will run more frequently than
the UIC Shuttle/Bus does. (It has to make a `left` onto a very busy
street. I wouldn't want to be that driver :-))

More details will be added to this link[0] and the attendee survival
guide, which is still baking.

best regards,
./fxk
[0] http://wiki.code4lib.org/index.php/2013_travel
-- 
"Whom are you?" said he, for he had been to night school.
-- George Ade


[CODE4LIB] Thank you for embracing Lanyrd

2013-02-01 Thread Patrick Berry
I just want to say a quick thank you to everybody that has contributed to
the Lanyrd setup (http://lanyrd.com/2013/c4l13/) even if it's just saying
that you're coming.  I know I just went and did it without really asking,
but it was mostly driven by my own selfish needs to have that information
in a system I use.

So, again, thanks!

Pat