Re: [CODE4LIB] Assigning DOI for local content
On Tue, Nov 17, 2009 at 6:58 PM, Jodi Schneider jodi.a.schnei...@gmail.com wrote:

> The first question is: what are they trying to accomplish by having DOIs? DOIs are just a form of Handle, which is a persistent URL scheme. I don't think I need to explain what PURLs are designed to accomplish. If they're looking for persistent identifiers, I don't understand (a priori) why DOI is better, as an identifier scheme, than any other persistent identifier scheme (ARK [1], PURL, Handle, etc. [2]). (Though I really like CrossRef and the things they're doing.)

The advantage of DOIs over other PURLs is that they are used only for citation purposes. As someone who works with a lot of students and faculty, I have observed that DOIs are becoming familiar to them as a definitive citation identifier. As more journals publishing in an online environment stop using page numbers in their citations and turn instead to article identifiers -- e.g., citations like this one:

Neylon C, Wu S (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol 7(11): e1000242. doi:10.1371/journal.pbio.1000242

-- DOIs become the most consistently recognizable identifier for constructing findable citations. So, you could use a PURL, but it wouldn't be understood to mean the same thing.

Also, DOIs are not dependent on a single resolver -- i.e., you don't have to send them through http://dx.doi.org/, although that's largely been the case up to this point. PURLs tend to be server-specific. We don't have to think too far back to recall an instance when a PURL server failed, causing some temporary access problems. Hopefully, DOIs are less vulnerable to this -- although that certainly hasn't been tested.

And, responding to Jonathan, who said:

> investigating whether every cited article has a DOI and then making sure to include it... is non-trivial labor.

It certainly is if you have to go back and apply them to a backfile of published articles.
However, with the Code4Lib Journal, I've been doing this all along in the articles I've edited. CrossRef has good tools for finding this information, and when that fails, I go to the cited article itself. Some work, yes, but I figure that's part of my job as an editor.

Tom
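[For anyone automating the lookup step Tom describes, CrossRef's public REST API can be queried with a free-text citation. This is a hedged sketch, not CrossRef's official client: the `api.crossref.org` endpoint and `query.bibliographic` parameter are the later REST API rather than the tools available in 2009, and the helper names are made up for illustration.]

```python
import json
import urllib.parse
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works"

def build_query_url(citation, rows=1):
    """Build a CrossRef bibliographic-query URL for a free-text citation."""
    params = urllib.parse.urlencode(
        {"query.bibliographic": citation, "rows": rows})
    return CROSSREF_WORKS + "?" + params

def find_doi(citation):
    """Return the best-scoring DOI for a citation, or None.

    Network access required; results should be eyeballed before use,
    since CrossRef returns its best fuzzy match, not a guaranteed one.
    """
    with urllib.request.urlopen(build_query_url(citation)) as resp:
        data = json.load(resp)
    items = data["message"]["items"]
    return items[0]["DOI"] if items else None
```

When the top match looks wrong (common for short or ambiguous citations), falling back to checking the cited article itself, as Tom does, is still the reliable path.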
Re: [CODE4LIB] Assigning DOI for local content
Please explain in more detail; that would be helpful. It has been a while. Back in 2007, I checked PURL's architecture, and it was strictly handling web addresses only. Of course, the current HTTP protocol is not going to last forever, and there are other protocols on the Internet. The coverage of PURL is not enough.

From PURL's website, it still says: "PURLs (Persistent Uniform Resource Locators) are Web addresses that act as permanent identifiers in the face of a dynamic and changing Web infrastructure." I am not sure what "Web addresses" means. http://www.purl.org/docs/help.html#overview says: "PURLs are Persistent Uniform Resource Locators (URLs). A URL is simply an address on the World Wide Web." We all know that the World Wide Web is not the Internet. What if an info resource can be accessed through other Internet protocols (FTP, VOIP, ...)? This is the limitation of PURL. PURL is doing a re-architecture, though I cannot find more documentation.

The Handle System, by contrast, is "a general purpose distributed information system that provides efficient, extensible, and secure HDL identifier and resolution services for use on networks such as the Internet." http://www.handle.net/index.html Notice the difference in definition.

Yan

-----Original Message-----
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Ross Singer
Sent: Wednesday, November 18, 2009 8:11 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Assigning DOI for local content

On Wed, Nov 18, 2009 at 12:19 PM, Han, Yan h...@u.library.arizona.edu wrote:

> Currently DOI uses Handle (technology) with its social framework (i.e., an administrative body to manage DOIs). In a technical sense, PURL is not going to last long.

I'm not entirely sure what this is supposed to mean (re: PURL), but I'm pretty sure it's not true. I'm also pretty sure there's little to no direct connection between PURL and DOI despite a superficial similarity in scope.

-Ross.
Re: [CODE4LIB] Assigning DOI for local content
Back in 2007, I had a different job, a different email address, and lived in a different state. Things change. If people are sending emails to ross.sin...@gatech.edu to fix the library web services, they are going to be sorely disappointed and should perhaps check http://www.library.gatech.edu/about/staff.php for updates.

purl.org has been going through a massive architecture change for the better part of a year now -- which has finally been completed. It was a slightly messy transition, but they migrated from their homegrown system to one designed by Zepheira.

I feel like predicting the demise of HTTP and worrying about a service's ability to handle other protocols is unnecessary hand-wringing. I still have a telephone (two, in fact). Both my cell phone and VOIP home phone are still able to communicate flawlessly with a POTS dial phone. My car still has an internal combustion engine based on petroleum. It still doesn't fly or even hover. My wall outlets still accept a plug made in the 1960s.

PURLs themselves are perfectly compatible with protocols other than HTTP: http://purl.org/NET/rossfsinger/ftpexample The caveat being that the initial access point is provided via HTTP. But then again, so is http://hdl.handle.net/, which is, in fact, currently the only way in practice to dereference handles.

My point is, there's a lot of energy, resources, and capital invested in HTTP. Even if it becomes completely obsolete, my guess is I can still type "http://purl.org/dc/terms" into spdy://google.com/ and find something about what I'm looking for.

-Ross.

On Thu, Nov 19, 2009 at 12:18 PM, Han, Yan h...@u.library.arizona.edu wrote:

> Please explain in more detail; that would be helpful. It has been a while. Back in 2007, I checked PURL's architecture, and it was strictly handling web addresses only. Of course, the current HTTP protocol is not going to last forever, and there are other protocols on the Internet. The coverage of PURL is not enough.
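[Ross's point that hdl.handle.net's HTTP proxy is how handles get dereferenced in practice can be made concrete. The proxy URL patterns below are real (hdl.handle.net and dx.doi.org); the helper functions are illustrative sketches, and the example identifier reused in the note is the PLoS DOI cited earlier in the thread, which, being a handle under the hood, resolves through either proxy.]

```python
import urllib.parse

HANDLE_PROXY = "http://hdl.handle.net/"
DOI_PROXY = "http://dx.doi.org/"

def handle_url(handle):
    """HTTP proxy URL that dereferences a handle.

    The Handle System has its own native protocol, but in practice
    clients reach it through this HTTP gateway.
    """
    return HANDLE_PROXY + urllib.parse.quote(handle, safe="/")

def doi_url(doi):
    """DOIs are handles administered by the IDF; dx.doi.org is the
    same kind of HTTP gateway in front of the same infrastructure."""
    return DOI_PROXY + urllib.parse.quote(doi, safe="/")
```

For example, `doi_url("10.1371/journal.pbio.1000242")` and `handle_url("10.1371/journal.pbio.1000242")` both yield HTTP URLs that redirect to the article, which is the "not dependent on a single resolver" property discussed upthread.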
[CODE4LIB] Position Open - Yale University Library IT Office
Systems Programmer II
Library Systems Group
Yale University
Band III - Grade 25

General Purpose

In a dynamic 24x7x365 production data center environment, working independently and collaboratively as a senior member of an interdepartmental team, provides Unix and Windows system administration, storage and backup administration, and application administration for Yale University Library, consortia, and development partners' servers and web services. Plays a leadership role in the acquisition and deployment of new hardware.

Responsibilities

Ensures performance, reliability, and security on Solaris, Linux, and Windows servers, supporting critical staff and public services for the University Library, campus partners, and national and international consortia members. Provides performance tuning, capacity planning, automation, and documentation for systems and applications.

Installs, configures, documents, and maintains operating systems, applications, and system management tools, including:

* Apache, NetBackup, RedHat, Samba, Solaris Zones and ZFS, SVN, Tomcat, VMware
* Open source applications: CNRI Handle System, Fedora, VuFind
* Databases: Informix, Oracle, MySQL, and Progress
* Library applications: GFA/LAS, MetaLib, SFX, URSA, Verde, Voyager, and locally developed applications

Ensures performance, reliability, and security of tape libraries and backup systems:

* Monitor storage and backup systems and components to ensure capacity, performance, and availability.
* Consult with clients on backup/restore issues.
* Perform storage- and backup-related software installs and upgrades.

Collaborates as part of a development team, as a technical lead, providing systems expertise in research projects related to new web services for the discovery and delivery of content, digital archives, and repositories. Plays a leadership role as knowledge expert in the specification, design, and deployment of hardware.
Independently manages special projects, such as capacity replacements and architecture improvements. Tracks developments in new technologies. Supports and collaborates with developers at Yale and in the open source community, data center operations staff, vendors, and service providers. Performs off-shift work with rotating on-call coverage. Mentors junior systems programmers and other technology staff in Library units. May be required to assist with disaster recovery efforts. May be assigned to work at the West Campus location in West Haven, CT.

Qualifications

Bachelor's degree in a related field and at least four years' experience as a systems programmer in a mixed-platform environment, or an equivalent combination of education and experience.

* Extensive experience configuring and supporting a variety of disk arrays and RAID controllers.
* Extensive experience and skill troubleshooting and resolving a variety of hardware, network, and application-related issues in a multi-platform computing environment.
* Demonstrated expert knowledge of Linux/Solaris/Unix and/or Windows operating systems, utilities, and applications.
* Comprehensive, demonstrated leadership and project management skills.
* Demonstrated ability with administration of web services (Tomcat, Apache), databases, and web applications.
* Ability to produce well-crafted documentation, specifications, and recommendations.
* Excellent communication and organizational skills.
* Expert skill in developing scripts to monitor, maintain, and secure systems.
* Demonstrated competence working with a range of hardware related to enterprise-class services (servers, storage, and backup).

Salary and Benefits

Rank and competitive salary will be based upon the successful candidate's qualifications and experience. Full benefits package including pro-rated 22 vacation days; 18 holiday, recess, and personal days; comprehensive health care; TIAA-CREF or Yale retirement plan; and relocation assistance.
Applications consisting of a cover letter, resume, and the names of three professional references should be sent by creating an account and applying online at http://www.yale.edu/jobs for immediate consideration. The STARS req ID for this position is 8624BR. Please be sure to reference #8624BR in your cover letter.
[CODE4LIB] Web analytics for POST data
Hello coders,

I'm looking at tracking our III OPAC usage via a Google Analytics-like tool. As far as I can tell, GA itself doesn't track POST data, for privacy reasons. Does anyone here know of something for this? I found an open-source, GA-like, on-your-own-server PHP project called Piwik [http://piwik.org], which I imagine does this, or could be modified easily enough.

--
Yitzchak Schaffer
Systems Manager
Touro College Libraries
33 West 23rd Street
New York, NY 10010
Tel (212) 463-0400 x5230
Fax (212) 627-3197
Email yitzchak.schaf...@tourolib.org
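[One server-side approach, sketched under assumptions: since the POST body never reaches a page-tag tracker, capture it at the server and forward a pageview to Piwik's HTTP tracking API (`piwik.php` with `idsite`, `rec`, and `url` parameters, which is Piwik's documented tracking endpoint). The Piwik hostname, the site ID, and the III form field names (`searchtype`, `searcharg`) are hypothetical placeholders, and folding whitelisted POST fields into the tracked URL is a design choice, not a Piwik requirement.]

```python
import urllib.parse
import urllib.request

PIWIK_ENDPOINT = "http://analytics.example.org/piwik.php"  # hypothetical install
SITE_ID = "1"  # hypothetical Piwik site id

def tracked_url(page_url, post_params, keep=("searchtype", "searcharg")):
    """Fold whitelisted POST fields into a pseudo-URL so the search
    terms show up in reports; the whitelist keeps private form data
    (passwords, patron barcodes) out of the analytics store."""
    safe = {k: v for k, v in post_params.items() if k in keep}
    return page_url + "?" + urllib.parse.urlencode(sorted(safe.items()))

def track(page_url, post_params):
    """Send one pageview to the Piwik HTTP tracking API (network required)."""
    query = urllib.parse.urlencode({
        "idsite": SITE_ID,
        "rec": "1",  # required flag telling Piwik to record the hit
        "url": tracked_url(page_url, post_params),
    })
    urllib.request.urlopen(PIWIK_ENDPOINT + "?" + query)
```

Whitelisting rather than blacklisting is the safer default here, for the same privacy reason GA avoids POST data in the first place.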