Well, you need to use javascript if you want it to run in a browser. So
that's one reason to pick it, and the main reason people pick it for
its most popular uses.
It will be very difficult to get javascript running in a browser to do
what you just said though. Not sure if you were running
Eh, I'm still intuitively opposed to pull parsing. Okay, so there are
some useful libraries these days if you are using the right
language. If you're using ruby and don't want to use native C code?
Just as an example. Seems like we want to arrive at something easy
enough to interpret
I'm still not even sure why people think the blog post violated any
unwritten rules or expectations. I agree that people kind of
unreasonably raked the author over the coals here.
I think _maybe_ under some interpretations it's borderline (some of
those interpretations are those of the
Any pro or con thoughts on adding the feed from Library News to Planet
Code4lib? It has a feed, I assume?
On 11/29/2011 1:03 PM, Brett Bonfield wrote:
On Thu, Nov 24, 2011 at 12:02 AM, BRIAN TINGLE
brian.tingle.cdlib@gmail.com wrote:
I'm not sure how many of y'all read hackernews
I'm trying to figure out what software they use, but that 'about' page
has a link that does not seem useful (it links to a page for a lisp-like
language, with no mention of any software package in that language or
any other that can provide a hacker-news-like site).
Don't know if the link is
hold the trademark in trust and not enforce it against any individual,
organization, or company who chooses to promote services around Koha in
New Zealand.
Well, the point of having a trademark at all is generally to enforce it
against people who are calling something Koha that is _not_ Koha.
So, HLT says:
. The Library Trust has never stopped any Koha user or developer or
vendor from carrying out their business. Our track record over the last
12 years of releasing the Koha code and supporting the Koha community to
go about its business unimpeded is exemplary and we have no
I did not fill it out that way, there were no instructions to fill it
out that way. Will my registration and payment still be good? I just
wrote Code4Lib 2012 in the description field on payment page. There
was no way to know to do anything different.
On 11/16/2011 11:31 AM, Elizabeth Duell
I don't understand what you're suggesting, Tim.
I understand that you (like many) need to look at budgets and expenses.
But how does the pre-conf being free hurt you there? If you can't
afford the extra hotel etc., then you may not be able to go to the
pre-conf, but what can the organizers
On 11/10/2011 11:35 AM, Nate Vack wrote:
I think the idea is if the preconference had a cost to attendees, its
sponsorship money could be used to defray the cost of the rest of the
conference for everyone.
Huh? You've completely lost me! What? Why? How? I have no idea what
you're talking
If it helps people budget, for reasons I don't understand, to call the
pre-conf something other than a pre-conf, that's potentially doable.
I do like that it's the beginning so people can skip the pre-conf if
that sort of activity (or the particular topics offered) are not useful
to them
What _ought_ to be easiest of all is getting our ILS's to NEVER export
Marc8 _ever_ again. UTF8 only.
Sadly, that only ought to be easiest.
But IMO there's no reason any of us should be dealing with Marc8 ever
again. The only thing that should deal in Marc8 is an ILS, and should
only input
Specification for details on accessing alternate graphic
character sets (http://www.loc.gov/marc/specifications/speccharmarc8.html#alternative).
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jonathan Rochkind
Sent: Monday, October 24, 2011 2:01 PM
Is there any validation you can do other than checking the number of digits,
and the check digit? Checking the check digit is pretty simple... beyond
that... I don't think there's really any way to tell if the ISBN has _actually_
been assigned or not, or even if the prefix has been actually
If you're in ruby, I prefer this gem, which can check ISBN check digit as
well as convert from 10 to 13.
https://github.com/entangledstate/isbn
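(For illustration, a minimal sketch of the ISBN-13 check digit math -- not the
gem's own code, and the method name is made up: digits get alternating weights
of 1 and 3, and the total has to be a multiple of 10.)

  def valid_isbn13?(isbn)
    digits = isbn.delete("-").chars
    return false unless digits.length == 13 && digits.all? { |c| c =~ /\d/ }
    sum = digits.map(&:to_i).each_with_index.sum { |d, i| d * (i.even? ? 1 : 3) }
    (sum % 10).zero?
  end

  valid_isbn13?("978-0-306-40615-7")  # => true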
Good to find another isbn validator. I use: http://isbn-tools.rubyforge.org/
Not sure why there's two nearly identical gems, exactly.
Probably
in a shelf browse.
Quoting Jonathan Rochkind rochk...@jhu.edu:
Ah, but we're not talking about entry vocabulary, we're talking
about labelling shelf ranges.
At my job at UC we had a rule: if you display it, the user should be
able to search it and get those same results. If you display one set
in the
database), but the trained model is.
That's, admittedly, a bit of a downside to my fork (although the model
being checked into git is true of the original, as well) since you'd
always be in conflict with my trained model if you train your own.
-Ross.
On Monday, October 17, 2011, Jonathan Rochkind
For #1, you need what you asked for, probably, if you want the subject
headings to be consistent with other LCSH on the rest of your records.
For #2, you can provide a useful topical/subject type heading via much
simpler and more feasible solutions than mapping to LCSH. For #2, you
don't
ranges on display, I
think it's probably not worse than nothing.
Of course, that's up to the implementer, what's better than nothing.
Quoting Jonathan Rochkind rochk...@jhu.edu:
For #2, you can provide a useful topical/subject type heading via
much simpler and more feasible solutions than
Also, what, you guys have defected from the IRC channel to g+? Is that
why we never see you in IRC anymore, Roy? We miss you!
Many of us have been using the IRC channel for just this purpose for
years, and anyone is welcome to. Personally, I still haven't used g+,
and don't know when/if I
On 10/12/2011 11:14 AM, Emily Lynema wrote:
Tempted to use Serials Solutions' 360 Link API to create our own version of the
service window interface, but we've managed not to do that yet!
If you decide to, consider using Umlaut as a platform!
On 10/11/2011 1:45 PM, Mark Jordan wrote:
Love the idea, but the form is now throwing a 404 error on submission.
Hmm, weird. I'll try to figure out why. I think 404 is what the google
CSE gives you if you have a syntax error in your config files, but I
don't think I've changed em since it
Love the idea, but the form is now throwing a 404 error on submission.
Any chance it can be fixed?
Okay, it's fixed, at http://www.code4lib.org/custom_search/search_form.html
But the thing I can't fix is that the Google CSE is _weird_ about what
results it finds. It's kind of non-deterministic.
So I was in #code4lib, and skome asked about ideas for library hours.
And I recalled that there have been at least two articles in the C4L
Journal on this topic, so suggested them.
Then I realized that there's enough body of work in the Journal to be
worth searching there whenever you have an
(PS: Thanks a lot to ryan wick for spending time helping to get a
reasonable ruby environment installed on the code4lib.org server, so I
could then get my scripting done quickly and pleasantly.)
On 10/6/2011 9:35 PM, Jonathan Rochkind wrote:
So I was in #code4lib, and skome asked about ideas
Yeah, I think it ends up being pretty hard to create general-purpose
solutions to this sort of thing that are both not-monstrous-to-use and
flexible enough to do what everyone wants. Which is why most of the
'data warehouse' solutions you see end up being so terrible, in my
analysis.
I am
Thanks, I added this as a comment on the code4lib talk page from the conf.
If anyone else happens to be looking for a video and finds it, and you
want to add it to the code4lib talk page in question, it would probably
be useful for findability.
In the past I think someone bulk added the URLs
I spoke too soon; I wasn't able to actually add a comment: "Your comment
has been queued for moderation by site administrators and will be
published after approval." But I'm not sure if there's anyone actually
looking at that moderation queue. Sigh.
But my account has 'edit' abilities on the
Very nice, thanks.
I wonder about the rationale behind searching both valid and cancelled LCCNs.
This has caused me trouble in the past in similar systems, because a
cancelled LCCN seems in some cases to duplicate a different valid LCCN,
so you search on an LCCN, and get, in this case, both the
?
Ann
-Original Message-
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Wednesday, September 07, 2011 2:16 PM
To: Next generation catalogs for libraries
Cc: Della Porta, Ann; Code for Libraries
Subject: Re: [NGC4LIB] Permalink service for authority data now available at LC
Very
I agree with Brice; I think you might be over-thinking/over-architecting
it, although over-thinking is one of my sins too and I'm not always sure
how to get out of it.
But am I correct that you're going to be relying on user-submitted
content in large part? Then it's important to keep it simple,
I'm especially interested in anything which
gave you an ah-ha! moment when you were working with library data --
the implicit things which didn't make sense until you knew why those
crazy librarians did things the way they did.
I'd add that you should be open to accepting that some of
_probably_ a safe assumption. (Although it
wouldn't hurt if they added a data element marcCode or something with
the actual literal fq in it.)
On 6/22/2011 10:35 PM, Karen Coyle wrote:
Quoting Jonathan Rochkind rochk...@jhu.edu:
Right, so like I keep saying, as far as I can tell, those files
On 6/22/2011 11:25 PM, Ross Singer wrote:
Can't you use:
http://www.loc.gov/standards/codelists/gacs.xml
?
It's what I used to make marccodes.heroku.com/gacs/
Yes, I can! I didn't know about/hadn't found that one; it hadn't been
mentioned until now. Thanks! Where did you find that?
Can anyone remind me if there's a machine readable copy of the MARC
geographic codes available at any persistent URL?
They're in HTML at http://www.loc.gov/marc/geoareas/gacs_code.html . I
actually had a script that automatically downloaded from there and
scraped the HTML -- but sometime
Man, I figured it was there somewhere I just didn't know it.
If it's really not there, can we like start a campaign to convince LC
that part of maintaining the MARC vocabularies is making them available
at a persistent URL, in machine-readable fashion, updated and maintained
by them as
PS: Kyle, that's your own version? That's... sort of kind of machine
readable. Well, not really. I can't figure out quite what's going on
there, the label/value pairs are just stuffed in single javascript
string literals, separated by newlines, or sometimes (but sometimes not)
with Assigned
Aha, that's probably what I need. And now I remember Ross probably
pointed that out to me before.
I'm still having trouble figuring out how to get from the rdf-triples
it's got there to a hash of codes (as they appear in marc records, not
URIs), to labels.
It seems like it in fact will be a
It can be found at
http://id.loc.gov/vocabulary/geographicAreas.html
Look near the bottom of the page for links to the codes as RDF, N-triples,
and JSON.
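(If it helps, a rough sketch of turning one of those downloads into a
code => label hash. Assumptions to verify against the actual file: the GAC
code is the last path segment of each subject URI, the labels come out as
skos:prefLabel literals, and 'gacs.nt' is just whatever N-triples file you
saved locally.)

  PREF_LABEL = 'http://www.w3.org/2004/02/skos/core#prefLabel'

  codes = {}
  File.foreach('gacs.nt') do |line|
    # crude N-triples match: <subject> <prefLabel predicate> "literal"
    next unless line =~ /^<([^>]+)>\s+<#{Regexp.escape(PREF_LABEL)}>\s+"([^"]*)"/
    uri, label = $1, $2
    codes[uri.split('/').last] = label
  end
  # codes would then look something like {"n-us" => "United States", ...}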
Right, so like I keep saying, as far as I can tell, those files are lists of
URLs, one for each code. (Or technically lists of
The result was that a few meetings later LC announced that they
had coded the MARC online pages in XML, and were generating the HTML
from that. I think I was misunderstood.
No doubt, but man if they'd then just SHARE that XML with us at a persistent
URL, and keep the structure of that XML
On 6/16/2011 11:35 AM, Dan Scott wrote:
You're aware of the recent addition of the OpenLibrary Read API, which is meant
to simplify exactly this problem, right?
I'm still a bit confused/miffed about the fact that Internet Archive has
MANY texts which are not included in Open Library. Like
I think the vast majority of libraries can make javascript-only changes
to their OPAC interfaces, which is all Dan's approach requires. Even III
libraries do that.
IF they have any programming staff at all with a bit of time and are
willing to do such hacks. That might be an 'if' that's not
I honestly don't think it's a disaster if the registration fee approaches
$200 either. (I realize you said $200 in _addition_ to the usual $125,
I'm saying $200, heh).
I think $200 is about the max that seems okay to me, but $200 does seem okay.
That's still a good price for the conf, and still fairly
On 6/15/2011 9:31 AM, Eric Hellman wrote:
Clearly, Jonathan has gone through the process of getting his library to think
through the integration, and it seems to work.
Thank you!
Has there been any opposition?
Not opposition exactly, but it doesn't work perfectly, and people are
unhappy
So maybe part of the problem is our venue voting system -- people vote
for flashy locations, which are also expensive locations. The people
voting (which is anyone who wants to) don't necessarily consider all the
ramifications (don't necessarily have the experience/background to do so
even if
On 6/15/2011 12:51 PM, Susan Kane wrote:
great
visibility with influential folks for a fraction of the cost of ALA!
That's an interesting point too -- you pay for a booth at ALA ($),
you DO reach a whole lot of people, but it's a lot more expensive than
even our 'platinum' sponsorship,
On 6/15/2011 10:55 AM, Karen Coyle wrote:
I've been struggling with this around the Open Library digital texts:
how can we make them available to libraries through their catalogs?
When I look at the install documentation for Umlaut [1](I was actually
hoping to find a technical requirements
On 6/15/2011 1:46 PM, Kevin S. Clarke wrote:
What is the problem we're trying to solve again? Do we think that the
recent conferences have cost too much for the attendees? That this
year's will cost too much? Are we worried about not finding places to
host in the future? Are we worried
I doubt anyone is particularly wedded to the particularities of the
current theme. It probably doesn't matter, as long as you can put the
code4lib logo at the top with a banner-menu, if the theme changes, even
significantly. As long as it has pretty much the same functionality
exposed that it
On 6/15/2011 5:43 PM, Peter Noerr wrote:
And it is available - in our commercial software (not a plug - we don't sell
it, just noting that it is not the sort of thing to try yourself on any scale -
it takes a lot of resources).
I wouldn't go that far -- I _have_ done it myself, at the
That's an interesting idea, I might try creating author fields with
Soundex normalization rather than the standard English language
'stemming' normalization.
Still curious to get more feedback on what others have done, even if you
didn't consider it carefully, if you're doing it in production
Hey Erik, in that wiki documentation the example it gives is:
<filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone"
inject="true"/>
Do you know what that 'inject' argument is about, and where (if
anywhere) I'd find it (and other available arguments for
PhoneticFilterFactory, which
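(As I understand it, inject="true" means the phonetic-encoded tokens are added
alongside the original tokens rather than replacing them. A sketch of where
such a filter might sit in schema.xml -- the fieldType name is made up, so
check the wiki for the full argument list:)

  <fieldType name="text_phonetic" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- inject="true": keep the original tokens as well as the encodings -->
      <filter class="solr.PhoneticFilterFactory"
              encoder="DoubleMetaphone" inject="true"/>
    </analyzer>
  </fieldType>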
When sponsors have sponsored pre-conf activities, that 'sponsoring' of
pre-confs was just that their staff were the
presenters/facilitators/instructors at those pre-confs. So that is more
exposure, but it was formally unconnected with their sponsorship
donation -- in the sense that _anyone_
On 6/14/2011 12:14 PM, Mark Jordan wrote:
-before negotiating with sponsors, have a policy on whether sponsorship gets
them a slot on the program. IIRC there was a long discussion about this on the
c4l planning list.
That is the thing the community has really not liked the idea of in the
On 6/14/2011 4:00 PM, Kyle Banerjee wrote:
Or maybe the conf has gotten more expensive such that we need more
money and thus more incentive to sponsor. (First priority -- try to keep the
conf from getting more expensive so this doesn't happen)
Costs can be kept down by securing
On 6/14/2011 5:34 PM, Kyle Banerjee wrote:
C4l was much smaller then. The smaller the event, the less complicated
things are and the more options you have. There are quite a few
regional c4l events. We held one for a capacity crowd in Portland
yesterday. It was about the same size as the
In a Solr-based search, stemming is done at indexing time, into fields with
stemmed tokens.
It seems typical in library-catalog type applications based on Solr to have the
default (or even only) searches be over these stemmed fields, thus
'auto-stemming' to the user. (Search for 'monkey', find
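(A sketch of the usual setup, with made-up field names: copy the source field
into a stemmed field whose analyzer includes a stemming filter, and point the
default search at the stemmed field. The same analyzer runs on queries, which
is what makes 'monkey' match 'monkeys'.)

  <fieldType name="text_stemmed" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- stems tokens as they are indexed, and again as queries are analyzed -->
      <filter class="solr.SnowballPorterFilterFactory" language="English"/>
    </analyzer>
  </fieldType>

  <field name="title_stemmed" type="text_stemmed" indexed="true" stored="false"/>
  <copyField source="title" dest="title_stemmed"/>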
I wonder if you could get by using Google Calendar as the 'interface', consumed
via client and then published in whatever human-readable and semantic formats
you wanted. I _think_ Google Calendar lets you create repeating events, which
should then be exposed by its iCal API.
I think of this
On 6/1/2011 10:46 PM, Frumkin, Jeremy wrote:
that content for the user? If we are indeed trying to meet our users'
needs, perhaps we need not to continue to build just-in-case collections,
but provide just-in-time access to information resources, regardless of
their location, and perhaps even
I think we (the community) might be able to connect you with interested
wikipedians (some librarians, some not) who would be willing to help you
shepherd it through wikipedia approval, if you're interested.
On 6/2/2011 10:40 AM, Ralph LeVan wrote:
Yes, the bot was approved, but in a much more
On 6/2/2011 2:25 PM, Rod McFarland wrote:
If Omeka had a desktop client I would fight to have
it replace CDM here, but I don't think CDM would go away even if we
brought in Omeka.
Just curious why you prefer a desktop client to a web client. Do you
find Omeka's web client to be not a good
So, selecting which public domain, free-on-the-internet works should be included
in the catalog (presumably considering both quality of digital copy and
quality/usefulness of the work itself), keeping track of all of them in
their various locations, adding links to them all to our
Neat!
Just tried the human-displayed links off the Immanuel Kant wikipedia
page (http://en.wikipedia.org/wiki/Immanuel_Kant), created by the
'Authority Control' template that Daniel or someone else added.
VIAF one works great, taking me to the human readable VIAF page.
PND one seems to work
Multi-word synonyms are tricky.
You probably want to make sure this synonym is only expanded at index
time, and not at search time. See some background in the
SynonymFilterFactory section of
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
I think the synonym approach is a fine
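(Just to sketch what index-time-only expansion looks like in schema.xml -- the
synonym file and the mapping in it are made up, and multi-word entries are
exactly where this gets tricky:)

  # synonyms.txt (example entry):
  # bb, b flat

  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>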
music students tend to use # and b. (Concerto in F#
minor for Bb Bass Clarinet).
Thomas
On 05/31/2011 11:59 AM, Jonathan Rochkind wrote:
Multi-word synonyms are tricky.
You probably want to make sure this synonym is only expanded at index
time, and not at search time. See some background
In addition to the approaches you note, might be worth investigating
this tool that came up in a thread just a few days ago on this list:
http://wikipedia-miner.sourceforge.net/
I think nobody's done enough with this yet to be sure what will work
best, I think you're going to have to
Another problem with free online resources is not just 'collection
selection', but maintenance/support once selected. A resource hosted
elsewhere can stop working at any time, which is a management challenge.
The present environment is ALREADY a management challenge, of course.
But consider the
Curious what script you've used that isn't production ready -- I don't
think you meant to post in the URL for the JQuery library?
On 5/19/2011 10:39 AM, Karen Coyle wrote:
This sounds like a great way to translate from library forms to
wikipedia name forms. But for on-the-fly use I wonder if
On 5/19/2011 11:01 AM, graham wrote:
Replying to Jonathan's mail rather at random, since several people are
saying similar things.
1. 'Free resources can vanish any time.' But so can commercial ones,
which is why LOCKSS was created. This isn't an insoluble issue or one
unique to free resources.
Now whether it _means_ what you want it to mean is another question,
yeah. As Andreas said, I don't think that particular example _ought_ to
have two 856's.
But it ought to be perfectly parseable marc.
If your 'patch' is to make ruby-marc combine those multiple 856's into
one -- that is not
I'm curious what's going on here, it doesn't make any sense.
Do you just mean that your MARC file has more than one 856 in it? That's
what your pasted marc looks like, but that is definitely legal, AND I've
parsed many many marc files with more than one 856 in them with
ruby-marc; it was not
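(For what it's worth, reading repeated 856s with ruby-marc is about this
simple -- the file name is made up:)

  require 'marc'

  MARC::Reader.new('records.mrc').each do |record|
    record.fields('856').each do |field|
      puts field['u']   # one line per 856, its subfield u
    end
  end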
I wonder if it depends on if your record is in Marc8 or UTF-8, if I'm
reading Karen right to say that CR/LF aren't in the Marc8 character set.
They're certainly in UTF-8! And a Marc record can be in UTF-8.
On 5/19/2011 2:27 PM, Jonathan Rochkind wrote:
Is it really true that newline
On 5/19/2011 2:33 PM, Kyle Banerjee wrote:
However, what would be the use case for including them as you don't
know how
they'll be interpreted by the app that you hand the data to?
Only when the destination is an app you have complete control over too.
One use case I was idly turning over in
On 5/19/2011 2:33 PM, Reese, Terry wrote:
Jonathan,
Karen is correct -- CR/LF are invalid characters within a MARC record. This
has nothing to do with whether the character is valid in the set -- the format itself
doesn't allow it.
I'm curious where in the spec it says this -- of course, it's an
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jonathan Rochkind
Sent: Thursday, May 19, 2011 1:27 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re
Do you mean ones not under copyright?
On 5/17/2011 3:16 PM, Eric Hellman wrote:
Some ebooks, in fact some of the greatest ever written, already cost less than
razor blades.
Eric
(who just finished writing a chapter on open-access e-books)
On May 16, 2011, at 7:52 PM, Luciano Ramalho wrote:
On 5/16/2011 7:52 PM, Luciano Ramalho wrote:
And then we need to consider the rise of the Kindle. An ebook costs
about $1.60 in 1962 dollars. A thousand ebooks can fit on one device,
1) Why quote the ebook price in 1962 dollars? The reality in 2011 is
that Kindle books in general are too
programming.reddit.com
On 5/2/2011 11:04 AM, Yitzchak Schaffer wrote:
Hello all,
In the spirit of last week's inspiring and procrastination-enhancing
thread on what-to-learn, a new survey: what tech/library news outlets
and blogs do folks follow? My list follows, in the sections I use in
my
Neither vector nor raster information describes the actual embedded
_text_ we're talking about though. The stuff that lets you
copy-and-paste _text_ (not images), or search text. PDFs can also have
that, and can even know what portions of a raster displayed image
correspond to what characters.
This is a great idea, thanks for sharing.
On 4/27/2011 9:10 AM, Van Mil, James (vanmiljf) wrote:
Hi everyone! (first post!)
We've been getting lots of feedback at my library about the problem with the NY
Times paywall and the lack of institutional access to their website, but we do
have a
Any idea how those got there, Roy? Manually added by Catalogers? (To
what MARC field, just an 856?). Added by OCLC processing somehow?
On 4/27/2011 12:14 PM, Roy Tennant wrote:
For what it's worth, I see over 7,000 links to IMDB from WorldCat records.
Roy
On Wed, Apr 27, 2011 at 9:01 AM,
Sure, I've experimented myself with getting around the paywall's
restrictions, it's not hard.
It's not something I would suggest my organization publically (or even
privately, really) recommend to users or instruct users in how to do,
however.
There's a role for libraries in this stuff, but
On 4/17/2011 10:58 AM, Bill Dueber wrote:
At the same time, I'm finding it hard to determine if we're converging on
"when trying to turn LCSH into reasonable facets, here's what you need to
do" or "when trying to turn LCSH into reasonable facets, you haven't got
a freakin' prayer." Can someone
So, yeah, I learned a little bit about this recently, the overall DOI
environment is a bit confusing to understand exactly what the options
and trade-offs are.
So there are various DOI top-level registrars that can register DOIs. (I
don't know if registrar is actually the name DOI uses for
I like that you said when rather than if, heh.
Have you guys at VIAF made it clear to LC that you'd consider them publishing
in linked data to be a complement to VIAF, rather than duplication? I think
maybe some people think it'd be duplication, which I think is not true.
-Original
XML well-formedness and validity checks can't find badly encoded
characters either -- char data that claims to be one encoding but is
really another, or that has been double-encoded and now means something
different than intended.
There's really no way to catch that but heuristics. All of
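One crude heuristic sketch (my own, and definitely not a general solution):
UTF-8 data that got decoded as Latin-1 and re-encoded tends to surface as 'Ã'
or 'Â' followed by a character in the U+0080-U+00BF range, so you can at least
flag suspicious strings:

  def looks_double_encoded?(str)
    # matches the telltale two-character sequences left by double encoding
    str.scan(/[ÃÂ][\u0080-\u00BF]/).any?
  end

  looks_double_encoded?("MÃ©xico")  # => true ("México" after double encoding)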
On 4/7/2011 10:46 AM, Houghton,Andrew wrote:
to go to the name authority record 150 England with LCCN n82068148. Currently
under id.loc.gov you will not find name authority records,
If this were to change, so that name authority record elements used in 6xx
subject cataloging were in id.loc.gov, it
On 4/7/2011 1:21 PM, Houghton,Andrew wrote:
That is probably correct. England may appear as both a 110 *and* a 151 because
the 110 signifies the concept for the country entity while the 151 signifies
the concept for the geographic place. A subtle distinction...
This starts getting into
] On Behalf Of
Jonathan Rochkind
Sent: Wednesday, April 06, 2011 9:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARC magic for file
Can't you have a legal MARC file that does NOT have 4500 in those
leader positions? It's just not legal Marc21, right? Other marc
formats may specify
Actually -- I'd disagree because that is a very narrow view of the
specification. When validating MARC, I'd take the approach to validate
structure (which allows you to then read any MARC format) -- then use a
separate process for validating content of fields, which in my opinion,
is more open
On 4/6/2011 2:02 PM, Kyle Banerjee wrote:
I'd go so far as to question the value of validating redundant data that
theoretically has meaning but which are never supposed to vary. The 4 and
the 5 simply repeat what is already known about the structure of the MARC
record. Choking on stuff like
On 4/6/2011 2:43 PM, William Denton wrote:
Validity does mean something definite ... but Postel's Law is a good
guideline, especially with the swamp of bad MARC, old MARC, alternate
MARC, that's out there. Valid MARC is valid MARC, but if---for the sake
of file and its magic---we can identify
I am not familiar with that Perl module. But I'm more familiar than I'd
want with char encoding in Marc.
I don't recognize the bytes 0xC2 (there are some bytes I became
pathetically familiar with in past debugging, but I've forgotten em),
but the first things to look at:
1. Is your Marc file
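(One quick thing to check along those lines, with a made-up file name: leader
byte 9 is 'a' for records that claim to be Unicode/UTF-8 and blank for MARC-8,
so you can at least see what the record _claims_ to be:)

  leader = File.open('records.mrc', 'rb') { |f| f.read(24) }
  puts leader[9] == 'a' ? 'declares itself UTF-8' : 'declares MARC-8 (or unset)'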
-- the 0xC2 code is the sound recording marker in
MARC-8. I'd guess the file isn't in UTF8.
--TR
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jonathan Rochkind
Sent: Wednesday, April 06, 2011 1:28 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re
Does anyone have a good regular expression that will match all legal LC
Call Numbers from the LC Classified Schedule, but will generally not
match things that could not possibly be an LC Call Number from the LC
Classified Schedule?
In particular, I need it to NOT match an MLC call number,
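(For a rough starting point -- one to three letters, a class number with an
optional decimal, up to a couple of cutters, an optional year -- something
like this sketch, which is nowhere near a complete validator and would need
tightening:)

  LC_CALLNO = /\A[A-Z]{1,3}\s?\d{1,4}(\.\d+)?(\s?\.?[A-Z]\d+){0,2}(\s+\d{4})?\z/

  LC_CALLNO =~ "PS3566.Y55 Z46 1998"   # => 0   (matches)
  LC_CALLNO =~ "MLC"                   # => nil (no match)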
://code.google.com/p/library-callnumber-lc/wiki/Home
You may want to remove the prefix part, and allow for a fourth cutter.
The folks at UNC pointed me to this a few months ago.
-Tod
On Mar 31, 2011, at 11:29 AM, Jonathan Rochkind wrote:
Does anyone have a good regular expression that will match all legal
' or other broad category, either directly from the LCC
schedule labels, or using a mapping like umich's:
http://www.lib.umich.edu/browse/categories/
But if it's not really an LCC at all, and you try to map it, you'll get
bad postings.
On 3/31/2011 1:03 PM, Jonathan Rochkind wrote:
Thanks
I think the cookie pusher method is inherently flawed, with lots of
problematic edge cases like this.
I simply don't use it. Yes, that creates other problems of its own.
The fundamental problem with the whole DOI resolution design is ignoring
the appropriate copy problem, not sure there's
Here's the story of one library's approach to that from a Code4Lib
Journal article:
http://journal.code4lib.org/articles/2941
On 3/7/2011 4:24 PM, Rosalyn Metz wrote:
Hi Everyone,
I was wondering if you had suggestions for an online room reservation
system. I feel like people have asked
Thanks very much, excellent.
On 3/7/2011 5:26 PM, Serials Solutions Library Support wrote:
*To update this question by email, please reply to this message.
Because your reply will be automatically processed, please enter your
reply in the space below. *
[=== Please enter your reply BELOW