Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Birkin Diana
 ...you'd want to create a caching service...


One solution for a relevant particular problem (not full-blown linked-data 
caching):

http://en.wikipedia.org/wiki/XML_Catalog

excerpt: "However, if they are absolute URLs, they only work when your network 
can reach them. Relying on remote resources makes XML processing susceptible to 
both planned and unplanned network downtime."

We'd heard about this a while ago, but, Jodi, you and David Riordan and 
Congress have caused a temporary retreat from normal sprint-work here at Brown 
today to investigate implementing this!  :/

The particular problem that would affect us: if your processing tool checks, 
say, a loc.gov MODS namespace URL, that processing will fail if the loc.gov 
URL isn't available, unless you've implemented an XML catalog, which is a formal 
way to resolve such external references locally.
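
In code, the local-resolution idea looks roughly like this (a Python sketch of the concept only: real XML Catalogs are XML files consumed by your XML toolchain, e.g. libxml2 via XML_CATALOG_FILES, and the local file paths below are made up):

```python
# Sketch of the local-resolution idea behind XML Catalogs: map remote
# namespace/schema URLs to local copies so processing never needs the network.
# The local paths are illustrative, not a real catalog.

LOCAL_CATALOG = {
    "http://www.loc.gov/mods/v3": "schemas/mods-v3.xsd",
    "http://www.loc.gov/MARC21/slim": "schemas/MARC21slim.xsd",
}

def resolve(uri: str) -> str:
    """Return a local path for a known URI; fall back to the URI itself."""
    return LOCAL_CATALOG.get(uri, uri)

print(resolve("http://www.loc.gov/mods/v3"))   # schemas/mods-v3.xsd
print(resolve("http://example.org/other"))     # http://example.org/other
```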

-b
---
Birkin James Diana
Programmer, Digital Technologies
Brown University Library
birkin_di...@brown.edu


On Sep 30, 2013, at 7:15 AM, Uldis Bojars capts...@gmail.com wrote:

 What are best practices for preventing problems in cases like this when an
 important Linked Data service may go offline?
 
 --- originally this was a reply to Jodi which she suggested to post on the
 list too ---
 
 A safe [pessimistic?] approach would be to say we don't trust [reliability
 of] linked data on the Web as services can and will go down and to cache
 everything.
 
 In that case you'd want to create a caching service that would keep updated
 copies of all important Linked Data sources and a fall-back strategy for
 switching to this caching service when needed. Like archive.org for Linked
 Data.
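 
 A minimal sketch of that fall-back behavior, with the fetch function injected so it runs without a network (the class, the fetch stand-in, and the example URI are all illustrative; a real service would do HTTP GETs and persist its cache to disk):
 
 ```python
 # Sketch of a cache-with-fallback fetcher for Linked Data sources.
 
 SERVICE_UP = True  # simulates the remote service's availability
 
 def fetch_live(uri):
     """Stand-in for an HTTP GET of `uri`; raises when the service is down."""
     if not SERVICE_UP:
         raise IOError("service unavailable")
     return "<rdf for %s>" % uri
 
 class CachingFetcher:
     def __init__(self, fetch):
         self.fetch = fetch   # callable: uri -> document (may raise)
         self.cache = {}      # uri -> last successfully fetched copy
 
     def get(self, uri):
         try:
             doc = self.fetch(uri)    # prefer the live source
             self.cache[uri] = doc    # refresh the local copy
             return doc
         except Exception:
             if uri in self.cache:    # source down: fall back to the cache
                 return self.cache[uri]
             raise                    # down and never cached: nothing to serve
 
 f = CachingFetcher(fetch_live)
 uri = "http://id.loc.gov/authorities/subjects/sh12345678"  # made-up id
 f.get(uri)          # fetched and cached while the service is up
 SERVICE_UP = False  # the shutdown
 print(f.get(uri))   # still answered, from the cache
 ```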
 
 Some semantic web search engines might already have subsets of Linked Data
 web cached, but not sure how much they cover (e.g., if they have all of LoC
 data, up-to-date).
 
 If one were to create such a service how to best update it, considering
 you'd be requesting *all* Linked Data URIs from each source? An efficient
 approach would be to regularly load RDF dumps for every major source if
 available (e.g., LoC says - here's a full dump of all our RDF data ... and
 a .torrent too).
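 
 Assuming a source does publish N-Triples dumps, the local indexing step might look like this toy sketch (it handles only the simplest `<s> <p> <o> .` lines, and the example triple is invented; real code would use a proper RDF parser):
 
 ```python
 # Toy N-Triples indexer: group a dump's triples by subject so lookups
 # for any URI can be answered locally.
 
 def index_ntriples(lines):
     index = {}
     for line in lines:
         line = line.strip()
         if not line or line.startswith("#"):
             continue
         # strip the trailing " ." and split into subject/predicate/object
         s, p, o = line.rstrip(" .").split(None, 2)
         index.setdefault(s, []).append((p, o))
     return index
 
 dump = [
     '<http://id.loc.gov/authorities/subjects/sh12345678> '
     '<http://www.w3.org/2004/02/skos/core#prefLabel> "Example heading" .',
 ]
 idx = index_ntriples(dump)
 print(idx["<http://id.loc.gov/authorities/subjects/sh12345678>"])
 ```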
 
 What do you think?
 
 Uldis
 
 
 On 29 September 2013 12:33, Jodi Schneider jschnei...@pobox.com wrote:
 
 Any best practices for caching authorities/vocabs to suggest for this
 thread on the Code4Lib list?
 
 Linked Data authorities & vocabularies at Library of Congress (id.loc.gov)
 are going to be affected by the website shutdown -- because of lack of
 government funds.
 
 -Jodi


Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Jodi Schneider
Interesting -- thanks, Birkin -- and tell us what you think when you get it
implemented!

:) -Jodi


On Mon, Sep 30, 2013 at 5:19 PM, Birkin Diana birkin_di...@brown.edu wrote:

 One solution for a relevant particular problem (not full-blown linked-data 
 caching): http://en.wikipedia.org/wiki/XML_Catalog



Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Becky Yoose
FYI - this also means that there's a very good chance that the MARC
standards site [1] and the Source Codes site [2] will be down as well. I
don't know if there are any mirror sites out there for these pages.

[1] http://www.loc.gov/marc/
[2] http://www.loc.gov/standards/sourcelist/index.html

Thanks,
Becky, about to be (forcefully) departed with her standards documentation


On Mon, Sep 30, 2013 at 11:39 AM, Jodi Schneider jschnei...@pobox.com wrote:

 Interesting -- thanks, Birkin -- and tell us what you think when you get it
 implemented!

 :) -Jodi



Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Ford, Kevin
All *.loc.gov web sites will be closed, including the two you quoted.

The Internet Archive's Wayback Machine is probably your best bet for these 
types of things:

http://web.archive.org/web/*/http://www.loc.gov/marc/
http://web.archive.org/web/*/http://www.loc.gov/standards/sourcelist/index.html
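
For scripting, those Wayback URLs follow a predictable pattern (a small helper sketch; the function names are made up, and the 14-digit YYYYMMDDhhmmss timestamp form requests the capture closest to that time, while "*" lists all captures):

```python
# Build Wayback Machine URLs for pages that may be going offline.

def wayback_all_captures(url):
    """URL listing every archived capture of `url`."""
    return "http://web.archive.org/web/*/" + url

def wayback_at(url, timestamp):
    """URL of the capture closest to `timestamp` (YYYYMMDDhhmmss)."""
    return "http://web.archive.org/web/%s/%s" % (timestamp, url)

print(wayback_all_captures("http://www.loc.gov/marc/"))
print(wayback_at("http://www.loc.gov/marc/", "20130816154112"))
```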

Yours,
Kevin

--
Kevin Ford
Network Development and MARC Standards Office
Library of Congress
Washington, DC


 -----Original Message-----
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Becky Yoose
 Sent: Monday, September 30, 2013 4:32 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1
 
 FYI - this also means that there's a very good chance that the MARC
 standards site [1] and the Source Codes site [2] will be down as well. I
 don't know if there are any mirror sites out there for these pages.
 
 [1] http://www.loc.gov/marc/
 [2] http://www.loc.gov/standards/sourcelist/index.html


Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Erik Hetzner
At Mon, 30 Sep 2013 15:31:40 -0500,
Becky Yoose wrote:
 
 FYI - this also means that there's a very good chance that the MARC
 standards site [1] and the Source Codes site [2] will be down as well. I
 don't know if there are any mirror sites out there for these pages.
 
 Thanks,
 Becky, about to be (forcefully) departed with her standards documentation

Hi Becky,

Well, there’s always archive.org:

http://web.archive.org/web/20130816154112/http://www.loc.gov/marc/

best, Erik
Sent from my free software system http://fsf.org/.


Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Becky Yoose
Ah, I forgot about the Wayback Machine. Thank you :cD


On Mon, Sep 30, 2013 at 3:44 PM, Ford, Kevin k...@loc.gov wrote:

 All *.loc.gov web sites will be closed, including the two you quoted.

 The Internet Archive's Way Back Machine is probably your best bet for
 these types of things:

 http://web.archive.org/web/*/http://www.loc.gov/marc/

 http://web.archive.org/web/*/http://www.loc.gov/standards/sourcelist/index.html

 Yours,
 Kevin






Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Roy Tennant
As seen on Twitter, OCLC also has our version of MARC documentation here:

http://www.oclc.org/bibformats/en.html

It's mostly exactly the same except for the places where we have inserted
small but effective messages that RESISTANCE IS FUTILE, YOU WILL BE
ASSIMILATED.
Roy


On Mon, Sep 30, 2013 at 1:31 PM, Becky Yoose b.yo...@gmail.com wrote:

 FYI - this also means that there's a very good chance that the MARC
 standards site [1] and the Source Codes site [2] will be down as well. I
 don't know if there are any mirror sites out there for these pages.

 [1] http://www.loc.gov/marc/
 [2] http://www.loc.gov/standards/sourcelist/index.html



Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Debra Shapiro
And of course http://dewey.info/ will still work no matter what the feds do …

I was gonna say something about still being able to use LCSH and LCNAF via 
Connexion, but that's really mostly for humans <grin>

deb


On Sep 30, 2013, at 3:58 PM, Becky Yoose wrote:

 And the OCLC Seal of Approval...
 
 

dsshap...@wisc.edu
Debra Shapiro
UW-Madison SLIS
Helen C. White Hall, Rm. 4282
600 N. Park St.
Madison WI 53706
608 262 9195
mobile 608 712 6368
FAX 608 263 4849


Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Kyle Banerjee
If all people need is to look up MARC tags, there is also the Cataloging
Calculator: http://calculate.alptown.com/  Unless you want to feel totally
disgusted, avoid looking at the source code: it was my first JavaScript
program, cobbled together in a day (i.e., it is garbage), and it hasn't
gone through a substantial revision since 1997. The good news is that
if you're still on Netscape 4.0, it should work fine...

kyle


On Mon, Sep 30, 2013 at 1:56 PM, Roy Tennant roytenn...@gmail.com wrote:

 As seen on Twitter, OCLC also has our version of MARC documentation here:

 http://www.oclc.org/bibformats/en.html

 It's mostly exactly the same except for the places where we have inserted
 small but effective messages that RESISTANCE IS FUTILE, YOU WILL BE
 ASSIMILATED.
 Roy



Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Roy Tennant
Netscape 4.0 is out? Gosh, but it sure is hard to keep up!
Roy


On Mon, Sep 30, 2013 at 2:06 PM, Kyle Banerjee kyle.baner...@gmail.com wrote:

 If all people need is to look up MARC tags, there is also the Cataloging
 Calculator: http://calculate.alptown.com/  Unless you want to feel totally
 disgusted, avoid looking at the source code: it was my first JavaScript
 program, cobbled together in a day (i.e., it is garbage), and it hasn't
 gone through a substantial revision since 1997. The good news is that
 if you're still on Netscape 4.0, it should work fine...

 kyle


Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Becky Yoose
Cheers, deb!

 I was gonna say something about still being able to use LCSH and LCNAF via
 Connexion, but that's really mostly for humans <grin>

Well, at least for those who have Connexion in the first place ;c)

I'm trying to cover all the bases for those catalogers who are panicking
about Authorities and standards sites going dark tomorrow. Again, Wayback
Machine slipped my mind for the standards sites :cP Please forgive me - I
think I have a case of the Mondays...


On Mon, Sep 30, 2013 at 4:03 PM, Debra Shapiro dsshap...@wisc.edu wrote:

 And of course http://dewey.info/ will still work no matter what the feds
 do …

  I was gonna say something about still being able to use LCSH and LCNAF via
  Connexion, but that's really mostly for humans <grin>

 deb





Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Kyle Banerjee
It appeared very recently (depending on your timeframe) -- but that version
is absolutely necessary, because the JavaScript support in 3.0 couldn't
handle what I needed to do. And I had no access to CGI at the time I wrote
it, so server-side action that might have accommodated Mosaic aficionados
was out of the question...

kyle


On Mon, Sep 30, 2013 at 2:08 PM, Roy Tennant roytenn...@gmail.com wrote:

 Netscape 4.0 is out? Gosh, but it sure is hard to keep up!
 Roy


 On Mon, Sep 30, 2013 at 2:06 PM, Kyle Banerjee kyle.baner...@gmail.com
 wrote:

  If all people need is to look up MARC tags, there is also the Cataloging
  Calculator http://calculate.alptown.com/  Unless you want to want to
 feel
  totally disgusted, avoid looking source code as it was my first
 javascript
  program which was cobbled together in a day (i.e. it is garbage) and
 hasn't
  been gone through a substantial revision since 1997. The good news is
 that
  if you're still on Netscape 4.0, it should work fine...
 
  kyle
 
 
  On Mon, Sep 30, 2013 at 1:56 PM, Roy Tennant roytenn...@gmail.com
 wrote:
 
   As seen on Twitter, OCLC also has our version of MARC documentation
 here:
  
   http://www.oclc.org/bibformats/en.html
  
   It's mostly exactly the same except for the places where we have
 inserted
   small but effective messages that RESISTANCE IS FUTILE, YOU WILL BE
   ASSIMILATED.
   Roy
  
  
   On Mon, Sep 30, 2013 at 1:31 PM, Becky Yoose b.yo...@gmail.com
 wrote:
  
FYI - this also means that there's a very good chance that the MARC
standards site [1] and the Source Codes site [2] will be down as
 well.
  I
don't know if there are any mirror sites out there for these pages.
   
[1] http://www.loc.gov/marc/
[2] http://www.loc.gov/standards/sourcelist/index.html
   
Thanks,
Becky, about to be (forcefully) departed with her standards
  documentation
   
   
On Mon, Sep 30, 2013 at 11:39 AM, Jodi Schneider 
 jschnei...@pobox.com
wrote:
   
 Interesting -- thanks, Birkin -- and tell us what you think when
 you
   get
it
 implemented!

 :) -Jodi


 On Mon, Sep 30, 2013 at 5:19 PM, Birkin Diana 
  birkin_di...@brown.edu
 wrote:

    ...you'd want to create a caching service...


   One solution for a particular, relevant problem (not full-blown
   linked-data caching):

   http://en.wikipedia.org/wiki/XML_Catalog

   excerpt: "However, if they are absolute URLs, they only work when your
   network can reach them. Relying on remote resources makes XML processing
   susceptible to both planned and unplanned network downtime."

   We'd heard about this a while ago, but, Jodi, you and David Riordan and
   Congress have caused a temporary retreat from normal sprint-work here at
   Brown today to investigate implementing this!  :/

   The particular problem that would affect us: if your processing tool
   checks, say, an loc.gov MODS namespace URL, that processing will fail if
   the loc.gov URL isn't available, unless you've implemented XML Catalog,
   which is a formal way to locally resolve such external references.
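
   For illustration, a minimal OASIS XML Catalog for this scenario might
   look like the following sketch. The local file paths (and the choice of
   the MODS 3.4 schema) are hypothetical; point them at wherever you keep
   your mirrored copies:

```xml
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- Resolve one specific remote schema to a local copy -->
  <uri name="http://www.loc.gov/standards/mods/v3/mods-3-4.xsd"
       uri="file:///usr/local/share/xml/mods-3-4.xsd"/>
  <!-- Or rewrite any loc.gov standards URL to a local mirror -->
  <rewriteURI uriStartString="http://www.loc.gov/standards/"
              rewritePrefix="file:///usr/local/share/xml/loc-standards/"/>
</catalog>
```

   libxml2-based tools (xmllint, lxml, and friends) will consult such a
   catalog if it is listed in the XML_CATALOG_FILES environment variable or
   installed as /etc/xml/catalog.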
 
  -b
  ---
  Birkin James Diana
  Programmer, Digital Technologies
  Brown University Library
  birkin_di...@brown.edu
 
 
    On Sep 30, 2013, at 7:15 AM, Uldis Bojars capts...@gmail.com wrote:
 
     What are best practices for preventing problems in cases like this,
     when an important Linked Data service may go offline?

     --- originally this was a reply to Jodi, which she suggested posting
     on the list too ---
  
     A safe [pessimistic?] approach would be to say we don't trust
     [reliability of] linked data on the Web, as services can and will go
     down, and to cache everything.
  
     In that case you'd want to create a caching service that would keep
     updated copies of all important Linked Data sources, and a fall-back
     strategy for switching to this caching service when needed. Like
     archive.org for Linked Data.
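
     That fall-back idea can be sketched in a few lines. This is a toy
     version, not anything from the thread: the cache layout, directory
     name, and function name are all illustrative.

```python
import pathlib
import urllib.request

CACHE_DIR = pathlib.Path("linked_data_cache")  # hypothetical local cache


def fetch_with_fallback(uri: str, timeout: float = 5.0) -> bytes:
    """Try the live Linked Data source first; fall back to the cached copy."""
    cache_file = CACHE_DIR / uri.replace(":", "_").replace("/", "_")
    try:
        with urllib.request.urlopen(uri, timeout=timeout) as resp:
            data = resp.read()
        CACHE_DIR.mkdir(exist_ok=True)
        cache_file.write_bytes(data)  # refresh the cache on every success
        return data
    except OSError:  # DNS failure, refused connection, timeout, HTTP error...
        if cache_file.exists():
            return cache_file.read_bytes()
        raise
```

     A real service would also want cache expiry, HTTP conditional requests,
     and content-type preservation; the point here is only the fall-back shape.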
  
     Some semantic web search engines might already have subsets of the
     Linked Data web cached, but I'm not sure how much they cover (e.g.,
     whether they have all of LoC's data, up-to-date).
  
     If one were to create such a service, how best to update it,
     considering you'd be requesting *all* Linked Data URIs from each
     source? An efficient approach would be to regularly load RDF dumps for
     every major source, if available (e.g., LoC says: here's a full dump
     of all our RDF data ... and a .torrent too).
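
     As a toy sketch of the dump-loading side: a naive N-Triples indexer
     (illustrative only; it assumes well-formed triples with IRI subjects
     and predicates, and a real deployment would use a proper RDF store).

```python
def load_ntriples(lines):
    """Index a well-formed N-Triples dump as {subject: [(predicate, object), ...]}."""
    index = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Naive split: subject and predicate are IRIs without spaces;
        # the object is whatever remains once the trailing " ." is removed.
        subject, predicate, obj = line.rstrip(" .").split(" ", 2)
        index.setdefault(subject, []).append((predicate, obj))
    return index
```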
  
   What do you think?
  
   Uldis
  
  
    On 29 September 2013 12:33, Jodi Schneider jschnei...@pobox.com wrote:
  
   Any 

Re: [CODE4LIB] [CODE4LIB] HEADS UP - Government shutdown will mean *.loc.gov is going offline October 1

2013-09-30 Thread Dan Scott
I've temporarily set up mirrors at:

* http://stuff.coffeecode.net/www.loc.gov/marc/ (MARC21 docs)
* http://stuff.coffeecode.net/www.loc.gov/standards/sourcelist/
(standards documentation)

Hopefully these won't be necessary for long, or at all :/
