[CODE4LIB] Talis Open Day: Linked Data and Libraries - British Library London 21st July

2010-06-02 Thread Richard Wallis
Hi all,

Apologies for cross posting.

The latest in the series of free Linked Data focused Platform Open Days hosted 
by Talis may be of interest.

This one is specifically for anyone interested in understanding and applying Linked 
Data in the world of National, International, Cooperative, and other large 
libraries.

These Open Days are designed to introduce you to the principles, practice and 
potential of Linked Data.  Included is a short tutorial on RDF and the SPARQL 
query language, pitched at a level which will engage the technical and inform 
the non-technical attendees.

Linked Data is being adopted by many significant organisations across the web.  
data.gov.uk (http://data.gov.uk) and the BBC are just two that are working with 
Talis on applying Linked Data and Semantic Web techniques and technologies.   As 
can be seen from the provisional agenda below, this day will (in addition to 
addressing general Linked Data issues) be covering leading library specific 
initiatives in this area.

AGENDA

 *   Introduction to Linked Data
 *   Overview of the Talis Platform
 *   The BnF Pivot project – Emmanuelle Bermes, Bibliothèque nationale de France
 *   W3C Library Linked Data Incubator Group
 *   RDF/SPARQL tutorial
 *   BIBO – The Bibliographic Ontology
 *   Finding Semantic Relationships in MARC
 *   Linked Data in action

This is an ideal free day for those wanting an insight into the potential and 
the practicalities of applying Linked Data to library data.

More information and registration:  
http://blogs.talis.com/nodalities/2010/06/talis-open-day-linked-data-and-libraries.php


Richard Wallis
Technology Evangelist, Talis
Tel: +44 (0)870 400 5422 (Direct)
Tel: +44 (0)870 400 5000 (Switchboard)
Tel: +44 (0)7767 886 005 (Mobile)
Fax: +44 (0)870 400 5001

Linkedin: http://www.linkedin.com/in/richardwallis
Skype: richard.wallis1
Twitter: @rjw
IM: rjw3...@hotmail.com


Please consider the environment before printing this email.

Find out more about Talis at http://www.talis.com/
shared innovation™

Any views or personal opinions expressed within this email may not be those of 
Talis Information Ltd or its employees. The content of this email message and 
any files that may be attached are confidential, and for the usage of the 
intended recipient only. If you are not the intended recipient, then please 
return this message to the sender and delete it. Any use of this e-mail by an 
unauthorised recipient is prohibited.

Talis Information Ltd is a member of the Talis Group of companies and is 
registered in England No 3638278 with its registered office at Knights Court, 
Solihull Parkway, Birmingham Business Park, B37 7YB.


[CODE4LIB] New Publication: Twitter for Museums - Strategies and Tactics for Success

2010-06-02 Thread Graeme Farnell
This is a unique book about how museums and other cultural organisations,
large and small, can use – and are using – Twitter to successfully involve
their very diverse communities. And it's written by some of the museum
community’s most experienced and creative users of Twitter.

For all the information and to place an order, please go here:
www.museumsetc.com/?p=1501

Whether you're just starting out, or already an experienced user, Twitter
for Museums will tell you everything you ever wanted to know about Twitter.
And provide in-depth case studies of how museums worldwide, at marginal
cost, are successfully using it to:
• attract new visitors
• build their brand
• sell tickets to special events and exhibitions
• enhance PR activities
• raise funds
• boost retail sales
• reach new audiences
• pick up new ideas
• influence decision makers
• link up with professional colleagues

The topics covered include:
• Making the case for Twitter to your organisation
• Having a policy and being clear about your aims
• Protecting the museum’s image and “controlling” content
• Successful tweeting: events, tours, quizzes, prizes, fieldwork, object of
the day and lots more
• How to build your followers
• Integrating images, audio and video
• Using Twitter for research
• Using third-party applications
• Monitoring: tracking, measuring and reporting
• Using multiple Twitter accounts
• Linking Twitter to your other social media platforms

We've brought together stellar contributors from three continents, and a
highly experienced Editorial Advisory Board, comprising:
• Dana Allen-Greil, Project Manager, New Media, Smithsonian National Museum
of American History
• Laurence Hill, Development Manager, Fabrica
• Kaia Landon, Assistant Director and Curator of Collections, Mesa
Historical Museum
• Nancy Proctor, Head of New Media, Smithsonian American Art Museum
• Dan Zambonini, Technical Director, Box UK

Twitter for Museums is available to order online now:
www.museumsetc.com/?p=1501

Graeme Farnell
MuseumsEtc

PS Like all our publications, it comes with our guarantee that if, once
received, you feel it's not for you, simply return it for a full refund.


Re: [CODE4LIB] Inlining HTTP Headers in URLs

2010-06-02 Thread Jonathan Rochkind

Joe Hourcle wrote:

> Accept-Ranges is a response header, not something that the client's
> supposed to be sending.

Weird. Then can anyone explain why it's included as a request parameter
in the SRU 2.0 draft?  Section 4.9.2.


Jonathan


[CODE4LIB] SRU 2.0 / Accept-Ranges (was: Inlining HTTP Headers in URLs )

2010-06-02 Thread Joe Hourcle

On Wed, 2 Jun 2010, Jonathan Rochkind wrote:

> Joe Hourcle wrote:
>> Accept-Ranges is a response header, not something that the client's
>> supposed to be sending.
>
> Weird. Then can anyone explain why it's included as a request parameter in
> the SRU 2.0 draft?  Section 4.9.2.


They're not the only ones who think it's a client header:

http://en.wikipedia.org/wiki/List_of_HTTP_headers

(which of course shows up #1 on google for 'http headers')

It looks like someone decided to split it into two tables:


http://en.wikipedia.org/w/index.php?title=List_of_HTTP_headers&oldid=183353617

And within a week, someone decided to add Accept-Ranges where it didn't 
belong:



http://en.wikipedia.org/w/index.php?title=List_of_HTTP_headers&oldid=184742665

...

I'm guessing it's a mistake -- either the SRU authors looked at the 
Wikipedia entry, or they also misread the intent of the HTTP header in the 
RFC.
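For anyone checking this against the spec: RFC 2616 defines Accept-Ranges as a response header field (section 14.5), while the Accept-* negotiation fields and Range (section 14.35) are request headers. A minimal sketch of that distinction (the header sets below are illustrative, not exhaustive):

```python
# Illustrative (not exhaustive) classification of a few RFC 2616 headers.
# Accept-Ranges (sec. 14.5) is a *response* field: the server advertises
# that it will honour Range requests; the client asks via Range (sec. 14.35).
RESPONSE_ONLY = {"accept-ranges", "content-range", "etag", "age", "vary"}
REQUEST_ONLY = {"accept", "accept-charset", "accept-encoding",
                "accept-language", "range", "if-range"}

def may_appear_in_request(header: str) -> bool:
    """True unless RFC 2616 defines the header as response-only."""
    return header.lower() not in RESPONSE_ONLY

print(may_appear_in_request("Accept-Ranges"))   # False -- the draft's mistake
print(may_appear_in_request("Range"))           # True -- what a client sends
```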


Do we have anyone affiliated with the project on this list who can make a 
correction before it leaves draft?


-Joe


Re: [CODE4LIB] SRU 2.0 / Accept-Ranges (was: Inlining HTTP Headers in URLs )

2010-06-02 Thread LeVan,Ralph
I've forwarded the issue to them.  I don't remember any of the
conversation about this feature.

Ralph

> -----Original Message-----
> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
> Joe Hourcle
> Sent: Wednesday, June 02, 2010 11:05 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] SRU 2.0 / Accept-Ranges (was: Inlining HTTP Headers in URLs)
>
> [quoted message snipped]


Re: [CODE4LIB] SRU 2.0 / Accept-Ranges

2010-06-02 Thread Jonathan Rochkind
Really, I blame the HTTP spec for having a header that begins "Accept-" 
yet is a response and not a request header. That's weird. Is that really 
true?  But I still don't really understand what its use cases are, exactly.
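For what it's worth, the use case the RFC seems to intend is partial or resumable transfer: the server advertises "Accept-Ranges: bytes" in a response, and the client may then issue a Range request for just the bytes it still needs. A rough sketch of that handshake (plain dicts stand in for a real HTTP exchange, and the helper name is made up):

```python
def resume_request(response_headers, bytes_already_fetched):
    """If the server advertised byte-range support via Accept-Ranges,
    build the Range request header needed to resume a partial download;
    return None if we must re-fetch from the start."""
    accepts = response_headers.get("Accept-Ranges", "none").lower()
    if accepts == "bytes":
        return {"Range": "bytes=%d-" % bytes_already_fetched}
    return None  # server said "none" (or nothing): no partial requests

# Server advertised support, so we can ask for just the remainder:
print(resume_request({"Accept-Ranges": "bytes"}, 500))  # {'Range': 'bytes=500-'}
print(resume_request({}, 500))                          # None
```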


LeVan,Ralph wrote:
> I've forwarded the issue to them.  I don't remember any of the
> conversation about this feature.
>
> Ralph
>
> [earlier quoted messages snipped]


Re: [CODE4LIB] Inlining HTTP Headers in URLs

2010-06-02 Thread Jonathan Rochkind

Erik Hetzner wrote:

> Accept-Encoding is a little strange. It is used for gzip or deflate
> compression, largely. I cannot imagine needing a link to a version
> that is gzipped.
>
> It is also hard to imagine why a link would want to specify the
> charset to be used, possibly overriding a client’s preference. If my
> browser says it can only support UTF-8 or latin-1, it is probably
> telling the truth.
Perhaps when the client/user-agent is not actually a web browser that 
is simply going to display the document to the user, but is some kind of 
other software. Imagine perhaps archiving software that, by policy, only 
will take UTF-8 encoded documents, and you need to supply a URL which is 
guaranteed to deliver such a thing.


Sure, the hypothetical archiving software could/should(?)  just send an 
actual HTTP header to make sure it gets a UTF-8 charset document.  But 
maybe sometimes it makes sense to provide an identifier that actually 
identifies/points to the UTF-8 charset version -- and that in the actual 
in-practice real world is more guaranteed to return that UTF-8 charset 
version from an HTTP request, without relying on content negotiation, 
which is often mis-implemented. 
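One way to picture that "identifier for the specific representation": bake the desired charset into the URL itself, rather than sending Accept-Charset and trusting the server's conneg. The parameter names below (charset=, lang=) are purely hypothetical, not anything SRU or HTTP defines:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def pin_representation(url, **params):
    """Append hypothetical representation-pinning parameters (e.g.
    charset=, lang=) to a URL, as an alternative to Accept-* request
    headers and server-side content negotiation."""
    parts = urlsplit(url)
    query = parse_qsl(parts.query) + sorted(params.items())
    return urlunsplit(parts._replace(query=urlencode(query)))

print(pin_representation("http://example.org/doc", charset="UTF-8"))
# http://example.org/doc?charset=UTF-8
```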

We could probably come up with a similar reasonable-if-edge-case for 
encoding.


So I'm not thinking so much of overriding the conneg -- I'm thinking 
of your initial useful framework: one URI identifies a more abstract 
'document', the other identifies a specific representation. And 
sometimes it's probably useful to identify a specific representation in 
a specific charset, or, more of a stretch, encoding. No?


I notice you didn't mention 'language', I assume we agree that one is 
even less of a stretch, and has more clear use cases for including in a 
URL, like content-type.


Jonathan


Re: [CODE4LIB] SRU 2.0 / Accept-Ranges (was: Inlining HTTP Headers in URLs )

2010-06-02 Thread Ray Denenberg, Library of Congress
Joe Hourcle wrote:
> Do we have anyone affiliated with the project on this list who can make a
> correction before it leaves draft?

Could you submit this suggestion formally?  See:

http://www.oasis-open.org/committees/comments/index.php?wg_abbrev=search-ws

(The SRU and CQL development gets discussed on various lists, which is fine,
but when the discussion leads to suggested changes, it is best if any such
proposals can be formally submitted to OASIS via the above.  Otherwise OASIS
gets angry.) 


--Ray


Re: [CODE4LIB] Inlining HTTP Headers in URLs

2010-06-02 Thread Erik Hetzner
At Wed, 2 Jun 2010 15:23:05 -0400,
Jonathan Rochkind wrote:
> Erik Hetzner wrote:
>> Accept-Encoding is a little strange. It is used for gzip or deflate
>> compression, largely. I cannot imagine needing a link to a version
>> that is gzipped.
>>
>> It is also hard to imagine why a link would want to specify the
>> charset to be used, possibly overriding a client’s preference. If my
>> browser says it can only support UTF-8 or latin-1, it is probably
>> telling the truth.
>
> Perhaps when the client/user-agent is not actually a web browser that
> is simply going to display the document to the user, but is some kind of
> other software. Imagine perhaps archiving software that, by policy, only
> will take UTF-8 encoded documents, and you need to supply a URL which is
> guaranteed to deliver such a thing.
>
> Sure, the hypothetical archiving software could/should(?) just send an
> actual HTTP header to make sure it gets a UTF-8 charset document.  But
> maybe sometimes it makes sense to provide an identifier that actually
> identifies/points to the UTF-8 charset version -- and that in the actual
> in-practice real world is more guaranteed to return that UTF-8 charset
> version from an HTTP request, without relying on content negotiation,
> which is often mis-implemented.
>
> We could probably come up with a similar reasonable-if-edge-case for
> encoding.
>
> So I'm not thinking so much of overriding the conneg -- I'm thinking
> of your initial useful framework: one URI identifies a more abstract
> 'document', the other identifies a specific representation. And
> sometimes it's probably useful to identify a specific representation in
> a specific charset, or, more of a stretch, encoding. No?

I’m certainly not thinking it should never be done. Personally I would
leave it out of SRU without a serious use case, but that is obviously
not my decision. Still, in my capacity as nobody whatsoever, I would
advise against it. ;)
 
> I notice you didn't mention 'language', I assume we agree that one is
> even less of a stretch, and has more clear use cases for including in a
> URL, like content-type.

Definitely.

best, Erik
Sent from my free software system http://fsf.org/.

