Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-13 Thread Kate Deibel
I 100% agree. Just for clarity, by "requester" in my previous email, I meant a
person requesting accommodations for the video, not the people originally
pushing for the digital collection.

The fact is that accessibility remediation is a translation, and different
types of remediation can result in information loss just like other
translations. Captioning may make the spoken words accessible but may not
capture the intonations and other nuances of the dialogue. Transcribing a
handwritten letter into electronic text may skip over edit marks and other
aspects of the handwriting that a researcher may be interested in. Heck,
transcribing handwriting is rarely straightforward and can be hotly debated.

This is why I view special collections and what libraries call archives as
being in a different vein from other aspects of accessibility remediation.
Making a journal article PDF accessible is mostly about proper markup and
reading order (although exceptions and complexities do exist). The main goal is
for anyone to be able to read it. But for someone diving into a special
collection or archive, the inquiry is different. I've seen historians go on and
on about edit marks in letters and marginal notes in books. Each scholar working
with such materials has nuanced inquiries with elements they wish to focus on.
To me, making the content accessible to them is also about understanding what
they want to access. Most of the time, we think of accessibility as addressing
the intersection of (dis)ability issues with the content format. However,
sometimes we need to add in the further complexity of an individual's actual
goals. Personalized accommodations are likely needed.

This is the argument I give to our special collections/archives group: do what
is feasible now with current technology and then have a means for providing
one-on-one accommodation services.

Katherine Deibel | PhD
Inclusion & Accessibility Librarian
Syracuse University Libraries 
T 315.443.7178
kndei...@syr.edu
222 Waverly Ave., Syracuse, NY 13244
Syracuse University


-Original Message-
From: Code for Libraries  On Behalf Of Tim McGeary
Sent: Wednesday, February 13, 2019 3:45 PM
To: CODE4LIB@LISTS.CLIR.ORG
Subject: Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

This is why defining the policy of access is critical. If these digitized
collections are intended to be published for the entire public, the needs of
the (original) requester are not sufficient; the federal mandates require full
accessibility to the best of your ability without undue burden.

If you aren’t making these available for the entire public, and your policies 
are well documented about that restriction and the request process, then you 
have more flexibility to balance the burden of making a collection accessible 
based on the needs of the specific user.

Tim

Tim McGeary
Associate University Librarian for Digital Strategies and Technology
Duke University

On Wed, Feb 13, 2019 at 3:37 PM Kate Deibel  wrote:

> While this is true in the general case, we're again talking about 
> Special Collections and the needs of the requester. Audio descriptions 
> are extremely difficult to do as the ideal is to never interrupt other 
> relevant sounds in the media, especially dialogue. That's a unique 
> challenge of being precise and fast. My recommendation would be to make
> audio descriptions available upon request, just as with higher-quality
> captioning.
> There is currently no means of automating audio descriptions even of 
> low quality. AI tools just aren't there yet, and frankly, I'm a little 
> scared of the idea of a world where AI can view a random scene and 
> describe what is happening.
>
> Katherine Deibel | PhD
> Inclusion & Accessibility Librarian
> Syracuse University Libraries
> T 315.443.7178
> kndei...@syr.edu
> 222 Waverly Ave., Syracuse, NY 13244
> Syracuse University
>


Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-13 Thread Tim McGeary
This is why defining the policy of access is critical. If these digitized
collections are intended to be published for the entire public, the needs
of the (original) requester are not sufficient; the federal mandates require
full accessibility to the best of your ability without undue burden.

If you aren’t making these available for the entire public, and your
policies are well documented about that restriction and the request
process, then you have more flexibility to balance the burden of making a
collection accessible based on the needs of the specific user.

Tim

Tim McGeary
Associate University Librarian for Digital Strategies and Technology
Duke University

On Wed, Feb 13, 2019 at 3:37 PM Kate Deibel  wrote:

> While this is true in the general case, we're again talking about Special
> Collections and the needs of the requester. Audio descriptions are
> extremely difficult to do as the ideal is to never interrupt other relevant
> sounds in the media, especially dialogue. That's a unique challenge of
> being precise and fast. My recommendation would be to make audio
> descriptions available upon request, just as with higher-quality captioning.
> There is currently no means of automating audio descriptions even of low
> quality. AI tools just aren't there yet, and frankly, I'm a little scared
> of the idea of a world where AI can view a random scene and describe what
> is happening.
>
> Katherine Deibel | PhD
> Inclusion & Accessibility Librarian
> Syracuse University Libraries
> T 315.443.7178
> kndei...@syr.edu
> 222 Waverly Ave., Syracuse, NY 13244
> Syracuse University
>


Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-13 Thread Kate Deibel
While this is true in the general case, we're again talking about Special 
Collections and the needs of the requester. Audio descriptions are extremely 
difficult to do as the ideal is to never interrupt other relevant sounds in the 
media, especially dialogue. That's a unique challenge of being precise and 
fast. My recommendation would be to make audio descriptions available upon
request, just as with higher-quality captioning. There is currently no means of
automating audio descriptions even of low quality. AI tools just aren't there 
yet, and frankly, I'm a little scared of the idea of a world where AI can view 
a random scene and describe what is happening. 

Katherine Deibel | PhD
Inclusion & Accessibility Librarian
Syracuse University Libraries 
T 315.443.7178
kndei...@syr.edu
222 Waverly Ave., Syracuse, NY 13244
Syracuse University


Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-13 Thread Kate Deibel
Yeah, it's the domain-specific terms that really make or break these systems,
especially in academic settings. These systems might suffice for business
domains, but I've seen transcription quality drop quite fast for a STEM class or
any non-Western humanities course. Ideally, there would be a feedback loop to
these systems, but I have yet to see one where you can send in corrections.
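
For illustration, one partial stand-in for that feedback loop is a hand-curated
custom vocabulary: some speech-to-text engines let you register a list of domain
terms the recognizer keeps missing and apply it to later jobs. A minimal sketch,
using Amazon Transcribe via boto3 purely as an example engine (neither VerbIt's
nor Konch's API is described here; bucket, job, vocabulary names, and terms are
all made up):

    import boto3

    transcribe = boto3.client("transcribe")

    # Register a custom vocabulary of domain terms the engine keeps getting wrong.
    # (The vocabulary takes a little while to process before a job can use it.)
    transcribe.create_vocabulary(
        VocabularyName="stem-course-terms",
        LanguageCode="en-US",
        Phrases=["eigenvalue", "CRISPR", "Nagarjuna", "heteroskedasticity"],
    )

    # Reference that vocabulary when transcribing the next lecture recording.
    transcribe.start_transcription_job(
        TranscriptionJobName="lecture-2019-02-12",
        Media={"MediaFileUri": "s3://example-bucket/lecture.mp4"},
        MediaFormat="mp4",
        LanguageCode="en-US",
        Settings={"VocabularyName": "stem-course-terms"},
    )

It isn't a true correction loop (you still have to curate the term list
yourself), but it at least gives you somewhere to send the fixes.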

Katherine Deibel | PhD
Inclusion & Accessibility Librarian
Syracuse University Libraries 
T 315.443.7178
kndei...@syr.edu
222 Waverly Ave., Syracuse, NY 13244
Syracuse University


-Original Message-
From: Code for Libraries  On Behalf Of Carol Kassel
Sent: Tuesday, February 12, 2019 4:42 PM
To: CODE4LIB@LISTS.CLIR.ORG
Subject: Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

Hi everyone,

Thank you so much for your replies! I'll reply to each of you individually as 
well.

In answer to your question about which auto-captioning solutions we're looking
at, there are two main solutions we have our eye on. One is VerbIt and the other
is Konch. Both appear to offer reasonable accuracy in the languages we need,
though we are still evaluating. Still, as with any of these solutions, they miss
some domain-specific vocabulary as well as anything that's mumbled or otherwise
hard to understand. Also, we need to figure out our workflow for generating
captions/transcripts, getting them into our infrastructure, and allowing for
hand corrections, as well as the workflow for revisions resulting from those
corrections. The devil is in the details!

Best wishes,

Carol


>
>
> -Original Message-
> From: Code for Libraries  On Behalf Of 
> Carol Kassel
> Sent: Monday, February 11, 2019 11:31 AM
> To: CODE4LIB@LISTS.CLIR.ORG
> Subject: [CODE4LIB] A/V and accessibility
>
> Hi,
>
> We're working on a roadmap for making A/V content from Special 
> Collections accessible. For those of you who have been through this 
> process, you know that one of the big-ticket items is captions and 
> transcripts. In our exploration of options, we've found a couple of 
> pretty good auto-captioning solutions. Their accuracy is about as good 
> as what you'd get from performing OCR on scanned book pages, which 
> libraries do all the time. One possibility is to perform 
> auto-captioning on all items and then provide hand-captioning upon 
> request for the specific items where a patron needs better captions.
>
> This idea will be better supported if we know what our peer 
> institutions are doing... so what are you doing? Thanks to those to 
> whom I've reached out personally; your information has helped 
> tremendously. Now I'd like to find out from others how they've handled this 
> issue.
>
> Thank you,
>
> Carol
>
> --
> Carol Kassel
> Senior Manager, Digital Library Infrastructure
> NYU Digital Library Technology Services
> c...@nyu.edu
> (212) 992-9246
> dlib.nyu.edu
>
>
>

--
Carol Kassel
Senior Manager, Digital Library Infrastructure
NYU Digital Library Technology Services
c...@nyu.edu
(212) 992-9246
dlib.nyu.edu


Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-12 Thread Carol Kassel
Hi everyone,

Thank you so much for your replies! I'll reply to each of you individually
as well.

In answer to your question about which auto-captioning solutions we're
looking at, there are two main solutions we have our eye on. One is VerbIt
and the other is Konch. Both appear to offer reasonable accuracy in the
languages we need, though we are still evaluating. Still, as with any of
these solutions, they miss some domain-specific vocabulary as well as
anything that's mumbled or otherwise hard to understand. Also, we need to
figure out our workflow for generating captions/transcripts, getting them
into our infrastructure, and allowing for hand corrections, as well as the
workflow for revisions resulting from those corrections. The devil is in the
details!
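
One lightweight way to handle the hand-correction/revision piece is to keep the
machine output and the corrected file side by side and record a diff whenever a
correction comes in, so each revision can be reviewed before re-ingest. A
minimal sketch, using only the Python standard library (paths and file names are
hypothetical):

    import difflib
    from pathlib import Path

    # Machine-generated captions vs. a hand-corrected copy of the same item.
    auto = Path("captions/auto/item-001.vtt").read_text(encoding="utf-8").splitlines()
    fixed = Path("captions/corrected/item-001.vtt").read_text(encoding="utf-8").splitlines()

    # Write a unified diff so the revision can be reviewed (or logged) before
    # the corrected file replaces the auto-generated one in the repository.
    diff = difflib.unified_diff(
        auto, fixed,
        fromfile="auto/item-001.vtt",
        tofile="corrected/item-001.vtt",
        lineterm="",
    )
    Path("captions/review/item-001.diff").write_text("\n".join(diff), encoding="utf-8")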

Best wishes,

Carol


>
>
> -Original Message-
> From: Code for Libraries  On Behalf Of Carol
> Kassel
> Sent: Monday, February 11, 2019 11:31 AM
> To: CODE4LIB@LISTS.CLIR.ORG
> Subject: [CODE4LIB] A/V and accessibility
>
> Hi,
>
> We're working on a roadmap for making A/V content from Special
> Collections accessible. For those of you who have been through this
> process, you know that one of the big-ticket items is captions and
> transcripts. In our exploration of options, we've found a couple of pretty
> good auto-captioning solutions. Their accuracy is about as good as what
> you'd get from performing OCR on scanned book pages, which libraries do all
> the time. One possibility is to perform auto-captioning on all items and
> then provide hand-captioning upon request for the specific items where a
> patron needs better captions.
>
> This idea will be better supported if we know what our peer
> institutions are doing... so what are you doing? Thanks to those to whom
> I've reached out personally; your information has helped tremendously. Now
> I'd like to find out from others how they've handled this issue.
>
> Thank you,
>
> Carol
>
> --
> Carol Kassel
> Senior Manager, Digital Library Infrastructure
> NYU Digital Library Technology Services
> c...@nyu.edu
> (212) 992-9246
> dlib.nyu.edu
>
>
>

-- 
Carol Kassel
Senior Manager, Digital Library Infrastructure
NYU Digital Library Technology Services
c...@nyu.edu
(212) 992-9246
dlib.nyu.edu


Re: [CODE4LIB] [EXT] Re: [CODE4LIB] A/V and accessibility

2019-02-12 Thread Hicks, William
UNT Digital Libraries and the Portal to Texas History are starting to test the 
waters here too with a ton of content to catch up on. Early days.  

Vendors: We've tested 3Play and then rev.com. At the latest Accessing Higher
Ground (AHG) conference, the latter was getting talked up a lot by ODA office
folks as their current preferred vendor, given its turnaround speed and cost.

Automation: I've played with https://github.com/agermanidis/autosub with
decent-ish output given a few test cases. I know there are a few Amazon-related
demos out there too. No formal workflows on my end yet, but I think your
outlined approach is generally what my preferred option would look like too.
Hope to hear more from you/others on what they are trying.
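
For anyone who wants to kick the tires on the same tool, the basic invocation
looks roughly like this (a sketch only; flags can differ between autosub
versions, so check autosub -h first, and the file name here is made up):

    pip install autosub
    autosub -S en -D en -o interview.srt interview.mp4

The default output is an SRT file, which can then be converted to WebVTT for
whatever player you're using.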

Digression: I note a handful of folks I talked to at AHG didn't think OCRing
text in image content was good enough for real compliance when they saw the
gibberish it often spits out, which would lead me to believe automated efforts
for A/V would leave us open to the same sorts of complaints (but we do what we
can, right?). Also, captions/transcriptions are only going to get us halfway to
WCAG AA given the need for audio descriptions. Maybe text-to-speech here? 3Play
has a plugin along those lines.

Other issues on my plate: looking at cleanup and audio-description script
authoring (probably will use WGBH Cadet); other outliers like doing webvtt
chapters; what webvtts should look like for music where you want to give
substantial info (e.g., movements in symphonies, describing affect in a French
aria, or audio-describing a performance with something better than "[jazz music
playing]"); and, tangentially to your original question, what it looks like to
hire/contract an ASL signer to make derivative files to meet that need if/when
it comes up.
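
For the chapters piece specifically, a chapters-track WebVTT file can be as
plain as the sketch below (timings and titles are invented), loaded with a
<track kind="chapters"> element in the player; the same cue structure can carry
richer descriptive text where "[jazz music playing]" isn't enough:

    WEBVTT

    1
    00:00:00.000 --> 00:09:35.000
    I. Allegro con brio

    2
    00:09:35.000 --> 00:20:10.000
    II. Andante con moto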

As to storage, our webvtts are going into a local gitlab repo, and then we have
a few local scripts to push them onto the public DL filesystem. I haphazardly
dream of a future scenario where the DL public interface would provide links
from automated transcripts to the git repo for some sort of crowdsourced cleanup
effort. Side note: ODA office folks looked at me with a lot of puzzlement when I
asked how they were archiving/storing captioned media!
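
The push step doesn't have to be fancy; a local script along these lines covers
it (a sketch only, not our actual script, and all paths are hypothetical):

    import shutil
    from pathlib import Path

    # Copy caption files from the git working copy to the directory the
    # public DL interface serves them from.
    src = Path("~/captions-repo/vtt").expanduser()
    dest = Path("/srv/dl/public/captions")
    for vtt in src.rglob("*.vtt"):
        target = dest / vtt.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(vtt, target)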

For now at least, non-captioned A/V items have links in their descriptive
records to make requests, which we'll typically honor ASAP with a
vendor-supplied file: https://texashistory.unt.edu/ark:/67531/metadc700196/ (see
the sidebar for the request link). For now this just populates a simple webform
with some boilerplate.

Interested if you can share more of what you are up to.

Cheers,

William Hicks
 
Digital Libraries: User Interfaces
University of North Texas
1155 Union Circle #305190
Denton, TX 76203-5017
 
email: william.hi...@unt.edu  | phone: 940.891.6703 | web: 
http://www.library.unt.edu
Willis Library, Room 321
 
 



On 2/11/19, 4:02 PM, "Code for Libraries on behalf of Goben, Abigail H" 
 wrote:

I can't speak to captioning, but I use temi.com for my transcription for the
class that I teach. It's $0.10 a minute, and it's machine transcription. Overall
it does a really decent job, and I can't argue with the price. The transcription
takes about half the runtime of the video; I do light editing and post.

-- 
Abigail H. Goben, MLS
Associate Professor
Information Services and Liaison Librarian

Library of the Health Sciences
University of Illinois at Chicago
1750 W. Polk (MC 763)
Chicago, IL 60612
ago...@uic.edu 


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTS.CLIR.ORG] On Behalf Of Kate 
Deibel
Sent: Monday, February 11, 2019 1:37 PM
To: CODE4LIB@LISTS.CLIR.ORG
Subject: Re: [CODE4LIB] A/V and accessibility

I'd love to hear which auto-captioning options you've found to be tolerable.

What I can say is that this is the informal policy I've been promoting for
accessibility in our special collections: in general, any accommodation
requests in special collections will likely be part of a very nuanced, focused
research agenda. Thus, any remediation will likely have to be specific not only
to the individual's disability but also to the nature of their research. In the
case of A/V, a rough transcription may be enough if they are focusing more on
the visual side of it. For others, though, a more thorough transcription may be
required.

All in all, your approach sounds wise.

Katherine Deibel | PhD
Inclusion & Accessibility Librarian
Syracuse University Libraries 
T 315.443.7178
kndei...@syr.edu
222 Waverly Ave., Syracuse, NY 13244
Syracuse University


-Original Message-
From: Code for Libraries  On Behalf Of Carol Kassel
Sent: Monday, February 11, 2019 11:31 AM
To: CODE4LIB@LISTS.CLIR.ORG
Subject: [CODE4LIB] A/V and accessibility

Hi,

We're working on a roadmap for making