Re: [CODE4LIB] Morae setup advice for screen recording

2016-04-08 Thread Ronald Houk
Hi Jennifer,

I don't know if it would fulfill all of your requirements, but it might be
worth taking a look at the open source project "OBS -- Open Broadcaster
Software" at https://obsproject.com/index

It can handle multiple inputs like webcams + screen recording.

On Thu, Apr 7, 2016 at 7:32 PM, Jennifer DeJonghe <
jennifer.dejon...@metrostate.edu> wrote:

> I'm at a small public university and we do regular pop up usability
> testing, but want to purchase Morae and start doing some screen capturing.
> Since the license is rather expensive, I'm trying to do as much "sharing"
> among staff as I can within the bounds of what TechSmith allows and what is
> practical. I talked to a TechSmith person today, and they allow you to have
> multiple people using it on a shared computer. Your license gets you one
> copy of the Morae Manager, and unlimited copies of Recorder and Observer.
>
> My question for those of you who have Morae... Do you think it would be
> practical or advisable to install the single copy of Manager on a laptop,
> so that the laptop could be shared between multiple departments in various
> buildings? You don't use Manager during the actual testing, it is more for
> set up and analysis, so they said most people install that on their "good"
> computer. But installing it on a laptop might mean we could get more use
> from it. Is there anything I should think about before proceeding with
> this? We don't have a usability lab, so would probably purchase a dedicated
> shared laptop(s) for this. (I know I could use cheaper software, but I just
> got back from ER&L and was impressed with what I saw people doing with the
> TechSmith product.)
>
> Jennifer
>
> Jennifer DeJonghe
> Librarian and Professor
> Metropolitan State University
> St. Paul, MN
>



-- 
Ronald Houk ☕
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] LibX and new Firefox rules

2015-12-16 Thread Ronald Houk
Hello,

In the meantime, if you go to about:config, search for
xpinstall.signatures.required, and toggle it to false, I think it should
let you use the plugin.
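
If you need to apply that to more than one machine, something like this
quick Python sketch could append the pref to a profile's user.js, which
Firefox reads at startup. The profile path is just a placeholder -- point
it at the real profile folder on each computer.

# Rough sketch: persist the pref in a Firefox profile's user.js.
# The profile path is a placeholder; adjust it per machine/user.
import os

profile_dir = "/home/patron/.mozilla/firefox/example.default"
pref_line = 'user_pref("xpinstall.signatures.required", false);\n'

with open(os.path.join(profile_dir, "user.js"), "a") as user_js:
    user_js.write(pref_line)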

On Wed, Dec 16, 2015 at 1:31 PM, Annette Bailey  wrote:

> Hi Bill,
>
> We are aware of the Firefox issue. Godmar has been in touch with Firefox to
> get their blessing to allow LibX to be a verified extension.
>
> Annette
>
> On Wed, Dec 16, 2015 at 2:29 PM, William Denton  wrote:
>
> > Firefox has disallowed the LibX add-on because it's unsigned, and Firefox
> > has new rules about verifying extensions.  We upgraded our LibX build to
> a
> > fresh version, but Firefox still doesn't like it.  Godmar, Annette,
> > anyone---is there a way I can get it working?
> >
> > Bill
> > --
> > William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/
>
>
>
>
> --
>
> *Annette Bailey*
> *Assistant Director for Electronic Resources and Emerging Technology Services*
> Virginia Tech University Libraries
> Blacksburg, VA
> PH: (540) 231-9266
>



-- 
Ronald Houk ☕
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] pdf and web publishing question

2015-04-29 Thread Ronald Houk
You might take a look at a couple of projects.

First, to split your PDF up, you could use the Python-based stapler program.
-> https://github.com/hellerbarde/stapler/tree/master

And to convert the PDF to HTML, you could take a look at pdf2htmlEX ->
https://github.com/coolwanglu/pdf2htmlEX
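
If it helps, here's a rough Python sketch of how the two could be wired
together -- it calls pdf2htmlEX once per section via subprocess. The
section names and page ranges are made-up placeholders, and the
--first-page/--last-page options are my assumption; check
pdf2htmlEX --help for the exact flags in your build (stapler could produce
the matching PDF-only splits the same way).

import subprocess

# Hypothetical table of contents: (section name, first page, last page).
sections = [
    ("introduction", 1, 4),
    ("chapter-1", 5, 12),
    ("chapter-2", 13, 20),
]

for name, first, last in sections:
    # Render just this page range as a standalone HTML file.
    # --first-page/--last-page are assumptions; some builds use -f/-l.
    subprocess.check_call([
        "pdf2htmlEX",
        "--first-page", str(first),
        "--last-page", str(last),
        "document.pdf",
        name + ".html",
    ])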

On Wed, Apr 29, 2015 at 9:04 AM, Sergio Letuche 
wrote:

> Dear all,
>
> we have a PDF, taken from a print-ready PDF, full of tables. The
> text is split into two columns. How would you suggest we upload this PDF to
> the web? We would like to keep the structure, and split each section taken
> from the table of contents as a page, but also keep the format, and if
> possible, serve the content both in an html view, and in a pdf view, based
> on the preference of the user.
>
> Looking forward to your input.
>
> The document is made with InDesign CS6, and I do not know which format I
> could transform it into.
>
> Best
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] software to limit computer login time

2015-03-18 Thread Ronald Houk
Bugme sounds interesting.  I'll have to take a look at it.  A couple of
features that come to mind would be the ability for staff members to
extend or end sessions, and the ability to have a reservation system in
place for when all systems are full but patrons are waiting.

I'd also need to figure out some kind of print accounting system, which I
think should be possible with CUPS; I just don't know how user- (i.e.,
staff-) friendly it would be.
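
Something like this rough Python sketch is what I have in mind for the
accounting piece -- tallying pages per user from the CUPS page_log. It
assumes the default PageLogFormat (printer, user, job-id, bracketed
timestamp, page number, copies), so the field positions would need
adjusting if cupsd.conf customizes the format.

# Rough per-user page tally from the CUPS page_log.
from collections import Counter

totals = Counter()
with open("/var/log/cups/page_log") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 7:
            continue
        user = parts[1]
        # The bracketed timestamp spans parts[3] and parts[4], so the
        # page number is parts[5] and the copy count is parts[6].
        try:
            copies = int(parts[6])
        except ValueError:
            continue
        # Each page_log line represents one printed page (times copies).
        totals[user] += copies

for user, pages in totals.most_common():
    print(user, pages)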

Is anyone thinking about using btrfs snapshots for public kiosks?

On Wed, Mar 18, 2015 at 3:15 PM, Tom Connolly 
wrote:

> I would think you could tweak your logon software to make a session expire
> x.minutes.from.now
>
>
> On 03/18/2015 02:54 PM, Laura Krier wrote:
>
>> Hey folks,
>> I'm starting to investigate software that we could install on a few of our
>> public workstations that would limit the length of time a user could be
>> logged in. This would be done to establish a few computers as "print only"
>> or "brief use only" computers. I've seen this in other libraries, but I'm
>> having a hard time searching: all I'm finding are tools for parental
>> control of home computers.
>>
>> Does anyone have any software recommendations for me?
>>
>> Laura
>>
>


-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] software to limit computer login time

2015-03-18 Thread Ronald Houk
I would love to find something that would work with Linux so I could put it
on our workstations again.  We used to have 6 workstations with Ubuntu on
them but when we switched to using Envisionware this became impossible.
Any open source projects would be even better!

On Wed, Mar 18, 2015 at 2:31 PM, Geng Hua Lin  wrote:

> I used Veralab before and it allows you to set session time limit. It'll
> lock the screen when time is up. Give the free trial a test run.
>
> http://veralab.com/
>
>
> Geng H. Lin
> Library Systems Manager
> Lloyd Sealy Library
> 899 Tenth Avenue, Rm. 115T
> New York City, NY  10019
> Tel.212.237.8248
> Fax.   212.237.8221
> Email. g...@jjay.cuny.edu
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Laura Krier
> Sent: Wednesday, March 18, 2015 2:55 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] software to limit computer login time
>
> Hey folks,
> I'm starting to investigate software that we could install on a few of our
> public workstations that would limit the length of time a user could be
> logged in. This would be done to establish a few computers as "print only"
> or "brief use only" computers. I've seen this in other libraries, but I'm
> having a hard time searching: all I'm finding are tools for parental
> control of home computers.
>
> Does anyone have any software recommendations for me?
>
> Laura
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] Python & PyMARC Code Club

2015-02-24 Thread Ronald Houk
Hello,

I am also interested in learning to code with Python and would be
interested in this.
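
In the meantime, here is a tiny sketch of the kind of thing PyMARC makes
easy -- it just prints the title of every record in a binary MARC file.
The file name is a placeholder, and I'm going from memory on the API, so
treat it as a starting point rather than gospel.

# Minimal PyMARC sketch: print the title of each record in a MARC file.
from pymarc import MARCReader

with open("records.mrc", "rb") as fh:   # placeholder file name
    for record in MARCReader(fh):
        # record.title() pulls the 245 title statement.
        print(record.title())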

On Mon, Feb 23, 2015 at 9:27 PM, Sean Chen  wrote:

> For those who attended the conference in Portland there was a talk by
> Coral Sheldon-Hess where she introduced the idea of a Code Club. If you
> didn't see it check out the talk's slides and description at:
> http://code4lib.org/conference/2015/sheldon-hess. But, for the tl;dr
> version here it is: read code with other like-minded individuals so you can
> become a better programmer.
>
> Which in turn inspired some of us who attended the conference to look for
> other catalogers/hackers/programmers interested in Python and MARC records.
> We'd like to do a club centered on the PyMARC library. If that piques your
> interest please send an email to Richard Tan 
> and Sean Chen .  We are happy to get something
> started but we’d like to hear from others about this endeavor.
>
>
> Best regards,
>
> Sean
>
> --
> Sean Chen 
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] Checksums for objects and not embedded metadata

2015-01-28 Thread Ronald Houk
Also, I just stumbled across this on Stack Overflow:

http://stackoverflow.com/questions/12115824/compute-the-hash-of-only-the-core-image-data-of-a-tiff

On Wed, Jan 28, 2015 at 10:32 AM, Ronald Houk <
rh...@ottumwapubliclibrary.org> wrote:

> Hello,
>
> I like Danielle's idea.  I wonder if it wouldn't be a good idea to
> decouple the metadata from the data permanently.  Exiftool allows you to
> export the metadata in lots of different formats like JSON.  You could
> export the metadata into JSON, run the checksums and then store the photo
> and the JSON file in a single tar-ball. From there you could use a JSON
> editor to modify/add metadata.
>
>  It would be simple to reintroduce the metadata into the file when needed.
>
> On Mon, Jan 26, 2015 at 10:27 AM, danielle plumer 
> wrote:
>
>> Kyle,
>>
>> It's a bit of a hack, but you could write a script to delete all the
>> metadata from images with ExifTool and then run checksums on the resulting
>> image (see
>> http://u88.n24.queensu.ca/exiftool/forum/index.php?topic=4902.0).
>> exiv2 might also work. I don't think you'd want to do that every time you
>> audited the files, though; generating new checksums is a faster approach.
>>
>> I haven't tried this, but I know that there's a program called ssdeep
>> developed for the digital forensics community that can do piecewise
>> hashing
>> -- it hashes chunks of content and then compares the hashes for the
>> different chunks to find matches, in theory. It might be able to match
>> files with embedded metadata vs. files without; the use cases described on
>> the forensics wiki is finding altered (truncated) files, or reuse of
>> source
>> code.  http://www.forensicswiki.org/wiki/Ssdeep
>>
>> Danielle Cunniff Plumer
>>
>> On Sun, Jan 25, 2015 at 9:44 AM, Kyle Banerjee 
>> wrote:
>>
>> > On Sat, Jan 24, 2015 at 11:07 AM, Rosalyn Metz 
>> > wrote:
>> >
>> > >
>> > >- How is your content packaged?
>> > >- Are you talking about the SIPs or the AIPs or both?
>> > >- Is your content in an instance of Fedora, a unix file structure,
>> or
>> > >something else?
>> > >- Are you generating checksums on the whole package, parts of it,
>> > both?
>> > >
>> >
>> > The quick answer to this is that this is a low tech operation. We're
>> > currently on regular filesystems where we are limited to feeding md5
>> > checksums into a list. I'm looking for a low tech way that makes it
>> easier
>> > to keep track of resources across a variety of platforms in a
>> decentralized
> >> environment and which will easily adapt to future technology
>> transitions.
>> > For example, we have a bunch of stuff in Bepress and Omeka. Neither of
>> > those is good for preservation, so authoritative files live elsewhere
>> as do
>> > a huge number of resources that aren't in these platforms. Filenames are
>> > terrible identifiers and things get moved around even if people don't
>> mess
>> > with the files.
>> >
>> > We also are trying to come up with something that deals with different
>> > kinds of datasets (we're focusing on bioimaging at the moment) and fits
>> in
>> > the workflow of campus units, each of which needs to manage tens of
>> > thousands of files with very little metadata on regular filesystems.
>> Some
>> > of the resources are enormous in terms of size or number of members.
>> >
>> > Simply embedding an identifier in the file is a really easy way to tell
>> > which files have metadata and which metadata is there. In the case at
>> hand,
>> > I could just do that and generate new checksums. But I think the generic
>> > problem of making better use of embedded metadata is an interesting one
>> as
>> > it can make objects more usable and understandable once they're removed.
>> > For example, just this past Friday I received a request to use an image
>> > someone downloaded for a book. Unfortunately, he just emailed me a copy
>> of
>> > the image, described what he wanted to do, and asked for permission but
>> he
>> > couldn't replicate how he found it. An identifier would have been handy
>> as
>> > would have been embedded rights info as this is not the same for all of
>> our
>> > images. The reason we're using DOI's is that they work well for anything
>> > and can easil

Re: [CODE4LIB] Checksums for objects and not embedded metadata

2015-01-28 Thread Ronald Houk
Hello,

I like Danielle's idea.  I wonder if it wouldn't be a good idea to decouple
the metadata from the data permanently.  Exiftool allows you to export the
metadata in lots of different formats like JSON.  You could export the
metadata into JSON, run the checksums and then store the photo and the JSON
file in a single tar-ball. From there you could use a JSON editor to
modify/add metadata.

 It would be simple to reintroduce the metadata into the file when needed.
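
A rough sketch of that workflow, in case it's useful -- exiftool's -json
and -all= options plus the Python standard library. The file names are
placeholders, and note that exiftool leaves a *_original backup behind
when it rewrites a file.

import hashlib
import subprocess
import tarfile

image = "photo.tif"      # placeholder file names
sidecar = "photo.json"

# 1. Export the embedded metadata as JSON.
with open(sidecar, "w") as out:
    subprocess.check_call(["exiftool", "-json", image], stdout=out)

# 2. Strip the embedded metadata so the checksum covers image data only.
subprocess.check_call(["exiftool", "-all=", image])

# 3. Checksum the stripped file.
with open(image, "rb") as fh:
    print(image, hashlib.md5(fh.read()).hexdigest())

# 4. Bundle the stripped image and its JSON sidecar into one tarball.
with tarfile.open("photo.tar", "w") as tar:
    tar.add(image)
    tar.add(sidecar)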

On Mon, Jan 26, 2015 at 10:27 AM, danielle plumer 
wrote:

> Kyle,
>
> It's a bit of a hack, but you could write a script to delete all the
> metadata from images with ExifTool and then run checksums on the resulting
> image (see http://u88.n24.queensu.ca/exiftool/forum/index.php?topic=4902.0
> ).
> exiv2 might also work. I don't think you'd want to do that every time you
> audited the files, though; generating new checksums is a faster approach.
>
> I haven't tried this, but I know that there's a program called ssdeep
> developed for the digital forensics community that can do piecewise hashing
> -- it hashes chunks of content and then compares the hashes for the
> different chunks to find matches, in theory. It might be able to match
> files with embedded metadata vs. files without; the use cases described on
> the forensics wiki are finding altered (truncated) files, or reuse of source
> code.  http://www.forensicswiki.org/wiki/Ssdeep
>
> Danielle Cunniff Plumer
>
> On Sun, Jan 25, 2015 at 9:44 AM, Kyle Banerjee 
> wrote:
>
> > On Sat, Jan 24, 2015 at 11:07 AM, Rosalyn Metz 
> > wrote:
> >
> > >
> > >- How is your content packaged?
> > >- Are you talking about the SIPs or the AIPs or both?
> > >- Is your content in an instance of Fedora, a unix file structure,
> or
> > >something else?
> > >- Are you generating checksums on the whole package, parts of it,
> > both?
> > >
> >
> > The quick answer to this is that this is a low tech operation. We're
> > currently on regular filesystems where we are limited to feeding md5
> > checksums into a list. I'm looking for a low tech way that makes it
> easier
> > to keep track of resources across a variety of platforms in a
> decentralized
> > environment and which will easily adapt to future technology transitions.
> > For example, we have a bunch of stuff in Bepress and Omeka. Neither of
> > those is good for preservation, so authoritative files live elsewhere as
> do
> > a huge number of resources that aren't in these platforms. Filenames are
> > terrible identifiers and things get moved around even if people don't
> mess
> > with the files.
> >
> > We also are trying to come up with something that deals with different
> > kinds of datasets (we're focusing on bioimaging at the moment) and fits
> in
> > the workflow of campus units, each of which needs to manage tens of
> > thousands of files with very little metadata on regular filesystems. Some
> > of the resources are enormous in terms of size or number of members.
> >
> > Simply embedding an identifier in the file is a really easy way to tell
> > which files have metadata and which metadata is there. In the case at
> hand,
> > I could just do that and generate new checksums. But I think the generic
> > problem of making better use of embedded metadata is an interesting one
> as
> > it can make objects more usable and understandable once they're removed.
> > For example, just this past Friday I received a request to use an image
> > someone downloaded for a book. Unfortunately, he just emailed me a copy
> of
> > the image, described what he wanted to do, and asked for permission but
> he
> > couldn't replicate how he found it. An identifier would have been handy
> as
> > would have been embedded rights info as this is not the same for all of
> our
> > images. The reason we're using DOI's is that they work well for anything
> > and can easily be recognized by syntax wherever they may appear.
> >
> > On Sat, Jan 24, 2015 at 7:06 PM, Joe Hourcle <
> > onei...@grace.nascom.nasa.gov>
> >  wrote:
> >
> > >
> > > The problems with 'metadata' in a lot of file formats is that they're
> > > just arbitrary segments -- you'd have to have a program that knew
> > > which segments were considered 'headers' vs. not.  It might be easier
> > > to have it be able to compute a separate checksum for each segment,
> > > so that should the modifications change their order, they'd still
> > > be considered valid.
> > >
> >
> > This is what I seemed to be bumping up against so I was hoping there was
> an
> > easy workaround. But this is helpful information. Thanks,
> >
> > kyle
> >
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] Identifying misshelved items

2015-01-20 Thread Ronald Houk
Hi Cab,

Take a look at meld as well.  It is another beautiful open-source GUI for
diff-ing files. http://meldmerge.org/

On Tue, Jan 20, 2015 at 6:07 AM, Cab Vinton  wrote:

> Thanks, Ron & Becky.
>
> I remember Shelvar, but hadn't heard anything about it for a while. Adding
> tags to our entire collection is an initial hurdle, but could obviously be
> worthwhile in the long run.
>
> Ron's diff command approach is a bit too fine-grained for us as there are
> multiple acceptable shelflist orders for novels by the same author. I'd
> probably also need to come up with a way to make the output a bit more
> user-friendly for our pages :-)  That said, I'll probably still spend some
> time messing around with Windows equivalents of diff (PowerShell, WinMerge,
> etc.) as that's the OS our pages are most comfortable with.
>
> Thanks for the suggestions!
>
> Cab Vinton
> Plaistow Public Library
> Plaistow, NH
>
> On Thu, Jan 15, 2015 at 7:30 PM, Ronald Houk <
> rh...@ottumwapubliclibrary.org> wrote:
> > Just realized I had a typo. It should look something like:
> >
> > diff -Nau <(sort -k[[whatever field you want to sort by]] original.csv)
> > original.csv
> > On Jan 15, 2015 2:29 PM, "Ronald Houk" 
> > wrote:
> >
> >> This sounds like a perfect job for a unix/linux system.  I'd export this
> >> xls into a nice tab separated csv.  Then sort the column that contains
> the
> >> call no.  Then compare the sorted columns to the original column with
> diff.
> >>
> >> something along the lines of
> >>
> >> diff -Nau <(original.csv | sort -k[[whatever field you want to sort
> by]])
> >> original.csv
> >>
> >> For the dewey titles you could add the -n flag to sort.
> >>
> >> This is just a rough sketch, but with a little work I think it will work
> >> for you and, what's better, it won't cost you a dime. :)
> >>
> >> On Thu, Jan 15, 2015 at 1:32 PM, Cab Vinton  wrote:
> >>
> >>> We're doing inventory here and would love to combine this with finding
> >>> items out of call number order. (The inventory process simply updates
> the
> >>> datelastseen field.)
> >>>
> >>> Koha's inventory tool generates an XLS file in the following format
> >>> (barcodes, too, actually):
> >>>
> >>>   Title                Author                      Call number
> >>>   The last jihad :     Rosenberg, Joel,            FIC ROSEN
> >>>   Home repair /        Rosenbarg, Liz.             FIC ROSEN
> >>>   Abuse of power /     Rosen, Fred.                FIC ROSEN
> >>>   California angel /   Rosenberg, Nancy Taylor.    FIC ROSEN
> >>> What we'd ideally like is a programmatic method of:
> >>>
> >>> 1./ identifying items like Home Repair and Abuse of Power, and
> >>>
> >>> 2./ specifying where such misshelved titles are currently located.
> >>>
> >>> For fiction, we're mostly concerned with authors out of order (i.e.,
> title
> >>> order *within* the same author can be ignored). For non-fiction, Dewey/
> >>> call number order is, of course, the desired result.
> >>>
> >>> Thoughts on how best to tackle this? And no, shelf-reading while
> scanning
> >>> is not an acceptable solution :-)
> >>>
> >>> My VBA skills are seriously rusty at this point, and there are some
> >>> complicating factors (e.g., how to handle two books in a row which are
> >>> misshelved -- the second book's location should be compared to the last
> >>> correctly shelved book; see Rosen/ Rosenberg above).
> >>>
> >>> Has this wheel already been invented?
> >>>
> >>> Grateful for any & all suggestions!
> >>>
> >>> Best,
> >>>
> >>> Cab Vinton, Director
> >>> Plaistow Public Library
> >>> Plaistow, NH
> >>>
> >>
> >>
> >>
> >> --
> >> Ronald Houk
> >> Assistant Director
> >> Ottumwa Public Library
> >> 102 W. Fourth Street
> >> Ottumwa, IA 52501
> >> (641)682-7563x203
> >> rh...@ottumwapubliclibrary.org
> >>
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org


Re: [CODE4LIB] Identifying misshelved items

2015-01-15 Thread Ronald Houk
Just realized I had a typo. It should look something like:

diff -Nau <(sort -k[[whatever field you want to sort by]] original.csv)
original.csv
On Jan 15, 2015 2:29 PM, "Ronald Houk" 
wrote:

> This sounds like a perfect job for a unix/linux system.  I'd export this
> xls into a nice tab separated csv.  Then sort the column that contains the
> call no.  Then compare the sorted columns to the original column with diff.
>
> something along the lines of
>
> diff -Nau <(original.csv | sort -k[[whatever field you want to sort by]])
> original.csv
>
> For the dewey titles you could add the -n flag to sort.
>
> This is just a rough sketch, but with a little work I think it will work
> for you and, what's better, it won't cost you a dime. :)
>
> On Thu, Jan 15, 2015 at 1:32 PM, Cab Vinton  wrote:
>
>> We're doing inventory here and would love to combine this with finding
>> items out of call number order. (The inventory process simply updates the
>> datelastseen field.)
>>
>> Koha's inventory tool generates an XLS file in the following format
>> (barcodes, too, actually):
>>
>>   Title                Author                      Call number
>>   The last jihad :     Rosenberg, Joel,            FIC ROSEN
>>   Home repair /        Rosenbarg, Liz.             FIC ROSEN
>>   Abuse of power /     Rosen, Fred.                FIC ROSEN
>>   California angel /   Rosenberg, Nancy Taylor.    FIC ROSEN
>> What we'd ideally like is a programmatic method of:
>>
>> 1./ identifying items like Home Repair and Abuse of Power, and
>>
>> 2./ specifying where such misshelved titles are currently located.
>>
>> For fiction, we're mostly concerned with authors out of order (i.e., title
>> order *within* the same author can be ignored). For non-fiction, Dewey/
>> call number order is, of course, the desired result.
>>
>> Thoughts on how best to tackle this? And no, shelf-reading while scanning
>> is not an acceptable solution :-)
>>
>> My VBA skills are seriously rusty at this point, and there are some
>> complicating factors (e.g., how to handle two books in a row which are
>> misshelved -- the second book's location should be compared to the last
>> correctly shelved book; see Rosen/ Rosenberg above).
>>
>> Has this wheel already been invented?
>>
>> Grateful for any & all suggestions!
>>
>> Best,
>>
>> Cab Vinton, Director
>> Plaistow Public Library
>> Plaistow, NH
>>
>
>
>
> --
> Ronald Houk
> Assistant Director
> Ottumwa Public Library
> 102 W. Fourth Street
> Ottumwa, IA 52501
> (641)682-7563x203
> rh...@ottumwapubliclibrary.org
>


Re: [CODE4LIB] Identifying misshelved items

2015-01-15 Thread Ronald Houk
This sounds like a perfect job for a Unix/Linux system.  I'd export this
XLS into a nice tab-separated CSV, sort on the column that contains the
call number, and then compare the sorted file to the original with diff.

something along the lines of

diff -Nau <(original.csv | sort -k[[whatever field you want to sort by]])
original.csv

For the Dewey titles you could add the -n flag to sort.

This is just a rough sketch, but with a little work I think it will work
for you and, what's better, it won't cost you a dime. :)
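
If a Python script is easier to hand off than a shell one-liner, here's a
rough version of the same idea: walk the scanned list in order and flag
any row whose call number sorts before the last in-order row, which also
handles your point about comparing against the last correctly shelved
book. The file name and column layout are assumptions -- adjust them to
match the Koha export.

# Flag rows that sort before the last correctly ordered row.
# Assumes a tab-separated export with columns: title, author, call number.
import csv

with open("inventory.tsv", newline="") as fh:
    reader = csv.reader(fh, delimiter="\t")
    next(reader)  # skip the header row
    last_good = None
    for position, (title, author, call_number) in enumerate(reader, start=1):
        key = (call_number, author)
        if last_good is not None and key < last_good:
            print("Row %d looks misshelved: %s (%s, %s)"
                  % (position, title, author, call_number))
            continue  # keep comparing against the last in-order row
        last_good = key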

On Thu, Jan 15, 2015 at 1:32 PM, Cab Vinton  wrote:

> We're doing inventory here and would love to combine this with finding
> items out of call number order. (The inventory process simply updates the
> datelastseen field.)
>
> Koha's inventory tool generates an XLS file in the following format
> (barcodes, too, actually):
>
>   Title                Author                      Call number
>   The last jihad :     Rosenberg, Joel,            FIC ROSEN
>   Home repair /        Rosenbarg, Liz.             FIC ROSEN
>   Abuse of power /     Rosen, Fred.                FIC ROSEN
>   California angel /   Rosenberg, Nancy Taylor.    FIC ROSEN
> What we'd ideally like is a programmatic method of:
>
> 1./ identifying items like Home Repair and Abuse of Power, and
>
> 2./ specifying where such misshelved titles are currently located.
>
> For fiction, we're mostly concerned with authors out of order (i.e., title
> order *within* the same author can be ignored). For non-fiction, Dewey/
> call number order is, of course, the desired result.
>
> Thoughts on how best to tackle this? And no, shelf-reading while scanning
> is not an acceptable solution :-)
>
> My VBA skills are seriously rusty at this point, and there are some
> complicating factors (e.g., how to handle two books in a row which are
> misshelved -- the second book's location should be compared to the last
> correctly shelved book; see Rosen/ Rosenberg above).
>
> Has this wheel already been invented?
>
> Grateful for any & all suggestions!
>
> Best,
>
> Cab Vinton, Director
> Plaistow Public Library
> Plaistow, NH
>



-- 
Ronald Houk
Assistant Director
Ottumwa Public Library
102 W. Fourth Street
Ottumwa, IA 52501
(641)682-7563x203
rh...@ottumwapubliclibrary.org