Re: [CODE4LIB] Responsive website question

2016-02-05 Thread Daron Dierkes
It sounds like you're doing this within a browser, so why not find some
sort of magnifying glass extension?  Like maybe this one?
https://addons.mozilla.org/en-US/firefox/addon/magnifier/
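
The flip itself happens because browser zoom shrinks the viewport width as
measured in CSS pixels, so the site's mobile breakpoint starts matching
exactly as it would on a narrow screen.  A sketch of the kind of rule
likely involved (the 768px threshold and selectors are invented):

    /* Zooming in reduces the CSS-pixel viewport width, so zooming far
       enough crosses the same breakpoint a phone would. */
    @media (max-width: 768px) {
      .desktop-nav { display: none; }
      .mobile-menu { display: block; }
    }

Media queries can't tell zoom apart from a genuinely small window, so short
of raising the breakpoint there's no clean CSS-only fix - hence the
magnifier suggestion.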



On Fri, Feb 5, 2016 at 12:53 PM, Junior Tidal 
wrote:

> Hi Kyle,
>
> Our site is also responsive. As a work around, I've used screenshots of
> the site.
>
> Hope that helps!
>
> Best,
> Junior
>
> Junior Tidal
> Associate Professor
> Web Services and Multimedia Librarian
> New York City College of Technology, CUNY
> 300 Jay Street, Rm A434
> Brooklyn, NY 11201
> 718.260.5481
>
> http://library.citytech.cuny.edu
>
>
> >>> Kyle Breneman  2/5/2016 1:40 PM >>>
> Happy Friday, everybody!
>
> Our library recently got a shiny new, responsive-esque website. The
> reference librarians frequently zoom in
> on our homepage during class instruction, and have noticed that after they
> zoom in a bit, our homepage switches from desktop to the mobile layout.
>
> Is there any easy way around this?  In other words, is it possible to fix
> the site so that, if a user is on a desktop/laptop, zooming in on the
> homepage will *not* flip the user over to the mobile layout?
>
> Thanks for your help!
>
> Kyle
>


Re: [CODE4LIB] Accordion menus & mobile web best practices

2015-12-22 Thread Daron Dierkes
There are not many technical hurdles to implementing accordion menus.  You
could do it pretty easily with something like jQuery UI.
https://jqueryui.com/accordion/

Whether it works well on a phone or not depends on how you go about
implementing it.
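
For what it's worth, the self-closing behavior you asked for is jQuery UI's
default.  A minimal sketch (untested; assumes jQuery and jQuery UI are
already loaded, and the markup is a placeholder):

    <div id="accordion">
      <h3>Research Guides</h3>
      <div><p>guide links here</p></div>
      <h3>Hours</h3>
      <div><p>opening hours here</p></div>
    </div>
    <script>
      // Opening one panel closes the currently open one by default;
      // collapsible: true additionally lets the user close every panel.
      $("#accordion").accordion({ collapsible: true, heightStyle: "content" });
    </script>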

On Tue, Dec 22, 2015 at 1:20 PM, Kyle Breneman 
wrote:

> Thanks, all, for your suggestions and insight.  Your replies pointed out
> several things that I hadn't been thinking about, including accessibility
> and designing for future devices.
>
> Kyle
>
> On Fri, Dec 18, 2015 at 3:01 PM, Kyle Breneman 
> wrote:
>
> > Our library website is currently being redesigned to be responsive.  The
> > work is being done by an outside design firm and the project is being
> > managed by University Relations, our school's PR department.
> >
> > The mobile version of our responsive site has several accordion menus
> > (similar to attached).  I've asked for these accordion menus to be
> > self-closing; in other words, there is never more than one expansion of
> > an accordion open at one time - if a user clicks to open another part of
> > the accordion, the first part simultaneously slides shut.
> >
> > I've been told that self-closing accordions are contrary to best
> > practices:
> >
> > "Unfortunately, no, as this isn’t best practice. Accordions should
> require
> > a click each to open and close; in other words, nothing on your page
> should
> > move without a user action. This is true throughout our sites. See the
> > universal Quick Links in mobile."
> >
> > Is it true that self-closing accordion menus run counter to best
> > practices in mobile web design?  The sort of behavior that I'm asking
> > for seems, to me, intuitive and expected.
> >
> > Thanks for your input!
> >
> > Kyle Breneman
> > Integrated Digital Services Librarian
> > University of Baltimore
> >
>


Re: [CODE4LIB] Librarian seeks online tool to create interactive network map

2015-05-05 Thread Daron Dierkes
Gephi is very easy:
http://gephi.github.io/

Just download it, add your data as comma-separated values, and mess with it
until it is pretty.  No Python or R required.
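
For example, an edge list this simple (names invented) is enough for
Gephi's spreadsheet importer - one row per relationship, with Source and
Target columns.  Gephi creates the nodes from whatever shows up in those
two columns:

    Source,Target
    Digital Atlas Project,University A
    Digital Atlas Project,Jane Smith
    Jane Smith,University B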





On Tue, May 5, 2015 at 3:17 PM, Pikas, Christina K. <
christina.pi...@jhuapl.edu> wrote:

> NodeXL, iGraph in R, iGraph in Python... what's your favorite language?  I
> find iGraph in R very friendly and I really want to try rBokeh to see an
> interactive visualization.  So maybe more info on which skills you can
> leverage?
>
> Christina
>
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@listserv.nd.edu] On Behalf Of
> Kimberly Silk
> Sent: Tuesday, May 05, 2015 3:57 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] Librarian seeks online tool to create interactive
> network map
>
> Hey everyone,
>
> I am looking for a more effective way to show how various projects and
> people across a number of universities are related. I've looked at
> mind-mapping tools (see
> http://lifehacker.com/five-best-mind-mapping-tools-476534555) and also
> http://www.thebrain.com/, but I think what I'm really trying to create is
> akin to a social network map, something like you see at
> http://flowingdata.com/2014/06/22/clubs-that-connect-world-cup-national-teams/
> but of course I don't need that level of sophistication -- though the
> interaction is sweet.
>
> any ideas, mind hive?
>
> Kim
>
>
> --
> Kimberly Silk, MLS
> Special Projects Officer, IDSE, Canadian Research Knowledge Network
> Principal, BrightSail Research & Consulting <http://kimberlysilk.com/brightsail/>
>  & Library Research Network <http://libraryresearchnetwork.org/>
>
> Chapter Cabinet Chair-Elect, SLA
>
> M: (416) 721-8955
> kimberly.s...@gmail.com
> LinkedIn: http://ca.linkedin.com/in/kimberlysilk/
> Twitter: @kimberlysilk
>
> "I really didn't realize the librarians were, you know, such a dangerous
> group. They are subversive. You think they're just sitting there at the
> desk, all quiet and everything. They're like plotting the revolution, man.
> I wouldn't mess with them."
> --- Michael Moore, film maker
>


Re: [CODE4LIB] Cover pages and Google

2014-11-25 Thread Daron Dierkes
Perhaps it depends on how you are generating PDFs.  If it is straight
Acrobat, then it should be as easy as making a PDF of all but the cover,
running OCR, then adding the cover in as another page.  As long as you do
not generate OCR again, the added pages should stay image only.  I haven't
tried it, but I'm pretty sure that's possible.

If it is a question specific to your repository architecture then it might
be harder.
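
If command-line tools are on the table, here is an untested sketch of the
same idea using pdftoppm (from Poppler), img2pdf, and pdftk; the filenames
and the 300 dpi setting are placeholders:

    # rasterize the cover page to an image (output name may vary slightly)
    pdftoppm -f 1 -l 1 -r 300 -png cover.pdf cover
    # wrap the image back into a one-page, image-only PDF
    img2pdf cover-1.png -o cover-image.pdf
    # prepend it to the OCR'd body
    pdftk cover-image.pdf body.pdf cat output final.pdf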




On Tuesday, November 25, 2014, Dan Scott  wrote:

> Could you provide some examples of the resources that you're excluding and
> searches that return those results (maybe with screen shots in case Google
> serves up different results to different users)? I'm having a bit of
> trouble understanding your problem description.
>
> I'll admit that my schema.org hammer is itchy, but I don't want to jump to
> conclusions as the problem might not even be a construction issue, let
> alone a nail :)
> On 24 Nov 2014 22:57, "Bernadette Houghton" <
> bernadette.hough...@deakin.edu.au > wrote:
>
> > We've discovered that cover pages we add to items in our research
> > repository have the unwelcome side effect of causing Google to display
> > the cover page citation in search results, rather than the intro or
> > preface.  The problem doesn't occur in Google Scholar, just the main
> > Google search engine.
> >
> > One way to avoid this problem is to have the cover page formatted as an
> > image PDF rather than a text-readable PDF. Can anyone recommend
> > software that will convert a text-readable PDF to an image PDF?
> >
> > TIA
> >
> > Bernadette Houghton
> > Digitisation and Preservation Librarian
> > Library
> > Deakin University
> > Locked Bag 2, Geelong, VIC 3220
> > +61 3 52278230
> > bernadette.hough...@deakin.edu.au
> > www.deakin.edu.au
> > Deakin University CRICOS Provider Code 00113B
> >
>


Re: [CODE4LIB] Any good "introduction to SPARQL" workshops out there?

2014-05-02 Thread Daron Dierkes
For those with a lot of time on their hands, there's a site out there with
loads of free ebooks on such things including the SPARQL text mentioned
above.  Here: http://it-ebooks-search.info/search?q=sparkql
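
If you just want a taste before committing to a workshop, a minimal SPARQL
query looks like this (the SKOS vocabulary matches the webinar topic quoted
below; point it at whatever endpoint you have):

    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?label
    WHERE {
      ?concept a skos:Concept ;
               skos:prefLabel ?label .
    }
    LIMIT 10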




On Fri, May 2, 2014 at 10:39 AM, Hutt, Arwen  wrote:

> Thanks to both Owen and Deb!
> These are some great resources; I'm going to explore them more.  I really
> appreciate the help!
> Arwen
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> Debra Shapiro
> Sent: Thursday, May 01, 2014 9:33 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Any good "introduction to SPARQL" workshops out
> there?
>
> I organized a SPARQL webinar that LITA put on in February. The instructor
> was Bob DuCharme, who also wrote an O'Reilly book -
> http://www.worldcat.org/oclc/752976161
>
> You may be able to view it at the link below; I expect DuCharme would be
> willing to contract with UCSD to tailor something for you -
>
> HTH,
> deb
>
> > Thank you for participating in today's LITA webinar "SKOS, SPARQL, and
> > vocabulary management," part three of a three-part series of webinars
> > on Linked Data.
> >
> > You may access the recording of today's session here:
> > http://ala.adobeconnect.com/p1n8obr32vd/
>
> On May 1, 2014, at 11:23 AM, "Hutt, Arwen"  wrote:
>
> > We're interested in an introduction-to-SPARQL workshop for a smallish
> > group of staff.  Specifically, an introduction for fairly tech-comfortable
> > non-programmers (in our case metadata librarians), as well as a
> > refresher for programmers who aren't using it regularly.
> >
> > Ideally (depending on cost) we'd like to bring the workshop to our
> > staff, since it'll allow more people to attend, but any recommendations
> > for good introductory workshops or tutorials would be welcome!
> >
> > Thanks!
> > Arwen
> >
> > 
> > Arwen Hutt
> > Head, Digital Object Metadata Management Unit
> > Metadata Services, Geisel Library
> > University of California, San Diego
> > 
>
> dsshap...@wisc.edu
> Debra Shapiro
> UW-Madison SLIS
> Helen C. White Hall, Rm. 4282
> 600 N. Park St.
> Madison WI 53706
> 608 262 9195
> mobile 608 712 6368
> FAX 608 263 4849
>


Re: [CODE4LIB] Tool Library 2.0

2014-02-27 Thread Daron Dierkes
In St. Louis, to my knowledge we do not have a makerspace as part of a
library.  We do, however, have a hackerspace called Arch Reactor, and a new
TechShop is coming soon, which I guess is maybe something similar but
different?

Could any of you help clarify the terms for me and maybe explain what
libraries have to do with them?


On Thu, Feb 27, 2014 at 3:16 PM, Cary Gordon  wrote:

> Personally, I would put soldering irons in phase 2, as they really do
> require training to use. Without a pretty decent skillset, you can burn
> through a lot of LED strips, etc.
>
> My lab consists of a Sparkfun kit hot-glued to the top of a parts box.
> This arrangement has been very helpful for my chronic mislayer self. It's a
> makerspace in a box.
>
> Cary
>
> http://www.flickr.com/photos/36809832@N00/12821466713/
>
> Cary
>
> On Feb 27, 2014, at 12:33 PM, Edward Iglesias 
> wrote:
>
> > Hello All,
> >
> > A colleague and I were recently asked to help create a "tool library for
> > makerspaces" for a local state library consortium. The idea being they
> > would lend out kits such as Arduinos with breadboards to libraries that
> > are thinking of setting up some kind of makerspace but unsure where to
> > start.
> >
> > So, do any of you have any "must haves" for such a collection?  I'm
> > thinking:
> >
> > soldering irons
> > Arduinos
> > Raspberry Pis
> > Flora
> > breadboards
> > lots of connectors
> > LEDs
> >
> > etc...
> >
> > Thanks,
> >
> > Edward Iglesias
>


Re: [CODE4LIB] Python CMSs

2014-02-13 Thread Daron Dierkes
If you're new to Python and Django there will be a steep learning curve for
you, but probably a much steeper one for people after you who may not do
Python at all.  Drupal and WordPress are limited, but non-technical
librarians can still get in pretty easily to fix typos and add links at
least.  Codecademy has a decent intro Python course:
http://www.codecademy.com/tracks/python
Udemy has a few python courses with some django as well.

A big reason why I've been learning Django is to try to understand how our
library can work with the various DH projects that use our collections. If
we need to at some point take on permanent ownership of these projects or
if we want to develop them further, a basic familiarity on the part of our
library staff seems like a good idea.


Re: [CODE4LIB] Creating pdfs from images and their text

2014-01-17 Thread Daron Dierkes
But Raffaele, how do you generate the hOCR in the first place if you're
using human-generated transcripts and not OCR?  Hand-coding each page would
take forever.


On Fri, Jan 17, 2014 at 3:24 AM, raffaele messuti <
raffaele.mess...@gmail.com> wrote:

> Padraic Stack wrote:
> > What is a straightforward way to combine the text with overlaid images
> > to create searchable pdfs?
>
> Having the transcription in hOCR[1] format, the tool you need is
> hocr2pdf[2].
> I never tried it for PDFs; years ago I made some DjVu files following
> this tutorial[3].
>
> [1] http://en.wikipedia.org/wiki/HOCR
> [2] http://manpages.ubuntu.com/manpages/lucid/man1/hocr2pdf.1.html
> [3] https://philikon.wordpress.com/2009/07/23/digitizing-books-to-djvu/
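>
> Usage is roughly like this (untested; filenames are placeholders):
>
>     hocr2pdf -i scan.tiff -o output.pdf < transcription.hocr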
>
> ciao.
>
> --
> raffaele
>


Re: [CODE4LIB] Creating pdfs from images and their text

2014-01-16 Thread Daron Dierkes
I don't think I can answer your question, but we have a similar problem.

I'm not sure about all OCR programs, but the version of Tesseract I've seen
in Islandora creates two files: one is the .txt file you would expect, and
the other is an hOCR file with very interesting markup linking words in the
transcript to coordinates on their associated JPEG or TIFF.  For manuscript
materials, we have human-generated transcripts that can be swapped in
Islandora with the machine-generated OCR, but there's no way to easily map
the words onto the image, since editing the hOCR by hand is only useful if
you have a really good sense of where the coordinates fit on your image.
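
For anyone who hasn't seen it, that markup looks roughly like this -
ordinary HTML spans whose title attributes carry pixel bounding boxes
(coordinates here are invented):

    <span class='ocr_line' title='bbox 120 340 880 392'>
      <span class='ocrx_word' title='bbox 120 340 236 392'>Dear</span>
      <span class='ocrx_word' title='bbox 252 340 470 392'>Margaret,</span>
    </span>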

There are programs out there to get better coordinates for human-generated
transcripts, and http://www.shared-canvas.org/ seems to be one of the
better tools available for that purpose, but I haven't found DM, T-PEN,
Scripto, etc. easy to integrate across really large collections.  That kind
of transcription program does let users match words to their locations on
pages.  The most rational public transcription program out there, IMO, is
the DIY History site at the University of Iowa
(http://diyhistory.lib.uiowa.edu/), but I don't see how those transcripts
can get mapped onto images.

There are some uiowa.edu people on this listserv.  I'm curious to know how
they make their images and transcripts speak to each other.









On Thu, Jan 16, 2014 at 11:21 AM, Padraic Stack wrote:

> Hi folks,
>
> I have a number of typescript / manuscript images on which it is quite
> time-consuming to run OCR. (Or, more accurately, it is quite
> time-consuming to correct the OCR.)
>
> For some of these I have text files containing accurate transcriptions. In
> other cases I have TEI files with these transcriptions.
>
> What is a straightforward way to combine the text with overlaid images to
> create searchable pdfs?
>
> I know my way around the command line and can follow tutorials but I'm not
> a programmer so the more straightforward the solution the better.
>
> I have had a go with pdftkBuilder and a result can be seen here [
> https://www.dropbox.com/s/fxp6rnt24043aez/result3.pdf] but there are a
> number of problems:
>
> 1. it involves 'printing' the text to PDF and 'stamping' the image over
> it. The result entails a margin unless the image matches a standard paper
> size.
> 2. the underlying text doesn't match up to the image. I would love it if
> it could, but can live with it if it can't.
> 3. it is very time-consuming - ideally I would like a solution that could
> be scripted and left to run.
>
> Any advice would be greatly appreciated.
>
>
>
> --
>
> Padraic
>
>
> Padraic Stack | Digital Humanities Support Officer | NUI Maynooth |
> padraic.st...@nuim.ie | Phone: Mon: 01 474 7187 Tue - Fri: 01 474 7197
>


Re: [CODE4LIB] links from finding aid to digital object

2014-01-14 Thread Daron Dierkes
What alternatives has your survey suggested?  Would anyone suggest that a
finding aid and its digital contents should not be in communication?
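
For concreteness, in EAD the link usually rides on a <dao> element inside
the container list, something like this (values invented; namespace
declarations omitted):

    <c02 level="item">
      <did>
        <unittitle>Letter, 1863 June 12</unittitle>
        <container type="box">4</container>
        <dao xlink:href="http://example.edu/images/letter-1863-06-12.jpg"
             xlink:title="digitized letter"/>
      </did>
    </c02>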


On Tue, Jan 14, 2014 at 10:12 AM, Steven Majewski wrote:

>
> On Jan 14, 2014, at 10:54 AM, Johnston, Leslie  wrote:
>
> > I suspect there are some in Virginia Heritage, but I don't know how to
> > limit a search to finding aids with links.
> >
> > http://vaheritage.org/
>
> No way to search by links, as hrefs are part of XML tags, and only text,
> not tags, is indexed.
> (Searching for “http”, for example, finds all of the URLs written in
> plain text only - no links.)
>
> But if you need an example, the guide with the most links is probably this
> one:
>
> A Calendar of the Jefferson Papers of the University of Virginia
>
>
> — Steve Majewski / UVA Alderman Library
>
> >
> >> -Original Message-
> >> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> >> Edward Summers
> >> Sent: Tuesday, January 14, 2014 10:39 AM
> >> To: CODE4LIB@LISTSERV.ND.EDU
> >> Subject: [CODE4LIB] links from finding aid to digital object
> >>
> >> Hi all,
> >>
> >> I was wondering if anyone can point me at example(s) of finding aids
> >> (either EAD XML or HTML) that are linked to a digital object of some
> >> kind. For example, a container list that links to a digital image that
> >> is available on the Web.
> >>
> >> I'm doing a bit of an informal survey, so if you see someone has
> >> responded but you have a different example, please send it along
> >> either here on the list or to me directly.
> >>
> >> Thanks!
> >> //Ed
> >>
> >> PS. sorry for the duplication.
>
>