Re: [CODE4LIB] IBM disk array expansion

2013-05-14 Thread Graham Stewart
We used to run IBM FAStT600 storage servers here, which are the 
ancestors of the DS series, using the same Engenio controllers.


With those you could add expansion units without downtime. The 
procedure involved cabling up the new units, powering them on, and 
then, once they were recognized, inserting the disks a couple at a 
time, waiting until each pair was recognized before continuing.


Of course, advice from IBM support would be a good idea :-) ... For 
example, IBM would sometimes caution that firmware in the new ESMs could 
be at a higher level than that in the existing ESMs, which could cause 
problems.


Best of luck!
--
Graham Stewart
Network and Storage Services Manager
Information Technology Services
University of Toronto Libraries
416-978-6337


On 13-05-14 09:59 AM, Adam Wead wrote:

Hi all,

Hardware question for anyone with experience using IBM products.

I have a DS3500 disk array with dual controllers.  I've installed an expansion 
unit, with dual ESMs, and want to connect it up with the array without having 
to power everything down.

I'm almost positive I can do this, but haven't been able to find a definitive 
answer.  Can anyone speak to this from experience?  Are there any special 
procedures or pitfalls?

Thanks in advance,

…adam

__
Adam Wead
Systems and Digital Collections Librarian
Library + Archives
Rock and Roll Hall of Fame and Museum
216.515.1960
aw...@rockhall.org

This communication is a confidential and proprietary business communication. It 
is intended solely for the use of the designated recipient(s). If this 
communication is received in error, please contact the sender and delete this 
communication.



Re: [CODE4LIB] internet archive experiment

2010-05-14 Thread Graham Stewart

Hi,

I may be able to assist you with the content mirroring part of this. 
The University of Toronto Libraries hosts one of the Internet Archive 
scanning operations through the Open Content Alliance and we host 
content originally scanned by the Archive through the OCUL 
Scholarsportal project at this URL:  http://books.scholarsportal.info


In order to retrieve content from the IA (since it is sent immediately 
to San Francisco as it is scanned) I've written a set of scripts that 
download content based on various parameters.


-the starting point is a list of IA identifiers and other metadata 
pulled from an advanced search query.


-from that list, the file types you want to download (*.pdf, 
*_marc.xml, *.djvu, *_meta.xml, etc.) can be specified.


-The downloads are then queued and retrieved to specified local file 
systems.


The system uses a MySQL backend, Perl, and curl for HTTP downloads, with 
an option for rsync, and is designed to run on Linux systems.  It contains 
fairly sophisticated tools for checking download success: file size 
comparison with the Archive, MD5 checksum verification, and re-running 
against the Archive in case content changes.  It can be adapted to a 
variety of needs.
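A rough sketch of the same approach in Python (the production scripts described above are Perl; the advancedsearch.php parameters are the Archive's public ones, but the helper names and default suffix list here are illustrative assumptions):

```python
import hashlib
from urllib.parse import urlencode

IA_SEARCH = "https://archive.org/advancedsearch.php"

def search_url(query, fields=("identifier",), rows=50):
    """Build an Internet Archive advanced-search URL returning JSON."""
    params = [("q", query), ("rows", str(rows)), ("output", "json")]
    params += [("fl[]", f) for f in fields]
    return IA_SEARCH + "?" + urlencode(params)

def wanted_files(filenames, suffixes=(".pdf", "_marc.xml", "_meta.xml")):
    """Keep only the per-item files whose names end in a wanted suffix."""
    return [f for f in filenames if f.endswith(tuple(suffixes))]

def md5_ok(data, expected_hex):
    """Compare downloaded bytes against the MD5 the Archive publishes."""
    return hashlib.md5(data).hexdigest() == expected_hex
```

For example, wanted_files(["x.pdf", "x.djvu", "x_marc.xml"]) keeps only the PDF and the MARC record under the default suffixes; the actual downloading and queueing would sit on top of helpers like these.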


So far we've downloaded about 400,000 PDFs and associated metadata 
(about 14 TB altogether).  It could also be used, for example, to 
download just the MARC records for integration into an ILS (a separate 
challenge, of course), and to build pointers to the Archive's content 
for the full text.


We have had plans to open source it for some time, but other work always 
gets in the way.  If you (or anyone) want to take a look and try it out, 
just let me know.


--
Graham Stewart  graham.stew...@utoronto.ca  416-550-2806
Network and Storage Services Manager, Information Technology Services
University of Toronto Libraries
130 St. George Street
Toronto, Ontario, Canada M5S 1A5

On 10-05-14 03:34 PM, Eric Lease Morgan wrote:

We are doing a tiny experiment here at Notre Dame with the Internet Archive, 
specifically, we are determining whether or not we can supplement a special 
collection with full text content.

We are hosting a site colloquially called the Catholic Portal -- a collection 
of rare, infrequently held, and uncommon materials of a Catholic nature. [1] 
Much of the content of the Portal is metadata -- MARC and EAD records/files. I 
think the Portal would be more useful if it contained full text content. If it 
did, then indexing would be improved and services against the texts could be 
implemented.

How can we get full text content? This is what we are going to try:

   1. parse out identifying information from
  metadata (author names, titles, dates,
  etc.)

   2. construct a URL in the form of a
  Advanced Search query and send it to the
  Archive

   3. get back a list of matches in an XML
  format

   4. parse the result looking for the best
  matches

   5. save Internet Archive keys identifying
  full text items

   6. mirror Internet Archive content locally
  using keys as pointers

   7. update local metadata files pointing to
  Archive content as well as locally
  mirrored content

   8. re-index local metadata
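Steps 1 through 5 above can be sketched as follows. This is a hedged illustration of the plan, not actual Portal code; the JSON field names and the similarity cutoff are assumptions:

```python
from difflib import SequenceMatcher
from urllib.parse import urlencode

def build_query(author, title):
    """Step 2: turn parsed metadata into an advanced-search URL."""
    q = f'creator:("{author}") AND title:("{title}")'
    return "http://www.archive.org/advancedsearch.php?" + urlencode(
        [("q", q), ("fl[]", "identifier"), ("fl[]", "title"), ("output", "json")]
    )

def best_match(wanted_title, docs, cutoff=0.8):
    """Steps 4-5: score returned docs against the wanted title; keep the
    identifier of the closest match, or None if nothing clears the cutoff."""
    scored = [
        (SequenceMatcher(None, wanted_title.lower(), d["title"].lower()).ratio(), d)
        for d in docs
        if "title" in d
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    if scored and scored[0][0] >= cutoff:
        return scored[0][1]["identifier"]
    return None
```

Step 6 (mirroring) would then fetch files for each saved identifier, and steps 7-8 fold the identifiers back into the local metadata before re-indexing.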

If we are (somewhat) successful, then search results would not only have pointers to the 
physical items, but they would also have pointers to the digitized items. Not only could 
they have pointers to the digitized items, but they could also have pointers to 
services against the texts such as make word cloud, display concordance, plot 
word/phrase frequency, etc. These latter services are spaces where I think there is great 
potential for librarianship.

Frankly, because of the Portal's collection policy, I don't expect to find very 
much material. On the other hand, the same process could be applied to more 
generic library collections where more content may have already been digitized.

Wish us luck.

[1] Catholic Portal - http://www.catholicresearch.net/
[2] Advanced search - http://www.archive.org/advancedsearch.php



Re: [CODE4LIB] calling another webpage within CGI script - solved!

2009-11-24 Thread Graham Stewart

Hi,

We run many Library / web / database applications on RedHat servers with 
SELinux enabled.  Sometimes it takes a bit of investigation and  horsing 
around but I haven't yet found a situation where it had to be disabled. 
 setsebool and chcon can solve most problems and SELinux is an 
excellent enhancement to standard filesystem and ACL security.
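For example, the usual first moves when SELinux blocks a web application look roughly like this (a sketch of common Apache-related commands; the directory path is made up for illustration):

```shell
# See why recent requests were denied (reads the audit log).
ausearch -m avc -ts recent | audit2why

# Allow httpd to make outbound network connections; -P persists the
# boolean across reboots.
setsebool -P httpd_can_network_connect 1

# Relabel a directory so Apache is allowed to read it (hypothetical path).
chcon -R -t httpd_sys_content_t /var/www/myapp
```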


-Graham

--
Graham Stewart
Network and Storage Services Manager, Information Technology Services
University of Toronto Library
130 St. George Street
Toronto, Ontario, Canada  M5S 1A5
graham.stew...@utoronto.ca
Phone: 416-978-6337 | Mobile: 416-550-2806 | Fax: 416-978-1668


Ken Irwin wrote:

Hi all,

Thanks for your extensive suggestions and comments. A few folks suggested that 
SELinux might be the issue. Tobin's suggestion to change one of the settings 
proved effective:
# setsebool -P httpd_can_network_connect 1.

Thanks to everyone who helped -- I learned a lot.

Joys
Ken

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Greg 
McClellan
Sent: Tuesday, November 24, 2009 10:04 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] calling another webpage within CGI script

Hi,

I had a similar problem a while back which was solved by disabling 
SELinux. http://www.crypt.gen.nz/selinux/disable_selinux.html


-Greg


Re: [CODE4LIB] calling another webpage within CGI script - solved!

2009-11-24 Thread Graham Stewart

An interesting topic ... heading out to cast my vote now.

In our environment, about 6 years ago we informally identified the gap 
(grey area, war, however it is described) between server / network 
managers and developers / Librarians as an obstacle to our end goals, and 
we have put considerable effort into closing it.  The key efforts have 
been communication (more planning, meetings, informal sessions), 
collaboration (no-one works in a vacuum), and the willingness to 
expand/stretch job descriptions (programmers sometimes participate in 
hardware / OS work, and sysadmins attend interface / application 
planning meetings).  Supportive management helps.


The end result is that sysadmins try as hard as possible to fully 
understand what an application is doing/requires on their 
hardware/networks, and programmers almost never run any applications 
that sysadmins don't know about.


So, SELinux has never been a problem because we know what a server needs 
to do before it ends up in a developer's hands and developers know not 
to pound their heads against the desk for a day before talking to 
sysadmins about something that doesn't work.  Well, for the most part, 
anyway ;-)


-Graham

Ross Singer wrote:

On Tue, Nov 24, 2009 at 11:18 AM, Graham Stewart
graham.stew...@utoronto.ca wrote:

We run many Library / web / database applications on RedHat servers with
SELinux enabled.  Sometimes it takes a bit of investigation and  horsing
around but I haven't yet found a situation where it had to be disabled.
setsebool and chcon can solve most problems and SELinux is an excellent
enhancement to standard filesystem and ACL security.


Agreed that SELinux is useful, but it is a teetotal pain in the keister
if you're ignorantly working against it because you didn't actually
know it was there.

It's sort of the perfect embodiment of the disconnect between the
developer and the sysadmin.  And, if this sort of tension interests
you, vote for Bess Sadler's presentation at Code4lib 2010, "Vampires
vs. Werewolves: Ending the War Between Developers and Sysadmins with
Puppet" (and for anything else that interests you).

http://vote.code4lib.org/election/index/13

-Ross Bringin' it on home Singer.


--
Graham Stewart
Network and Storage Services Manager, Information Technology Services
University of Toronto Library
130 St. George Street
Toronto, Ontario, Canada  M5S 1A5
graham.stew...@utoronto.ca
Phone: 416-978-6337 | Mobile: 416-550-2806 | Fax: 416-978-1668