I'm looking for some help:

1./ We would like to improve our security by using SecurID cards for
one-time passwords. Has anyone managed to use SecurID cards with the
AFS version of Kerberos? Is anyone using SecurID cards at all?

2./ We have a user group who develop and run seismic data processing
applications. They currently use NFS on a number of platforms, including
an SP-2, and suffer from problems with NFS cross-mounts, an inconsistent
name space, and different passwords on different machines (no NIS or
Kerberos). They are considering using AFS (either in our existing cell or
perhaps a separate cell) but have a total of 500GB of data. Individual
data sets can be up to 3GB, and the applications read them sequentially.
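For concreteness, here is a rough sketch of how such data might be carved
up into AFS volumes, one per data set, to give a single consistent name
space (the server, partition, group, and volume names below are
hypothetical, just to illustrate the idea):

    # Create a volume for one data set on a fileserver partition
    vos create fs1 /vicepa seismic.set001

    # Mount it into the cell's name space so all clients see the same path
    fs mkmount /afs/ourcell/seismic/set001 seismic.set001

    # Grant the seismic group read and lookup rights on the directory
    fs setacl /afs/ourcell/seismic/set001 seismic:users rl

Keeping each data set in its own volume would also let us redistribute
load between servers later with vos move, without changing the paths
users see.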

Does anyone have experience with very large AFS volumes? What servers and
disk configurations do you use? I wondered about using a couple of SPARC
1000s with 256GB of RAID disk on each as fileservers. To reduce network
I/O and improve performance, I wondered whether it would be possible to
let users run really large jobs on the fileservers themselves. Is this
sensible? Does anyone allow users to run jobs directly on AFS
fileservers?

3./ This question is way off-topic for info-afs, but subscribers to this
list probably come from the right mix of academic and commercial sites to
help me: I'm trying to gather some new ideas about how we run and operate
our IT, for our management team. Replies to this are probably best mailed
to me directly, to keep the noise on info-afs down. Here goes (apologies
if you are not interested in this):

Shell International Exploration and Production has recently undergone an
extensive reorganisation. As part of it, we are looking at new ways to
conduct our IT business. In the past, Shell has been very (too?)
conservative in the way we use IT. It has taken us too long to adopt new
technologies, and we have probably relied too much on third-party
products and 'mainframe'-style solutions rather than making use of things
such as GNU software or tailoring software in-house ourselves. We find it
difficult to implement new technologies ourselves because of the problems
of keeping staff trained and the inertia inevitable in such a large
company. By the time we get a technology implemented, it is too late to
get full value from it. We outsource some services to large service
companies, but probably don't make enough use of business partnerships
with smaller high-tech companies or universities to improve our
infrastructure.

Here at Shell Research we are far more advanced in our use of technology
than other parts of Shell, but we could still improve the way we do
things, and also provide advice for other Shell group companies. I have
the following questions:

How do other large corporations such as banks conduct their IT? 

What do you outsource? The running of services, or project work such as
a DCE/DFS evaluation?

Who do you outsource to? Large companies or small specialist companies? 

How do you maintain the right balance between retaining in-house skills
and outsourcing things that are not core business?

Who are your business partners for running and operating key areas of IT?

Do you make use of universities to tailor code to fit your infrastructure
(e.g. Kerberising applications)?

Do you make use of third-party products such as HP OpC, or do you write
or buy your own code for systems monitoring and administration?

How do you keep up with new technologies? For instance, if we were
suddenly forced for some reason to implement a production DCE cell, we
would be horribly short of skills. How do you cope with such scenarios?

If you are a university or a small niche-market company, do large
corporations make use of your skills? What can you provide?

What are the current trends for IT provision within large companies?

Any suggestions about anything?


Thanks for your time,

--
Andy Haxby                              [EMAIL PROTECTED]
Systems Healer
Shell Research Technical Services
Rijswijk Netherlands
