I've been asked to pass along some observations on the 202 objectives. Some
of them might lead to tweaks in the 202, but they might also justify a
split where people can choose a "data center" path or a "cloud/virtual
server" path, since the needs are so different. Or even a split between a
traditional sysadmin path and a devops path.

Intro - I'm not a sysadmin (except for my home network); I'm a developer
at SnapLogic, an integration provider. We have an ops team responsible for
the production servers, but the dev team is increasingly taking on the
responsibility of configuring the supported server configurations (for
internal testing) and duplicating specific customer configurations (for
bug fixes). My observations are based on that perspective.

FTP vs. Databases

Some customers still use traditional FTP servers, but everyone has a
database. Usually multiple ones - different versions, different
authentication mechanisms, etc. Since there's a limited number of
questions, I think there will be more value in asking about database
configuration than FTP server configuration.

Email

Email is similar. Many sites still run their own servers, but it's
incredibly common for sites to use hosted accounts at Gmail or Outlook.
People still need to know how to configure their servers to forward email
to these services. On my personal systems I can use an application-specific
password, but that doesn't scale unless all servers use the same password
or the satellite systems all forward mail to a handful of forwarders that
hold those application-specific passwords.
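
For concreteness, the forwarding setup I have in mind looks roughly like
this on a Postfix satellite system. This is only a sketch - the hostname,
account, and password are placeholders:

    # Relay all outbound mail through a hosted account (placeholders throughout).
    postconf -e 'relayhost = [smtp.gmail.com]:587'
    postconf -e 'smtp_sasl_auth_enable = yes'
    postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
    postconf -e 'smtp_sasl_security_options = noanonymous'
    postconf -e 'smtp_tls_security_level = encrypt'
    echo '[smtp.gmail.com]:587 user@example.com:app-specific-password' \
        > /etc/postfix/sasl_passwd
    postmap /etc/postfix/sasl_passwd    # builds the hash map Postfix reads
    chmod 600 /etc/postfix/sasl_passwd*
    systemctl reload postfix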

Also - anti-spam measures like SPF and DKIM. I'm probably biased here,
but I would prefer to see a question on this (what they are, how to
configure your DNS entries to support them, what the concerns are when you
use a scalable cloud solution where email could come from previously
unknown IP addresses, etc.).
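
A sketch of what the DNS side looks like - the domain, selector, and key
below are placeholders:

    # The TXT records to publish in your zone:
    #   example.com.                  TXT "v=spf1 mx include:_spf.google.com ~all"
    #   mail._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<public key>"
    # Verify what is actually published:
    dig +short TXT example.com
    dig +short TXT mail._domainkey.example.com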

Authentication

This may be our biggest pain point. Kerberos is huge now that we're
moving into the Hadoop ecosystem. There are the basics - setting up a KDC,
using LDAP as a backend to the KDC and also using LDAP to provide user and
system public X.509 keys, basic kadmin, what the / means in a principal
name, what user impersonation means, how HDFS can support hdfs/_HOST@REALM
instead of a specific host principal, etc. Some of our customers are
moving to 'kerberos everywhere' policies, so even servers that didn't
traditionally use Kerberos are starting to now. That means traditional
databases, ssh, etc.
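
To give a feel for the level I mean, a rough MIT Kerberos sketch (the
realm, hostname, and paths are placeholders):

    # Create a service principal and extract its keytab. The "/" separates
    # the service name from the instance (a hostname, or _HOST as a
    # wildcard that Hadoop daemons substitute with the local hostname).
    kadmin.local -q 'addprinc -randkey hdfs/node1.example.com@EXAMPLE.COM'
    kadmin.local -q 'ktadd -k /etc/hdfs.keytab hdfs/node1.example.com@EXAMPLE.COM'
    # Sanity checks:
    klist -kt /etc/hdfs.keytab
    kinit -kt /etc/hdfs.keytab hdfs/node1.example.com@EXAMPLE.COM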

OAuth is a second pain point when integrating with many web services.
Part of that is because OAuth is a very loose standard and every provider
is different, but part of it is the need to maintain a public callback
point, to know how to refresh a token once it expires, etc. There could
easily be a question on setting up your own OAuth system, configuring
servers to accept it, and supporting clients that are programs, not
people, so you have fewer options when authenticating to the third party
that vouches for you.
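
The refresh flow itself is mechanical enough to test on. Roughly, with
curl - the endpoint and credentials are placeholders, and real providers
differ on exactly these details:

    # OAuth2 refresh-token grant (RFC 6749); the response is JSON containing
    # a new access_token and sometimes a replacement refresh_token.
    curl -s https://auth.example.com/oauth2/token \
        -d grant_type=refresh_token \
        -d refresh_token="$REFRESH_TOKEN" \
        -d client_id="$CLIENT_ID" \
        -d client_secret="$CLIENT_SECRET"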

There's still a need for an LPIC-3 specialization, but it's impossible to
do much of our work without the ability to work with Kerberos and OAuth.

AWS, Docker, Kubernetes

Many companies now use AWS extensively or even exclusively. You don't
want to duplicate the AWS certs, but there's a good chance that the new
system you need to add a server to is running on EC2 or in a virtual
environment instead of on traditional hardware. That moots many of the
issues in 201 and 202, since you don't control the hardware, you don't
control the IP address allocation (DHCP), and you probably don't control
the routing via the traditional methods. Many sites will use Route 53
instead of running their own DNS server - they'll need to know how to set
up zone files but won't need to know the details of BIND.
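
In other words, something like this stays fair game while named.conf
details don't (the zone id and addresses are placeholders):

    # The BIND-style records people still need to be able to read and write:
    #   www.example.com.  300  IN  A      203.0.113.10
    #   example.com.      300  IN  MX 10  mail.example.com.
    # The Route 53 equivalent via the AWS CLI:
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z123EXAMPLE \
        --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
          {"Name":"www.example.com","Type":"A","TTL":300,
           "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'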

In our dev work we usually use EC2 instances, but some devs use a
VirtualBox instance, and our soft goal is Kubernetes with a relatively
small set of base Docker images and final configuration via Ansible. The
k8s cluster may be on local hardware or EC2 instances - the sysadmin will
need to know how to maintain each, but the devs won't care.
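
The sysadmin-facing surface is roughly the same either way. A sketch, with
the node and file names as placeholders:

    kubectl get nodes -o wide                # is the cluster healthy?
    kubectl drain node1 --ignore-daemonsets  # take a node out for maintenance
    kubectl uncordon node1                   # put it back in rotation
    kubectl apply -f deployment.yml          # roll out the dev team's images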

I know there's an LPIC-3 test on this, but again I would argue that these
technologies are now so common that any senior sysadmin should have as
much confidence setting up these environments as setting up an environment
in a data center, when given the configuration details by an LPIC-3 type
person.

Puppet, Chef, Ansible, Salt

With scalable solutions, sysadmin tasks have to be automated - AWS etc.
can automatically add or remove hundreds or thousands of servers. You
can't manage them manually, and even if you tried you would make errors.
Hence the initial push for these solutions. Once you've done that, it
naturally extends to all of your sysadmin tasks.

That's my personal pain point as we start to integrate the work. With
long-lived instances I can do everything manually. With a solution where
we might bring up an instance for less than an hour in order to run some
tests, it has to be automated. In our case we use Ansible. I expect the
final process will be doing it once manually, to understand all of the
variables, then capturing it in an Ansible playbook.
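
As a rough sketch of what "capturing it" means - the packages, template,
and inventory are placeholders for whatever the manual pass turns up:

    # Record the manual steps as a playbook, then run it against the
    # short-lived instances.
    cat > bootstrap.yml <<'EOF'
    - hosts: test_instances
      become: true
      tasks:
        - name: install the packages we installed by hand
          package:
            name: [chrony, openjdk-11-jre-headless]
            state: present
        - name: push the config file we edited by hand
          template:
            src: app.conf.j2
            dest: /etc/app/app.conf
    EOF
    ansible-playbook -i inventory.ini bootstrap.yml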

I know that you can create your own base images for EC2 and specify some
commands to run at startup. That's great if you're pure AWS, but it
doesn't help if you're using a mix of EC2 and virtual machines, so I would
put that in an LPIC-3 topic or leave it to the AWS certs.
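
For contrast, the AWS-only version is roughly this - the AMI id and
first-boot script are placeholders:

    # Launch from a custom base image with a user-data script that runs
    # once at first boot.
    aws ec2 run-instances \
        --image-id ami-0abc123example \
        --instance-type t3.small \
        --user-data file://first-boot.sh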

I hope this doesn't sound too much like sour grapes after my low test
scores - I know I know my stuff; it's just a matter of figuring out the
gaps between that and the requirements of a sysadmin in a traditional data
center. I just wanted to give you the perspective of someone who needs to
be able to independently set up a large variety of servers but relies on
others for the platform itself.

Bear Giles
_______________________________________________
lpi-examdev mailing list
[email protected]
http://list.lpi.org/cgi-bin/mailman/listinfo/lpi-examdev
