grg <> noted:
> these computer systems which are failing and grounding the
> carriers are actually the same ones they've been using for decades.  I
> presume you're referring to the high profile ground stops for United,
> Delta, and American in the last few years?  Running on SHARES, Deltamatic,
> and SABRE respectively, as they (or in UA's case, Continental Airlines)
> have been running basically forever, on the same TPF mainframes.  I
> would argue this is an example which goes directly against [Kent's]
> point -- for more robustness they should have moved to more modern
> systems, and their reluctance to do so is causing major failures.

Well said, and it's an example from that same industry I used to work in. I
fled that industry after a few months at Volpe Center in Cambridge, working on
a project that I called "a subversive plot to introduce Linux into high levels
of the US Government". While that wasn't my intent at the time (I just needed
more dev systems for my team), that's essentially what happened. The ETMS
system which schedules airplane landing slots went into service in 1986 on
Apollo Domain. In '98 I helped their project switch from Apollo to HP-UX.
The code was originally written in Pascal and was converted to C using
scripts. The project was re-launched on HP-UX in '99, and then migrated onto
Linux sometime before 2007. Based on that history, and on the government's
inability to install or replace complex systems at reasonable cost, I'll
wager the code I worked on in '98 is still in production.
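As an aside on that Pascal-to-C step: conversion by scripts is mostly
mechanical, rule-driven rewriting of the source text. A toy sketch of that
idea in Python (the rules here are hypothetical illustrations; I don't have
the original scripts):

```python
import re

# Hypothetical rewrite rules, just to illustrate the mechanical style of
# script-driven Pascal-to-C conversion; not the actual Volpe scripts.
RULES = [
    (re.compile(r"\bbegin\b"), "{"),  # Pascal block open  -> C brace
    (re.compile(r"\bend\b"), "}"),    # Pascal block close -> C brace
    (re.compile(r":="), "="),         # Pascal assignment  -> C assignment
]

def pascal_to_c(line: str) -> str:
    """Apply each rewrite rule in order to one line of Pascal source."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line

print(pascal_to_c("x := 42;"))  # x = 42;
print(pascal_to_c("begin"))     # {
```

Real conversion scripts of this kind handle far more (declarations, nested
procedures, I/O), but the principle is the same: textual rules applied
wholesale, which is why the converted C tends to read like Pascal for
decades afterward.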

Perhaps that's good, ahem, but these days I want to work on more-dynamic
projects. Aviation technology abroad, particularly in Asia, is starting to
run circles around what we have here in the United States. A lot of those
passenger annoyances and fees we face here aren't even contemplated by
businesses which have moved on from obsolete technologies.

It's not all uniformly good, of course; today's CS graduates are less likely
to know the low-level architecture and algorithms that we all knew in the past
when we worked to optimize systems. But at least now I'm at a company where
we've got teams that build high-perf software (in C++), we're known for doing
it at petabyte-volume (daily!), and we scale it up on a combination of
hardware we own (a couple thousand bare-metal machines in a data center) and
on AWS infrastructure (we spend nine figures annually on our AWS bill). Some of what
we deploy internally is sloppy/unreliable like what y'all have been arguing
here, which limits the pace of our new-feature deployments in painful ways,
but what the customers see is far and away more robust than in years past.


Discuss mailing list