Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-25 Thread Joel jaeggli
On 11/25/11 12:02 , Jay Hennigan wrote:
> On 11/25/11 11:34 AM, Joel jaeggli wrote:
> 
>> Cars generically cause a lot more deaths than faulty traffic
>> controllers: 13.2 per 100,000 population in the US annually.
> 
> The cars don't (often) cause them.  The drivers do.  Yes, there are the
> rare mechanical failures but the most likely cause is wetware.  Ditto
> airplane crashes.  A mild example:

While they may well have otherwise been run over by an oxcart in the
absence of automobiles, if they weren't behind the wheel of a complex
2-ton machine there would be no accident.

> http://www.ntsb.gov/aviationquery/brief.aspx?ev_id=20001212X18632
> 
> --
> Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
> Impulse Internet Service  -  http://www.impulse.net/
> Your local telephone and internet company - 805 884-6323 - WB6RDV
> 




Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-25 Thread Jay Hennigan
On 11/25/11 11:34 AM, Joel jaeggli wrote:

> Cars generically cause a lot more deaths than faulty traffic
> controllers: 13.2 per 100,000 population in the US annually.

The cars don't (often) cause them.  The drivers do.  Yes, there are the
rare mechanical failures but the most likely cause is wetware.  Ditto
airplane crashes.  A mild example:

http://www.ntsb.gov/aviationquery/brief.aspx?ev_id=20001212X18632

--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-25 Thread Joel jaeggli
On 11/22/11 08:16 , Jay Ashworth wrote:
> - Original Message -
>> From: "Owen DeLong" 
> 
>> As in all cases, additional flexibility results in additional ability
>> to make mistakes. Simple mechanical lockouts do not scale to the
>> modern world. The benefits of these additional capabilities far
>> outweigh the perceived risks of programming errors.
> 
> The perceived risk in this case is "multiple high-speed traffic fatalities".
> 
> I believe we rank that pretty high; it's entirely possible that a traffic
> light controller is the most potentially dangerous artifact (in terms of 
> number of possible deaths) that the average citizen interacts with on a 
> daily basis.

Cars generically cause a lot more deaths than faulty traffic
controllers: 13.2 per 100,000 population in the US annually.
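
For scale: assuming a US population of roughly 310 million (a round number I'm
plugging in here, not an official figure), that rate works out to something
like 41,000 deaths a year:

    # back-of-the-envelope arithmetic for a 13.2 per 100,000 annual rate
    rate_per_100k = 13.2
    population = 310_000_000      # assumed, approximate US population
    print(round(rate_per_100k / 100_000 * population))   # 40920, i.e. ~41k/year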



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Thomas Maufer


I have to jump in on this thread. Traffic light controllers are a fun category
of technical artifacts. The weatherproof boxes that the relays used to live in
have stayed the same size for decades, but now the controller is just a teeny
tiny circuit board rattling around in that comparatively huge box. And it's
full of software, dontcha know? So why not have lots of newfangled features?
Curiously, the people who make the insides of the box have a WHOLE DIFFERENT
way of thinking about what a traffic light controller should do - the
"insider" people are in the 21st century, while the "outsider" people are in
the early 20th century. Lemme splain.

A particular traffic light controller that I tested in 2007 had an FTP server
inside it. I have no idea why. So I tried fuzzing it. Five minutes into the
test, it aborted because the DUT wouldn't restart anymore. Upon investigation,
we discovered that a particular FTP sequence had triggered a bug with a
rather unfortunate (side-)effect: the flash file system of the traffic light
controller was formatted or erased. As a bonus, the device had also crashed
and was awaiting a ZMODEM file download, since it didn't have a boot image any
more. We couldn't test anything else because we didn't have the special serial
cable to (re-)install the OS. Fail-safe? Not hardly: not when it has no
software! It's a lump of highly refined sand in a plastic case.
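
The fuzzing idea is simple enough to sketch. Something like the toy below (the
host address, command list, and sizes are all invented here, and this is
emphatically not the actual sequence that wiped that device) just throws
oversized or garbage arguments at the FTP command channel and flags the moment
the box stops answering:

    import random
    import socket
    import string

    HOST, PORT = "192.0.2.10", 21   # placeholder address, not any real device

    COMMANDS = ["USER", "PASS", "CWD", "RETR", "STOR", "LIST", "MKD", "DELE", "SITE"]

    def random_argument(max_len=2000):
        # oversized or garbage arguments are the classic way to shake out parser bugs
        length = random.randint(1, max_len)
        return "".join(random.choice(string.printable) for _ in range(length))

    def fuzz_once():
        """Send one mangled command; return the reply, or None if the box went quiet."""
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                s.recv(1024)                                    # banner
                line = f"{random.choice(COMMANDS)} {random_argument()}\r\n"
                s.sendall(line.encode("ascii", "replace"))
                return s.recv(1024)
        except (socket.timeout, ConnectionError):
            return None

    if __name__ == "__main__":
        for i in range(1000):
            if fuzz_once() is None:
                print(f"no response after test case {i}: go look at the DUT")
                break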

There are many lessons here, not least of which is: Ship the device with the 
smallest possible attack surface! Why the heck was FTP enabled? Clearly this 
device had never been subjected to any negative testing. And these devices are 
meant to be networked, so that FTP bug will be tickled someday; I just don't 
know when. Yes, it was reported to the vendor, and no, I have no idea if they 
ever fixed it.

Also, in this thread I have seen several references to "fail-safe" or 
"redundancy" features. In my experience, those are often some of the weakest 
aspects of some systems. In one case, my testing rendered a 
multi-million-dollar highly redundant VoIP soft switch useless by constantly 
causing the primary to fail - and while the secondary was being activated, 
there was a quiet period of 2-3 seconds during which time no calls went 
through. Shortly after the secondary had become the primary, it failed again, 
continuing the cycle. Literally traffic amounting to one packet (about 100 
bytes, IIRC) per second of carefully crafted SIP INVITEs could make this switch 
completely useless. The bug I found involved SIP INVITE messages that could not 
be filtered…unless you didn't want to accept VoIP phone calls at all, which 
calls into question your purchase of the multi-million-dollar highly redundant 
soft switch. That bug was fixed.

Software is tricky stuff. The number of ways it can fail is practically 
infinite, but there is generally only a small number of ways for it to work 
correctly. Networked software is particularly challenging to write because the 
software engineers don't get to control their inputs. The intervening network 
can (does) fold, spindle, mutilate, truncate, drop, reorder, or duplicate 
packets, and your code on the receiving end has to try to understand what was 
intended by the sender. Oh, and the sender might be following an older version 
of the standard (if one even exists) or simply have included some bugs of their 
own. Because the coders are so focused on making their code do what the MRD/PRD 
required - on a tight schedule! - they have little time to imagine all the 
possible ways their code might fail. Their error-handling routines are simply 
never imaginative enough to handle real-world brokenness. It *is* possible to 
test this stuff, but time pressures in release schedules don't leave a lot of 
breathing room for developers to take on whole new classes of tasks that are 
outside their expertise (security testing). So you end up with a traffic light 
controller that erases its own flash file system when it receives a slightly 
strange but completely legal FTP command, or a highly redundant VoIP soft 
switch that is only good at ping-ponging from primary to secondary CPUs. Don't 
even get me started on problems I have found in carrier-class routers.
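
To make that concrete with a toy example (invented framing, not any particular
protocol): a receive path that survives real-world input treats every flavor
of "the sender didn't do what I expected" as an explicit, decided-in-advance
case rather than an afterthought.

    import struct

    MAX_PAYLOAD = 4096   # arbitrary cap for this sketch

    def parse_frame(buf: bytes):
        """Parse one [2-byte length][payload] frame from a byte buffer, defensively.

        Returns (payload, leftover) when a whole frame is present,
        (None, buf) when more data is needed, and raises ValueError on
        input that can never become a valid frame.
        """
        if len(buf) < 2:
            return None, buf                       # truncated header: wait for more bytes
        (length,) = struct.unpack_from("!H", buf)
        if length > MAX_PAYLOAD:
            raise ValueError("advertised length exceeds protocol maximum")
        if len(buf) < 2 + length:
            return None, buf                       # truncated payload: wait for more bytes
        return buf[2:2 + length], buf[2 + length:]

Nothing clever, but every branch is a decision about broken input that someone
had to imagine ahead of time, which is exactly the work the schedule usually
squeezes out.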

I don't need to name names: All software has bugs (except possibly the code in 
the main computers on the Space Shuttle). Every engineer I have ever known has 
tried to write their code well, but automated negative testing has only 
recently caught up to where the engineers and QA staff can focus on what they 
do best (write and test code that implements features that someone can buy), 
and let purpose-built tools do the negative testing for them, so their 
error-handling routines can be robust, too. Fixing bugs is generally 
straightforward. Finding them has always been the challenge.

~tom





Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Brett Frankenberger
On Wed, Nov 23, 2011 at 05:45:08PM -0500, Jay Ashworth wrote:
> 
> Yeah.  But at least that's stuff you have a hope of managing.  "Firmware
> underwent bit rot" is simply not visible -- unless there's, say, signature 
> tracing through the main controller.

I can't speak to traffic light controllers directly, but at least some
vital logic controllers do check signatures of their firmware and
programming and will fail into a safe configuration if the
signatures don't validate.
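
In rough sketch form the boot-time check amounts to the following (a bare hash
where a real vital system would use an actual signature scheme, and every file
name, digest, and action function below is a made-up placeholder):

    import hashlib

    # Known-good digests, assumed to live in write-protected storage.
    # The hex strings here are placeholders, not real values.
    EXPECTED = {
        "firmware.bin": "0123456789abcdef...placeholder...",
        "intersection.cfg": "fedcba9876543210...placeholder...",
    }

    def image_ok(path: str) -> bool:
        try:
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
        except OSError:
            return False                 # missing or unreadable image fails closed
        return digest == EXPECTED[path]

    def enter_safe_state():
        print("validation failed: dropping to the safe configuration")  # stand-in

    def run_signal_program():
        print("validation passed: starting normal operation")           # stand-in

    if __name__ == "__main__":
        if all(image_ok(p) for p in EXPECTED):
            run_signal_program()
        else:
            enter_safe_state()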

 -- Brett



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Jay Hennigan
On 11/23/11 3:38 PM, Jay Ashworth wrote:

> Yes: but as Don Norman would ask: *where was the failure here*?  You can't
> blame all of it on the field tech, even though he had the Last Clear Chance
> to avoid it, if the rest of the system wasn't designed to help protect him
> (procedures, labeling, packaging, etc...). 

It, as with most cases of Things That Go Horribly Wrong (tm), was not *a*
failure but a series of them, none of which by itself would have been
particularly significant.

> I don't suppose that made the news, since there wasn't an actual collision?

Not outside of the Public Works and Risk Management Departments, but it
was pretty big news there.

The incident resulted in a 100% city-wide audit of all controller and
conflict monitor programming by a two-person team, as well as a new
procedure: every conflict monitor board gets a distinctively colored
label with the name of the intersection, the date it was programmed,
the name of the person who programmed it, and the name of the person
who inspected the programming.

--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Jay Ashworth
- Original Message -
> From: "Jay Hennigan" 

> A somewhat inexperienced technician arrived on scene, rebooted the
> controller, and it went back to factory defaults, which are N/S vs. E/W.
> Had the conflict monitor (a circuit board with a diode array, hardware -
> not software) been correctly programmed for that intersection, it
> would have kicked back to flash. No problem.
> 
> But it wasn't.
> 
> And because the left turn arrows were hard-wired in the signal heads
> to the same wire as the solid green phase, there was a conflict.

Oops.

> Fortunately the technician heard the blaring horns and witnessed a
> couple of near-misses before an accident occurred. He put the
> intersection back on flash, dug out the print for the conflict monitor
> and programming, called for help, and got it fixed.

IME, the near miss count is enough higher than the accident count (that
I see; about 10:1 or more) to actually give me some faith in drivers.  ;-)

> Normally sane defaults in a non-standard configuration, sloppy
> procedures, and human error coupled with a failure.

Yes: but as Don Norman would ask: *where was the failure here*?  You can't
blame all of it on the field tech, even though he had the Last Clear Chance
to avoid it, if the rest of the system wasn't designed to help protect him
(procedures, labeling, packaging, etc...). 

> From a practical standpoint it is difficult for one person to observe
> more than one or possibly two phases, especially from the location of
> the controller which is typically placed a few feet away from the
> corner so that it gets run over less frequently.

This is actually easier these days, since they've started hanging a "red 
light on" bulb of about 25 watts *under* one fixture in each direction. 

> >> As such, I'd say that the probability of a conflicting green occurring
> >> and causing an injury accident is pretty low even with (relatively)
> >> modern digital signal controllers.
> >
> > Yup, it does appear that's true.
> 
> But it happens.

I sort've thought it might.

I don't suppose that made the news, since there wasn't an actual collision?

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Jay Hennigan
On 11/23/11 2:52 PM, Jay Ashworth wrote:

> Well, sure: what's the *incidence* of conflicting greens?
> 
> I wasn't suggesting that the incidence of accidents would be any different
> between conflicting greens and other types of failures (though my intuition
> is that it would be higher), but that's swamped by how often the condition
> actually occurs, which appears to require someone physically running a
> truck into the control box, or a chain of 5 or 6 failures in cascade to 
> occur, based on other postings on this thread.

Real-world scenario that actually happened:

There is an intersection where a majority of the E/W traffic makes left
turns to N/S.  The signal there has three phases: N/S solid green, East
solid green with left arrow (protected left turn), and West solid green
with left arrow.  East and West are never green simultaneously, as this
would be a conflict due to the full-phase protected left turns.

At some time unknown the controller was replaced and a stock N/S vs. E/W
conflict monitor wound up in the box.  Nobody owned up to this. (Human
error, sloppy procedure, and lack of audit trail.)

The programming of the controller was OK, however, and the intersection
ran just fine.

Time passed, probably months.  Something glitched the controller and it
crashed.  This also put the intersection into four-way flash.

A somewhat inexperienced technician arrived on scene, rebooted the
controller, and it went back to factory defaults, which are N/S vs. E/W.
Had the conflict monitor (a circuit board with a diode array, hardware -
not software) been correctly programmed for that intersection, it would
have kicked back to flash.  No problem.

But it wasn't.

And because the left turn arrows were hard-wired in the signal heads to
the same wire as the solid green phase, there was a conflict.
Fortunately the technician heard the blaring horns and witnessed a
couple of near-misses before an accident occurred.  He put the
intersection back on flash, dug out the print for the conflict monitor
and programming, called for help, and got it fixed.

Normally sane defaults in a non-standard configuration, sloppy
procedures, and human error coupled with a failure.

From a practical standpoint it is difficult for one person to observe
more than one or possibly two phases, especially from the location of
the controller which is typically placed a few feet away from the corner
so that it gets run over less frequently.


>> As such, I'd say that the probability of a conflicting green occurring
>> and causing an injury accident is pretty low even with (relatively)
>> modern digital signal controllers.
> 
> Yup, it does appear that's true.

But it happens.

--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Jay Ashworth
- Original Message -
> From: "Owen DeLong" 

> >> but that's not the only risk. When the traffic
> >> signal is failing, even if it's failing with dark or red in every
> >> direction, the intersection becomes more dangerous. Not as
> >> dangerous as conflicting greens,
> >
> > By 2 or 3 orders of magnitude, usually; the second thing they teach
> > you in driver ed is "a dark traffic signal is a 4-way stop".
> 
> I'm not so sure that's true. (The 2-3 orders of magnitude part). When
> I worked ambulance, we responded to a lot more collisions in 4-way
> stop intersections and malfunctioning (dark or flashing red) signal
> intersections than we did in intersections with conflicting greens. A
> whole lot more, like none of the conflicting greens and many of the
> others.

Well, sure: what's the *incidence* of conflicting greens?

I wasn't suggesting that the incidence of accidents would be any different
between conflicting greens and other types of failures (though my intuition
is that it would be higher), but that's swamped by how often the condition
actually occurs, which appears to require someone physically running a
truck into the control box, or a chain of 5 or 6 failures in cascade to 
occur, based on other postings on this thread.

> As such, I'd say that the probability of a conflicting green occurring
> and causing an injury accident is pretty low even with (relatively)
> modern digital signal controllers.

Yup, it does appear that's true.

> >>but more dangerous than a properly operating
> >> intersection. If we can eliminate 1000 failures without conflicting
> >> greens, at the cost of one failure with a conflicting green, it
> >> might be a net win in terms of safety.
> >
> > The underlying issue is trust, as it so often is. People assume (for
> > very good reason) that crossing greens is completely impossible. The
> > cost of a crossing-greens accident is *much* higher than might be
> > imagined; think "new Coke".
> 
> Sorry, I have trouble understanding how you draw a parallel between a
> crossing greens accident and new coke.
> 
> Yes, people assume a crossing greens situation is completely
> impossible. People assume a lot of very unlikely things are completely
> impossible. Many people think that winning the lottery is completely
> impossible for them. A fraction of those people choose not to play on
> that basis, rendering that belief basically true. Even with modern
> software-controlled signaling, crossing greens events are extremely
> uncommon. So much so that I have never actually encountered one.

Me neither.

This does not forbid me from speculating on it. :-)

> I will say that the relative complexity of configuring the software
> systems vs. wiring a relay based system to correctly protect a modern
> complex intersection would make the relay system inherently
> significantly less likely to have completely protected logic. In fact,
> it might even be electrically impossible to completely protect the
> logic in some modern intersection configurations because they don't
> make relays with that many poles.

That's a possibility, certainly.  It seems an interesting master's project
for an electrical engineer.  How many zeros can you get into the p number?
 
> Conversely, the software configuration interface is pretty well
> abstracted to the level of essentially describing the intersection in
> terms of source/destination pairs and paths crossed by each pair.
> Short of a serious bug in the overall firmware or the configuration
> compiler (for lack of a better term), I'd say that such gross errors
> in the configuration of the conflict monitor are pretty unlikely.
> Indeed, the history of traffic light malfunctions with digital
> controllers would seem to bear this out. The safety record appears to
> be pretty good.

Yes, but I was aiming more for failure conditions than mis-programming
conditions.
 
> So rare, in fact, that traffic light malfunctions do not appear in a
> list of traffic accident causes that totaled more than 99% of traffic
> accidents when I added up the percentages. I can only assume that
> since light malfunctions overall are not a statistically significant
> fraction of accidents, conflicting greens must represent an even
> smaller and more insignificant fraction.

No kidding.  That's pleasant to hear.
 
Cheers,
- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Jay Ashworth
> Within each intersection controller is a PC board with a diode matrix
> called a "conflict monitor". It has inputs from all of the green and
> yellow phases including pedestrian walk signals, turn arrows, etc.
> 
> It's the job of the traffic engineer installing the system to program
> the conflict monitor for that intersection. By default they're
> programmed for a simple North-South vs. East-West intersection of
> two-way streets with pedestrian controls. If anything different, the
> conflict monitor is reprogrammed in the field to match the
> intersection.
> 
> In the event of a conflict, defined as green, yellow or walk signals
> that would cause conflicting traffic being allowed, the conflict monitor
> forces the intersection into red flashing in all directions and
> disconnects control from the microprocessor until manually reset
> on-site. If networked, it also sends a conflict alarm. If the
> conflict monitor is removed, the intersection goes to flash.

So, while "flash" isn't the default condition, which the controller is
taken *out* of by the conflict monitor, that monitor is at least *static
logic*, with essentially no moving parts?  I can live with that, I guess.

> Conflicting green is only possible if the conflict monitor is
> mis-programmed or the external connections to the signal heads are
> mis-wired. Even a short-circuit in the external wiring between two
> green phases would be detected unless the feed wires of the
> conflicting phases are cut to the signal box.

Got it.

> In the real world, "Stuff happens". Trucks cut corners and turn the
> traffic heads to point the wrong way. Controllers get replaced with a
> stock unit after a failure or accident knocking down the signal box
> without being properly set up for that intersection.

Yeah.  But at least that's stuff you have a hope of managing.  "Firmware
underwent bit rot" is simply not visible -- unless there's, say, signature 
tracing through the main controller.

> But, an external cracker even with full access won't be able to cause
> a conflict. Massive traffic jams by messing with the timing, short or
> long cycles, etc. but not a conflict.

Which is what I was hoping for: a failure might cause that, but an attack
has to be a) local and b) fairly knowledgeable.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Robert E. Seastrom

Mark Radabaugh  writes:

> On 11/23/11 11:23 AM, valdis.kletni...@vt.edu wrote:
>> On Wed, 23 Nov 2011 11:14:34 EST, Bryan Fields said:
>>> So really all a hacker needs is a pair of dykes, some electrical tape,
>>> and an all black jumpsuit.
>> Actually, you want a really dark blue jumpsuit.  All-black creates a
>> silhouette in all but the very darkest conditions.
> White service truck, orange cone, jeans, heavy cotton shirt and an
> orange vest in the middle of the day is far less noticeable.

Don't forget the hard hat.  Aluminum forms clipboard and Nextel
phone complete the outfit but are optional.

-r




Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Mark Radabaugh

On 11/23/11 11:23 AM, valdis.kletni...@vt.edu wrote:
> On Wed, 23 Nov 2011 11:14:34 EST, Bryan Fields said:
>> So really all a hacker needs is a pair of dykes, some electrical tape, and an
>> all black jumpsuit.
>
> Actually, you want a really dark blue jumpsuit.  All-black creates a
> silhouette in all but the very darkest conditions.

White service truck, orange cone, jeans, heavy cotton shirt and an
orange vest in the middle of the day is far less noticeable.



--
Mark Radabaugh
Amplex

m...@amplex.net  419.837.5015




Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Valdis . Kletnieks
On Wed, 23 Nov 2011 11:14:34 EST, Bryan Fields said:
> So really all a hacker needs is a pair of dykes, some electrical tape, and an
> all black jumpsuit.

Actually, you want a really dark blue jumpsuit.  All-black creates a
silhouette in all but the very darkest conditions.




Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Bryan Fields
On 11/22/2011 23:29, Jay Hennigan wrote:
> But, an external cracker even with full access won't be able to cause a
> conflict.  Massive traffic jams by messing with the timing, short or
> long cycles, etc. but not a conflict.

So really all a hacker needs is a pair of dykes, some electrical tape, and an
all black jumpsuit.  At 3 am pry open the box and go to work.  Bet if they
trained at it they could be done in under 5 min.

Thank god it's so much easier to just shoot the lights out if you want to be a
vandal.

-- 
Bryan Fields

727-409-1194 - Voice
727-214-2508 - Fax
http://bryanfields.net



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Jay Hennigan
On 11/22/11 8:16 AM, Jay Ashworth wrote:
> - Original Message -
>> From: "Owen DeLong" 
> 
>> As in all cases, additional flexibility results in additional ability
>> to make mistakes. Simple mechanical lockouts do not scale to the
>> modern world. The benefits of these additional capabilities far
>> outweigh the perceived risks of programming errors.
> 
> The perceived risk in this case is "multiple high-speed traffic fatalities".
> 
> I believe we rank that pretty high; it's entirely possible that a traffic
> light controller is the most potentially dangerous artifact (in terms of 
> number of possible deaths) that the average citizen interacts with on a 
> daily basis.

I'm familiar with this.  Modern Safetran traffic light controllers are
indeed microprocessor-based and networked for time sync, although they
can also use local GPS.  The network is typically radio or twisted-pair
modem.  McCain, BiTran, etc. are similar.

The master controllers do run IP, so the risk is there that they can be
either deliberately or accidentally exposed to the Internet.  Before
this they typically had a dial-up modem and could be accessed by anyone
war-dialing with a VT-100 emulator and some password guessing.  Many are
still this way.

Within each intersection controller is a PC board with a diode matrix
called a "conflict monitor".  It has inputs from all of the green and
yellow phases including pedestrian walk signals, turn arrows, etc.

It's the job of the traffic engineer installing the system to program
the conflict monitor for that intersection.  By default they're
programmed for a simple North-South vs. East-West intersection of
two-way streets with pedestrian controls.  If anything different, the
conflict monitor is reprogrammed in the field to match the intersection.

In the event of a conflict, defined as green, yellow or walk signals
that would cause conflicting traffic being allowed, the conflict monitor
forces the intersection into red flashing in all directions and
disconnects control from the microprocessor until manually reset
on-site.   If networked, it also sends a conflict alarm.  If the
conflict monitor is removed, the intersection goes to flash.

Conflicting green is only possible if the conflict monitor is
mis-programmed or the external connections to the signal heads are
mis-wired.  Even a short-circuit in the external wiring between two
green phases would be detected unless the feed wires of the conflicting
phases are cut to the signal box.
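
In software terms, the diode matrix is just a table of output pairs that must
never be energized at the same time, checked continuously.  A toy model (the
channel names and the conflict table below are invented for illustration; the
real board is dumb per-intersection hardware doing exactly this, which is the
point):

    # Toy model of a conflict monitor. Channel names and the conflict table
    # are invented; a real board is programmed for its specific intersection.
    CONFLICTING_PAIRS = {
        frozenset({"NS_GREEN", "EW_GREEN"}),
        frozenset({"NS_GREEN", "EW_LEFT_ARROW"}),
        frozenset({"NS_WALK", "EW_GREEN"}),
    }

    def monitor(active_outputs):
        """Return the action the monitor takes for the currently energized outputs."""
        for pair in CONFLICTING_PAIRS:
            if pair <= active_outputs:
                # the real hardware latches here: all-red flash, controller
                # disconnected until someone resets it on-site
                return "CONFLICT: force all-red flash and lock out the controller"
        return "OK"

    if __name__ == "__main__":
        print(monitor({"NS_GREEN", "NS_WALK"}))         # OK
        print(monitor({"NS_GREEN", "EW_LEFT_ARROW"}))   # conflict -> flash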

In the real world, "Stuff happens".  Trucks cut corners and turn the
traffic heads to point the wrong way.  Controllers get replaced with a
stock unit after a failure or accident knocking down the signal box
without being properly set up for that intersection.

But, an external cracker even with full access won't be able to cause a
conflict.  Massive traffic jams by messing with the timing, short or
long cycles, etc. but not a conflict.

--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV



Re: OT: Traffic Light Control

2011-11-22 Thread Robert Bonomi

On Tue, Nov 22, 2011 at 02:26:34PM -0500, Jay Ashworth wrote:
>
> > Some other things to consider.
> > 
> > Relays are more likely to fail. Yes, the relay architecture was
> > carefully designed such that most failures would not result in
> > conflicting greens, 
> 
> My understanding was that it was completely impossible.  You could 
> fail dark, but you *could not* fail crossing-green.

Just to put one point to rest.

I, personally, have witnessed traffic lights showing 'green both directions'.
*TWICE*.  One was in the mid-1960s, with what was undoubtedly relay-based 
control logic; the second was in the late 1990s, *probably* with solid-state
'management' controls, but I don't know for certain.  (The 'relatively
recent' units I've seen the insides of have solid-state logic driving final
'output' relays that provide power to the actual signal head.)

In the first case, the pedestal-mounted control unit had been subjected to
excessive impact forces, and some of the 'output' wires had shorted together.





Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Owen DeLong
> 
>> but that's not the only risk. When the traffic
>> signal is failing, even if it's failing with dark or red in every
>> direction, the intersection becomes more dangerous. Not as dangerous
>> as conflicting greens, 
> 
> By 2 or 3 orders of magnitude, usually; the second thing they teach you
> in driver ed is "a dark traffic signal is a 4-way stop".
> 

I'm not so sure that's true. (The 2-3 orders of magnitude part). When I worked 
ambulance, we responded to a lot more collisions in 4-way stop intersections 
and malfunctioning (dark or flashing red) signal intersections than we did in 
intersections with conflicting greens. A whole lot more, like none of the 
conflicting greens and many of the others.

As such, I'd say that the probability of a conflicting green occurring and 
causing an injury accident is pretty low even with (relatively) modern digital 
signal controllers.

>>but more dangerous than a properly operating
>> intersection. If we can eliminate 1000 failures without conflicting
>> greens, at the cost of one failure with a conflicting green, it might
>> be a net win in terms of safety.
> 
> The underlying issue is trust, as it so often is.  People assume (for
> very good reason) that crossing greens is completely impossible.  The
> cost of a crossing-greens accident is *much* higher than might be
> imagined; think "new Coke".
> 

Sorry, I have trouble understanding how you draw a parallel between a crossing 
greens accident and new coke.

Yes, people assume a crossing greens situation is completely impossible. People 
assume a lot of very unlikely things are completely impossible. Many people 
think that winning the lottery is completely impossible for them. A fraction of 
those people choose not to play on that basis, rendering that belief basically 
true. Even with modern software-controlled signaling, crossing greens events 
are extremely uncommon. So much so that I have never actually encountered one.

>> Modern intersections are often considerably more complicated than a
>> two phase "allow N/S, then allow E/W, then repeat" system. Wiring relays
>> to completely avoid conflict in that case is very complex, and,
>> therefore, more error prone. Even if a properly configured relay
>> solution is more reliable than a properly configured solid-state
>> conflict-monitor solution, if the relay solution is more likely to be
>> misconfigured, then there's not necessarily a net win.
> 
> Sure.  But we have no numbers on either side.
> 

I will say that the relative complexity of configuring the software systems vs. 
wiring a relay based system to correctly protect a modern complex intersection 
would make the relay system inherently significantly less likely to have 
completely protected logic. In fact, it might even be electrically impossible 
to completely protect the logic in some modern intersection configurations 
because they don't make relays with that many poles.

Conversely, the software configuration interface is pretty well abstracted to 
the level of essentially describing the intersection in terms of 
source/destination pairs and paths crossed by each pair. Short of a serious bug 
in the overall firmware or the configuration compiler (for lack of a better 
term), I'd say that such gross errors in the configuration of the conflict 
monitor are pretty unlikely. Indeed, the history of traffic light malfunctions 
with digital controllers would seem to bear this out. The safety record appears 
to be pretty good.
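
I don't know any vendor's actual file format, but the abstraction is easy to
picture: give each movement the set of regions its path sweeps through, and
the never-together pairs fall out mechanically.  A toy sketch (the movement
names and geometry are invented):

    from itertools import combinations

    # Invented geometry: the quadrants of the intersection box that each
    # movement's path sweeps through. Real tools have their own data model;
    # this only illustrates the "paths crossed by each pair" idea.
    MOVEMENTS = {
        "NB_through": {"SE", "NE"},
        "SB_through": {"NW", "SW"},
        "EB_through": {"SW", "SE"},
        "WB_left":    {"NE", "SW"},
    }

    def derive_conflicts(movements):
        """Any two movements whose paths share a region must never be green together."""
        return sorted(
            tuple(sorted((a, b)))
            for a, b in combinations(movements, 2)
            if movements[a] & movements[b]
        )

    if __name__ == "__main__":
        for pair in derive_conflicts(MOVEMENTS):
            print("conflict:", pair)
        # NB_through / SB_through never shows up: opposing throughs can run together.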

Such malfunctions are so rare, in fact, that they do not appear in a list of 
traffic accident causes that totaled more than 99% of traffic accidents when I 
added up the percentages. I can only assume that since light malfunctions 
overall are not a statistically significant fraction of accidents, conflicting 
greens must represent an even smaller and more insignificant fraction.

>> Cost is an object. If implementing a solid state controller is less
>> expensive (on CapEx and OpEx basis) than a relay-based controller, then
>> it might be possible to implement traffic signals at four previously
>> uncontrolled intersections, instead of just three. That's a pretty big
>> safety win.
> 
> See above about whether people trust green lights to be safe.
> 

People trust cars to be safe. What is your point?

Owen




Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Brett Frankenberger
On Tue, Nov 22, 2011 at 02:26:34PM -0500, Jay Ashworth wrote:
>
> Yes, but the complexity of a computerized controller is 3-6 orders of
> magnitude higher, *and none of it is visible*.
 
You can't see the electrons in the relays either.

> > Some other things to consider.
> > 
> > Relays are more likely to fail. Yes, the relay architecture was
> > carefully designed such that most failures would not result in
> > conflicting greens, 
> 
> My understanding was that it was completely impossible.  You could 
> fail dark, but you *could not* fail crossing-green.

If properly wired, maybe.  But probably not.  I'd have to see the
architecture, but, for example, is there any risk of a power surge at
the wrong time welding the green contacts together and resulting in a
permanent green in one direction that doesn't lock the other direction
out?  (Maybe not.  But I'm confident that some failure could be
contrived with a detailed explanation of a real system.)

> >  but that's not the only risk. When the traffic
> > signal is failing, even if it's failing with dark or red in every
> > direction, the intersection becomes more dangerous. Not as
> > dangerous as conflicting greens,
> 
> By 2 or 3 orders of magnitude, usually; the second thing they teach you
> in driver ed is "a dark traffic signal is a 4-way stop".

Of course, not everyone follows the rules.  People learn "red means
stop" well before driver ed, but sometimes they don't stop, even at a
red.

Traffic Signal Out cases are often a mess, because you have a
relatively complicated or busy intersection and a collection of
drivers, only 95% of whom (for example) actually know how to handle the
case, and even many of those 95% are very tentative because they know
that not all the other drivers know the rules.  The upside is that most
of the collisions that result are low speed.

> > but more dangerous than a properly operating
> > intersection. If we can eliminate 1000 failures without conflicting
> > greens, at the cost of one failure with a conflicting green, it might
> > be a net win in terms of safety.
> 
> The underlying issue is trust, as it so often is.  People assume (for
> very good reason) that crossing greens is completely impossible.  The
> cost of a crossing-greens accident is *much* higher than might be
> imagined; think "new Coke".

New Coke was imposed on all coke drinkers, though.  A better analogy is
airline plane crashes.  They are exceedingly rare, but when they
happen, almost everyone knows about them.  Yet people still fly.  Even
immediately after the crash.  (And a modern airplane is orders of
magnitude more complicated than a solid state conflict monitor.)

Conflicting greens are also exceedingly rare, and it's not nationwide
or worldwide news when they occur. 

If conflicting greens start occurring routinely, yeah, people are going
to lose confidence in the system.  But we could likely withstand a
couple orders of magnitude increase in the number of green-on-green
incidents without any meaningful reduction in confidence in traffic
signals.

Still, obviously, the point isn't to keep increasing the frequency of
conflicting green incidents until people start to lose confidence.  The
point is that there's no evidence of any meaningful increase in risk
with electronic controllers.

> > Modern intersections are often considerably more complicated than a
> > two phase "allow N/S, then allow E/W, then repeat" system. Wiring relays
> > to completely avoid conflict in that case is very complex, and,
> > therefore, more error prone. Even if a properly configured relay
> > solution is more reliable than a properly configured solid-state
> > conflict-monitor solution, if the relay solution is more likely to be
> > misconfigured, then there's not necessarily a net win.
> 
> Sure.  But we have no numbers on either side.

Yeah, and I looked.  There's nothing I could find.  But ... I'd be
shocked to find evidence of a statistically higher risk of conflicting
greens in electronic conflict-monitor implementations over relay-based
systems of comparable intersection complexity.

Vital electronics is a well-established industry.

We all work in a bug-of-the-week industry, where we demand more speed,
more features, and so on, and accept quite a bit of risk associated
with that.  Even the careful networks that nominally place a high value
on stability don't have a reliability comparable to a traffic signal or
an airplane.  But that doesn't mean reliable electronic systems can't
be built.  Just that you have to prioritize that over other things if
that's what you want.  And that's what the vital electronics industry
does.

 -- Brett



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Jay Ashworth
> Relay logic has the potential for programming (i.e. wiring) errors
> also.

Yes, but the complexity of a computerized controller is 3-6 orders of
magnitude higher, *and none of it is visible*.

> It's not fair to compare "conflict monitor" to "properly programmed
> relay logic". We either have to include the risk of programming
> failures (which means "improper wiring" in the case of relay logic) in
> both cases, or exclude programming failures in both cases.

See above, and note that there are at least a couple orders of magnitude 
more possible failure modes on a computerized controller as well.

> Some other things to consider.
> 
> Relays are more likely to fail. Yes, the relay architecture was
> carefully designed such that most failures would not result in
> conflicting greens, 

My understanding was that it was completely impossible.  You could 
fail dark, but you *could not* fail crossing-green.

>  but that's not the only risk. When the traffic
> signal is failing, even if it's failing with dark or red in every
> direction, the intersection becomes more dangerous. Not as dangerous
> as conflicting greens, 

By 2 or 3 orders of magnitude, usually; the second thing they teach you
in driver ed is "a dark traffic signal is a 4-way stop".

> but more dangerous than a properly operating
> intersection. If we can eliminate 1000 failures without conflicting
> greens, at the cost of one failure with a conflicting green, it might
> be a net win in terms of safety.

The underlying issue is trust, as it so often is.  People assume (for
very good reason) that crossing greens is completely impossible.  The
cost of a crossing-greens accident is *much* higher than might be
imagined; think "new Coke".
 
> Modern intersections are often considerably more complicated than a
> two phase "allow N/S, then allow E/W, then repeat" system. Wiring relays
> to completely avoid conflict in that case is very complex, and,
> therefore, more error prone. Even if a properly configured relay
> solution is more reliable than a properly configured solid-state
> conflict-monitor solution, if the relay solution is more likely to be
> misconfigured, then there's not necessarily a net win.

Sure.  But we have no numbers on either side.

> Cost is an object. If implementing a solid state controller is less
> expensive (on CapEx and OpEx basis) than a relay-based controller, then
> it might be possible to implement traffic signals at four previously
> uncontrolled intersections, instead of just three. That's a pretty big
> safety win.

See above about whether people trust green lights to be safe.

> And, yes, convenience is also an objective. Most people wouldn't want
> to live in a city where the throughput benefit of modern traffic
> signalling weren't available, even if they have to accept a very, very
> small increase in risk.

Assuming they knew they were accepting it.

But if it amounts to "Well, it's going to cost you more if we do it
[right]", well, look out for #OccupyMainStreet.

"We can fake it cause it's cheaper" is pretty close to a dead approach,
I suspect.

Cheers,
-- jra
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Brett Frankenberger
On Tue, Nov 22, 2011 at 11:16:54AM -0500, Jay Ashworth wrote:
> - Original Message -
> > From: "Owen DeLong" 
> 
> > As in all cases, additional flexibility results in additional
> > ability to make mistakes. Simple mechanical lockouts do not scale
> > to the modern world.  The benefits of these additional capabilities
> > far outweigh the perceived risks of programming errors.

Relay logic has the potential for programming (i.e. wiring) errors
also.

It's not fair to compare "conflict monitor" to "properly programmed
relay logic".  We either have to include the risk of programming
failures (which means "improper wiring" in the case of relay logic) in
both cases, or exclude programming failures in both cases.

> The perceived risk in this case is "multiple high-speed traffic fatalities".

Some of the benefits of the newer systems are safety related also.
 
> I believe we rank that pretty high; it's entirely possible that a traffic
> light controller is the most potentially dangerous artifact (in terms of 
> number of possible deaths) that the average citizen interacts with on a 
> daily basis.

Some other things to consider.

Relays are more likely to fail.  Yes, the relay architecture was
carefully designed such that most failures would not result in
conflicting greens, but that's not the only risk.  When the traffic
signal is failing, even if it's failing with dark or red in every
direction, the intersection becomes more dangerous.  Not as dangerous
as conflicting greens, but more dangerous than a properly operating
intersection.  If we can eliminate 1000 failures without conflicting
greens, at the cost of one failure with a conflicting green, it might
be a net win in terms of safety.

Modern intersections are often considerably more complicated than a two
phase "allow N/S, then allow E/W, then repeat" system.  Wiring relays
to completely avoid conflict in that case is very complex, and,
therefore, more error prone.  Even if a properly configured relay
solution is more reliable than a properly configured solid-state
conflict-monitor solution, if the relay solution is more likely to be
misconfigured, then there's not necessarily a net win.

Cost is an object.  If implementing a solid state controller is less
expensive (on CapEx and OpEx basis) than a relay-based controller, then
it might be possible to implement traffic signals at four previously
uncontrolled intersections, instead of just three.  That's a pretty big
safety win.

And, yes, convenience is also an objective.  Most people wouldn't want
to live in a city where the throughput benefit of modern traffic
signalling weren't available, even if they have to accept a very, very
small increase in risk.
  
 -- Brett



OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Jay Ashworth
- Original Message -
> From: "Owen DeLong" 

> As in all cases, additional flexibility results in additional ability
> to make mistakes. Simple mechanical lockouts do not scale to the
> modern world. The benefits of these additional capabilities far
> outweigh the perceived risks of programming errors.

The perceived risk in this case is "multiple high-speed traffic fatalities".

I believe we rank that pretty high; it's entirely possible that a traffic
light controller is the most potentially dangerous artifact (in terms of 
number of possible deaths) that the average citizen interacts with on a 
daily basis.
-- 
Jay R. Ashworth  Baylink   j...@baylink.com
Designer The Things I Think   RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA  http://photo.imageinc.us +1 727 647 1274