At 08:09 PM 3/10/00 -0600, John Kelsey wrote:

>But there's a related engineering question:  Does
>it make sense to build large systems in which there's no way
>for humans to overrule the actions of programs once they're
>set in motion?
...
>To use a more common example, I believe there were some cars
>(maybe experimental, I don't know) which would simply refuse
>to start the ignition until all passengers had their
>seatbelts on.  There's no doubt that it's possible to design
>such a car.  But you couldn't sell them without making it
>illegal to buy any other car, and users would flock to
>mechanics to have the feature removed in droves, regardless
>of the law.

It seems unnecessarily extreme to focus on (a) large systems and (b) 
draconian enforcement of rules with obvious harmful side effects.

Point (a) is that small-scale irrevocable decisions abound, and we've 
learned to live with them.  On my computer, I issue irrevocable commands 
hundreds of times per day.  A good example is sending email, even though it 
might have been possible to design the system to permit 
revocation.  Encrypting a file with a public key whose private key I 
don't hold (and expunging the original) is another irrevocable act that 
I commonly perform.  Submitting a document to a cryptographic time-lock 
system is no worse than this.
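
For concreteness, here is a minimal sketch of that encrypt-and-expunge 
act in Python.  The choice of PyNaCl's SealedBox is mine, for 
illustration only, and the immediately discarded keypair stands in for 
"a public key whose private key I don't hold":

from nacl.public import PrivateKey, SealedBox

# Simulate a recipient whose private key we never hold: generate a
# keypair and keep only the public half.
recipient_public = PrivateKey.generate().public_key

plaintext = b"contents of the original file"
ciphertext = SealedBox(recipient_public).encrypt(plaintext)
del plaintext   # "expunge the original"

# Without the corresponding private key, nothing on this machine can
# recover the plaintext; the encryption was an irrevocable act.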

Point (b) is that if a 100% effective time-lock system were devised, I 
would not expect people to be required to use it (in the sense 
suggested by the seatbelt-interlock scenario above).  Anybody who 
wanted to defeat such a draconian requirement could simply keep a copy 
of the document somewhere else.
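
To make "time-lock system" concrete, here is a toy sketch of the 
Rivest-Shamir-Wagner time-lock puzzle; the thread doesn't name a 
particular construction, so this choice is my assumption.  The puzzle's 
creator, knowing phi(n), computes the unlock value cheaply, while 
everyone else must grind through t sequential squarings, which is what 
enforces the delay:

# Toy parameters for illustration; real use needs large random primes
# and a t measured in billions of squarings.
p, q = 2**127 - 1, 2**89 - 1          # small Mersenne primes
n, phi = p * q, (p - 1) * (q - 1)
t = 10_000                            # sequential squarings = the delay

a = 2
key_fast = pow(a, pow(2, t, phi), n)  # creator's shortcut via phi(n)

x = a
for _ in range(t):                    # the slow path, inherently serial
    x = x * x % n
assert x == key_fast                  # same unlock value either way

In the full construction the unlock value keys a symmetric cipher over 
the submitted document, so "opening early" means finding a shortcut 
around the sequential squaring, which is believed to be infeasible 
without factoring n.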
