Re: [9fans] Fwd: Call for Papers: LASER 2012—Learning from Authoritative Security Experiment Results

2012-01-11 Thread tlaronde
On Tue, Jan 10, 2012 at 10:19:36PM -0800, ron minnich wrote:
 This is kind of a fun one: stuff that DID NOT work. I like the basic idea

I generally learn more from what I do wrong than from what I do right---
sometimes because when it works, it is not absolutely for the reasons
I had explicitly in view... so the lesson is less than zero.

And there is the classical joke about the experiment on a flea:

Researcher tells the flea: jump!---and the flea jumps.
He cuts one leg. Jump!---and the flea, with more difficulty, jumps.
He cuts another leg. Jump!---after some time and great efforts, it
jumps.
He cuts one more. Jump!---and the flea doesn't jump.

Scientific conclusion: when one cuts off a flea's legs, it becomes deaf.
-- 
Thierry Laronde tlaronde +AT+ polynum +dot+ com
  http://www.kergis.com/
Key fingerprint = 0FF7 E906 FBAF FE95 FD89  250D 52B1 AE95 6006 F40C



Re: [9fans] Fwd: Call for Papers: LASER 2012—Learning from Authoritative Security Experiment Results

2012-01-11 Thread Wes Kussmaul
On Tue, 2012-01-10 at 22:19 -0800, ron minnich wrote:
 This is kind of a fun one: stuff that DID NOT work. I like the basic
 idea ... 

  “failures” may actually provide clues to even more significant
 results than the original experimenter had intended. The research is
 useful, even though the results are unexpected.

When Mario Salvadori gave his grandmother a copy of his new book, _Why
Buildings Stand Up_, she thanked him and said, "But I'd much rather know
why buildings fall down."

That remark was the catalyst for his next book, _Why Buildings Fall
Down_.




[9fans] Fwd: Call for Papers: LASER 2012—Learning from Authoritative Security Experiment Results

2012-01-10 Thread ron minnich
This is kind of a fun one: stuff that DID NOT work. I like the basic idea
...

ron

-- Forwarded message --
From: Edward Talbot edward.tal...@gmail.com
Date: Tue, Jan 10, 2012 at 1:48 PM
Subject: Call for Papers: LASER 2012—Learning from Authoritative Security
Experiment Results
To: Ronald Minnich rminn...@gmail.com


Ron -

I hope all is well with you!

I'm on the Organizing Committee for the subject workshop.  The Call For
Papers for the workshop has been released and is attached and copied below.

Your efforts to ensure that this CFP is widely distributed are appreciated.


The workshop website is http://www.cert.org/laser-workshop/

Thanks for your time and consideration.

Ed
--
Edward B. Talbot
Cell: (925) 667-5994
Google Voice: (925) 452-7827

*LASER 2012—Learning from Authoritative Security Experiment Results*



The goal of this workshop is to provide an outlet for publication of
unexpected research results in security—to encourage people to share not
only what works, but also what doesn’t.  This doesn’t mean bad research—it
means research that had a valid hypothesis and methods, but the result was
negative. Given the increased importance of computer security, the security
community needs to quickly identify and learn from both success and
failure.

Journals and conferences typically contain papers that report
successful experiments that extend our knowledge of the science of
security, or assess whether an engineering project has performed as
anticipated. Some of these results have high impact; others do not.
Unfortunately, papers reporting on experiments with unanticipated results
that the experimenters cannot explain, or experiments that are not
statistically significant, or engineering efforts that fail to produce the
expected results, are frequently not considered publishable, because they
do not appear to extend our knowledge.  Yet, some of these “failures” may
actually provide clues to even more significant results than the original
experimenter had intended. The research is useful, even though the results
are unexpected.

Useful research includes a well-reasoned hypothesis, a well-defined method
for testing that hypothesis, and results that either disprove or fail to
prove the hypothesis.  It also includes a methodology documented
sufficiently so that others can follow the same path. When framed in this
way, “unsuccessful” research furthers our knowledge of a hypothesis and
testing method. Others can reproduce the experiment itself, vary the
methods, and change the hypothesis; the original result provides a place to
begin.

As an example, consider an experiment assessing a protocol utilizing
biometric authentication as part of the process to provide access to a
computer system. The null hypothesis might be that the biometric technology
does not distinguish between two different people; in other words, that the
biometric element of the protocol makes the approach vulnerable to a
masquerade attack. Suppose the null hypothesis is not rejected. It would
still be worth publishing this result. First, it might prevent others from
trying the same biometric method. Second, it might lead them to further
develop the technology—to determine whether a different style of biometrics
would improve matters, or if the environment in which authentication is
being attempted makes a difference.  For example, a retinal scan may be a
failure in recognizing people in a crowd, but successful where the users
present themselves one at a time to an admission device with controlled
lighting, or when multiple “tries” are included. Third, it might lead to
modifying the encompassing protocol so as to make masquerading more
difficult for some other reason.
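
As a minimal sketch of how the null hypothesis in the example above
might be tested (not part of the CFP): suppose the biometric matcher
emits similarity scores for genuine (same-person) and impostor
(different-person) comparisons. The score distributions and the choice
of a Mann-Whitney U test below are illustrative assumptions only.

# Sketch: does the biometric matcher distinguish genuine users from
# impostors?  Null hypothesis: genuine-pair scores are not higher than
# impostor-pair scores, i.e. the matcher does not distinguish people.
# The simulated scores below are assumptions, not real data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
genuine = rng.normal(0.70, 0.10, 200)   # same-person comparison scores
impostor = rng.normal(0.55, 0.10, 200)  # different-person comparison scores

# One-sided test: reject the null only if genuine scores are
# stochastically greater than impostor scores.
stat, p = mannwhitneyu(genuine, impostor, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.3g}")
if p >= 0.05:
    # Failing to reject the null is itself a reportable result: it
    # documents a method and data on which this matcher does not work.
    print("Null not rejected: matcher does not distinguish the groups.")
else:
    print("Null rejected: matcher separates genuine from impostor pairs.")

Either outcome is publishable in the sense the CFP describes: the
hypothesis, the test, and the data collection method can be reused or
varied by others, whatever the result.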

Equally important is research designed to reproduce the results of earlier
work. Reproducibility is key to science, to validate or uncover errors or
problems in earlier work. Failure to reproduce the results leads to a
deeper understanding of the phenomena that the earlier work uncovers.

The workshop focuses on research that has a valid hypothesis and
reproducible experimental methodology, but where the results were
unexpected or did not validate the hypotheses, where the methodology
addressed difficult and/or unexpected issues, or that identified previously
unsuspected confounding issues.

We solicit research and position papers addressing these issues, especially
(but not exclusively) on the following topics:

· Unsuccessful research in experimental security

· Methods, statistical analyses, and designs for security experiments

· Experimental confounds, mistakes, mitigations

· Successes and failures in reproducing the experimental techniques
and/or results of earlier work

Extended abstracts, full position papers, and research submissions should
be 6–10 pages long including tables, figures, and references. Please use
the ACM Proceedings Format at
http://www.acm.org/sigs/publications/proceedings-templates (Option 1,