Hi,

In short, I don't know what Russell covered, which links he pointed to,
or how good his coverage was in the talk, but the linked article below
is pretty poor.

On 6/05/2014 11:16 PM, Tim Connors wrote:
> Following on from Russell's talk, to further enlighten you, here is a
> useful description of the hearts bleeding:
> 
> http://abclocal.go.com/wls/story?section=news%2Fiteam&id=9526738
> 
> 'Heartbleed. The term comes from the communication between two so-called
> "hearts" on a server which verify your security as you shop, check e-mails
> and bank statements. There is now a backdoor break-in between those
> hearts, and it's bleeding.'

This is about the TLS "heartbeat" extension vulnerability; "hearts" is
a funny way of putting it here.

Of the roughly 66% of the Internet that /may/ have been vulnerable,
slightly less than 20% was actually running the bad code, and that was
at the worst point -- over those two years the number of affected
websites would have been scaling up, but never beyond 20%.  Sure, that
is still a lot of websites.

The most critical sites, as in the massively used ones like Google and
Facebook, worked on fixes behind the scenes well before most of the
world found out about the issue.

This is a huge issue; some have likened it to Y2K ... the major
difference is that Y2K was known about and properly handled in advance
for almost all critical systems -- that was a very significant amount
of work.  It's actually not close to the same issue; Y2K was completely
different.

The heartbleed issue meant that servers AND clients could have been
making malicious heartbeat requests continually, getting up to 64KB at
a time of process memory from either the client or the server --
remember, BOTH sides were vulnerable if they used the affected versions
of OpenSSL.
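The bug itself is easy to sketch: a heartbeat request carries a payload
plus a claimed payload length, and the broken code echoed back the
claimed length without checking it against what actually arrived.  Here
is a toy Python illustration of that missing bounds check -- this is
NOT OpenSSL's real code, just the shape of the flaw:

```python
# Toy illustration of the heartbleed over-read -- NOT OpenSSL's actual
# code.  A heartbeat request says "echo back N bytes of payload"; the
# buggy handler trusts N without checking the real payload size.

# Stand-in for whatever happens to sit next to the payload on the heap.
PROCESS_MEMORY = bytearray(b"payload.SECRET-KEY-MATERIAL.passwords.etc")

def buggy_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable: copies claimed_len bytes starting at the payload,
    # running past its end into adjacent "memory".
    buf = bytes(payload) + bytes(PROCESS_MEMORY)
    return buf[:claimed_len]

def fixed_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Fixed: silently discard requests whose claimed length exceeds
    # the payload actually received.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

leak = buggy_heartbeat(b"hello", 40)  # sent 5 bytes, asked for 40
print(leak)                           # "hello" plus 35 leaked bytes
print(fixed_heartbeat(b"hello", 40))  # b"" -- request dropped
```

Repeat that request over and over with a fresh 64KB window each time
and you get the continual memory trawling described above.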

Losing the certificates themselves is not so much a problem; it is
losing the private keys to those certs that hurts -- the certs alone
are provided with every SSL [or more correctly TLS] connection setup.
And of course the raw memory dumps could easily have contained
usernames and passwords, code, or all sorts of other things that could
have been quite interesting.
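To see that the cert really is public, Python's standard library will
happily fetch any server's certificate for you (the hostname below is
just an example, and this obviously needs network access):

```python
# The certificate is public -- every TLS client receives a copy during
# the handshake.  This stdlib helper fetches a server's cert as PEM
# text; only the private key stays secret on the server.
import ssl

def fetch_cert(host: str, port: int = 443) -> str:
    return ssl.get_server_certificate((host, port))

if __name__ == "__main__":
    try:
        pem = fetch_cert("example.com")
        print(pem.splitlines()[0])   # -----BEGIN CERTIFICATE-----
    except OSError as exc:
        print("no network:", exc)
```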

If you ran a server (or client) with the affected versions of OpenSSL,
then there is no way to tell whether you have or have not been
exploited.  It's time to change the keys, get new certs, change
passwords, etc.... oh, and the article was right about not changing
passwords too early -- you need to know the site has been fixed first.
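For your own machines, the affected releases were OpenSSL 1.0.1 through
1.0.1f (1.0.1g carries the fix; the 1.0.0 and 0.9.8 branches never had
the heartbeat code).  A rough version-string check, with the big caveat
that distributions backported the fix without bumping the letter, so a
"vulnerable" answer here is only a starting point:

```python
# Rough check of whether an OpenSSL version string falls in the
# affected range: 1.0.1 through 1.0.1f inclusive.  Distro packages
# often backported the fix without changing the version, so treat a
# True here as "go and check your vendor's advisories".
import re
import ssl

def heartbleed_vulnerable(version: str) -> bool:
    m = re.search(r"\b1\.0\.1([a-z]?)\b", version)
    if not m:
        return False          # not a 1.0.1 release at all
    return m.group(1) <= "f"  # 1.0.1 .. 1.0.1f inclusive

# What this Python itself was linked against:
print(ssl.OPENSSL_VERSION, "->", heartbleed_vulnerable(ssl.OPENSSL_VERSION))
```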

Lots of certs were re-generated quickly after public disclosure, but
with old dates, so it is hard to tell if a site is fixed -- you
certainly can't tell by the cert date(s).  Apparently the Commonwealth
Bank was affected, but they claim that only the main website was
vulnerable, not Netbank -- can you trust them?  I think NOT!  Banks do
NOT care about security as much as they need to; why do you think
tap-and-pay systems are so good for them ... it's because the RETAILER
takes ALL the risk whilst the bank takes NO risk at all.
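If you do want to inspect cert dates yourself, the notBefore field a
server hands you (via getpeercert() in Python, for instance) can be
compared against the public disclosure date of 7 April 2014 -- keeping
in mind, as above, that re-issued certs with old dates make this only a
hint, never proof:

```python
# Compare a certificate's notBefore date against heartbleed's public
# disclosure (7 April 2014).  A later date suggests a re-issued cert;
# an earlier date proves nothing, since some certs were re-generated
# with old dates.  The date strings use the format getpeercert()
# returns, e.g. "May  9 00:00:00 2014 GMT".
import ssl

DISCLOSURE = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def reissued_after_disclosure(not_before: str) -> bool:
    return ssl.cert_time_to_seconds(not_before) > DISCLOSURE

print(reissued_after_disclosure("May  9 00:00:00 2014 GMT"))   # True
print(reissued_after_disclosure("Jan 15 00:00:00 2014 GMT"))   # False
```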

A much bigger problem related to the certs is how browsers handle
certificate revocation -- or rather, handle it very poorly.  The
built-in revocation lists in a browser only cover roughly the /normal/
number of revocations in a single day.  Browsers can use OCSP, but
OCSP is commonly set to soft fail, much like SPF allowing soft fail:
if the OCSP query fails to get a result (when testing for cert
revocation), the browser may just go ahead and act as if it got a
confirmation that the cert was okay.
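The soft-fail logic boils down to something like this -- a sketch of
the decision, not any browser's actual code:

```python
# Sketch of the soft-fail vs hard-fail decision -- not any browser's
# real code.  ocsp_status is "good", "revoked", or None for a query
# that never got an answer (blocked, timed out, responder down).

def accept_cert(ocsp_status, hard_fail: bool) -> bool:
    if ocsp_status == "good":
        return True
    if ocsp_status == "revoked":
        return False
    # No usable answer: soft-fail waves the cert through, which is
    # exactly what an attacker who can block OCSP traffic wants.
    return not hard_fail

print(accept_cert(None, hard_fail=False))  # True  -- soft fail accepts
print(accept_cert(None, hard_fail=True))   # False -- hard fail rejects
```

An attacker who can feed you a revoked cert can usually also block your
OCSP traffic, which is why soft fail is close to worthless.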

If you use Firefox, set the following to true (about:config entry)
   security.OCSP.require
   [the default is false]


This reference is more useful in relation to heartbleed itself and helps
people work out which services to reset passwords:

http://mashable.com/2014/04/09/heartbleed-bug-websites-affected/

Cheers
A.

_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
