Post by Jason Coombs
What 'good purposes' did you have in mind?
What higher purpose is there above full disclosure with a proof of
concept? Disclosure spreads awareness, and awareness allows defense.
Disclosure spreads awareness; that much is certainly indisputable.
However, in addition to allowing defense, awareness also reduces the
difficulty of exploitation. This is the only problem with *true* full
disclosure -- that of balancing the user's right to defend him or
herself against the implicit threat that disclosure poses to those who
aren't defended.
Security as a practice lacks two things today that are necessary for
Full Disclosure to be viable: protection (or defense, as you put it)
needs to be *equal* and *immediate* (or as close to it as possible).
In an idealistic world, full disclosure is a great concept. It's a lot
like socialism: it's held back by the fact that the world we live in,
and the people we share it with, are not ideal.
For full disclosure to be effective, disclosure should result in the almost
immediate remediation of the vulnerability with virtually zero visible
side-effects. If we had a seamless system of disclosure and remediation,
software updates would be unnecessary. Software could simply receive
updated threat data as it was made available, and block threats as they
were discovered, using *existing* code (much the same way an IDS uses
signatures, but on an application level, blocking behavior rather than
filtering raw data). We've made huge progress in this area, with products
like SecureIIS, URLScan, etc., being able to block some attacks
heuristically -- even *before* they're known. But, the fact is, most
applications are still in the dark. No such technology exists for clients,
because the data they process is much more varied.
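To make the idea concrete, here's a minimal sketch of that
signature-driven, application-level blocking, in Python. The signature
list and function name are hypothetical illustrations, not any real
product's API; the point is that the defense updates as *data*, not as
code:

    import re

    # Hypothetical threat signatures. In a real deployment these would
    # be fetched from a vendor feed as new vulnerabilities are
    # disclosed -- no change to the application's own code required.
    THREAT_SIGNATURES = [
        re.compile(r"\.\./"),          # directory traversal
        re.compile(r"%00"),            # null-byte injection
        re.compile(r"<script", re.I),  # naive script injection
    ]

    def is_blocked(request_path):
        """Reject a request if it matches any known threat signature."""
        return any(sig.search(request_path) for sig in THREAT_SIGNATURES)

    print(is_blocked("/cgi-bin/../../etc/passwd"))  # True: traversal
    print(is_blocked("/index.html"))                # False: clean request

This is the IDS signature model moved up to the application layer: the
blocking logic stays fixed while the threat data evolves with each new
disclosure.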
Right now, most users are sitting ducks if I announce a vulnerability
before the vendor has patched it (particularly in client software like IE
or Firefox). For full disclosure to be effective at protecting users, this
has to change. The technology to protect them must be created, users must
be informed, and it must be easily accessible. Preferably, the security
technology should be embedded into the product itself.
Idealism is fine when you're setting goals. However, it's not acceptable
for everyday policy decisions. I don't practice full disclosure. I *DO*
allow vendors time to patch. I *DO* believe that critical infrastructure
should be protected before vulnerability information is unleashed to the
script kiddies. However, I do this only by necessity.
Ideally, security will someday cease to be a business -- because every user
will be sufficiently empowered to secure his or her own systems. Right
now, though, no such world exists. My disclosure policies are tempered by
the fact that without vendor response (and sometimes, even with it), the
largest portion of the affected user base will remain unprotected. If we
can make the link tighter between disclosure and solution, then the
decision for the ideal world (that of informing the user) will also be the
sound decision for the real world.
Post by Jason Coombs
The secret is no longer a secret, and it didn't remain one as long as
you had hoped it would. This reduces the chances that the secret will be
exploited against people who aren't aware that there is a secret.
The fact that the secret is now public reduces the chances that it will be
exploited against people who now know there *was* a secret. However, it
simultaneously increases the risk of exploitation against those who aren't
aware that such a "secret" existed. Unfortunately, that means a few
users are better protected while a greater number are at greater risk.
Post by Jason Coombs
Nothing at all would have been gained by delaying disclosure, other than
to give attackers a bigger window of opportunity to mount successful
attacks and design new exploits that will launch successfully against a
completely unprepared computing public.
...and to protect those users who have no usable or identifiable avenue of
protection outside of vendor-supplied software updates.
Post by Jason Coombs
Your belief that you could keep a secret, or that you have any right to
keep such a secret even if you could, is moronic and it's wrong-headed.
No offense intended, but if anything here is moronic, it is the blind
adherence to idealism expressed by this statement. While I agree that
keeping a secret from one user to protect another is a difficult
decision to make, it is a decision that must be made, given how threats
are (not) handled today.
By announcing every vulnerability as soon as it is discovered, we end
up with a user base at generally increased risk. Yes, some users are
protected, but the overwhelming majority are more vulnerable. There
comes a point when keeping a secret is no longer justified by the
desire to protect users (such as when it is apparent that the vendor
will not be releasing an update in the near future -- a timeline open
to interpretation). Until then, however, granting a vendor some time
after the discovery of a vulnerability to identify and solve it is
widely understood to increase the security of the user base as a whole
more than immediate disclosure (which may actually decrease it).
We're not talking about keeping such a secret for a lifetime, only a
matter of weeks at most. It was certainly feasible, and the damage done
by failing to keep that secret is no more (and is probably less) than
the damage that would have been done by not attempting to keep it at
all and releasing the information immediately.
Awareness and threat are most definitely *NOT* zero-sum in today's world.
As for the right of the discoverer to keep a secret... we wouldn't be
having this discussion if a secret had been kept. As for the policy of
notifying vendors, the courts of law and public opinion have ruled against
you. Researchers have every right and (in today's circumstances) an
ethical obligation to do so. In a scenario such as this, where users'
only recourse was to "protect" themselves by crippling their browser,
the decision was certainly logical, and most certainly within the
researcher's rights.
There's something else at work here: innovation. Receiving credit for,
and rights to, one's own discoveries inspires one to keep producing
them. Without that basic notion of intellectual property,
technological innovations that have made many people's lives better (and
made this debate possible) would not be here. When you look at it from the
angle of protecting yourself by getting consistently better information,
it's not as appealing to willingly trample the intellectual property rights
of others.
But I shouldn't be surprised by that; after all, you are the one who
remarked "so sue me" in response to substantiated allegations of piracy
against you:
http://lists.grok.org.uk/pipermail/full-disclosure/2005-April/033111.html
Before we get into the rhetoric, let's put idealism aside and have a little
balance.
Regards,
Matthew Murphy