Post by b***@www.zefox.net
Post by Richard Kettlewell
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware. It's a matter of personal curiosity inspired by the
exploding pager incident lately in the news and a call for banning
certain software developers. An obvious question is whether use of
open-source software is a meaningful help.
In principle, yes.
* CVE-2024-3094 (the xz/SSH backdoor) could in principle have been
detected by source code review, although in fact the hint that led to
its discovery was runtime behavior. The open source model didn’t help
here; in fact it hurt: an under-resourced open source project became
an attack vector for a well-managed one.
* CVE-2015-7755 (a hardcoded administrative password in closed-source
router firmware) was ultimately identified by internal code
review[1]. In [2], there’s some discussion of how to find it by
analysing the binary. It’s hard to say whether it would have been
found earlier had the router firmware been open source; it surely
depends on the level of attention focused on the project.
* CVE-2015-7756 (a compromised RNG in the same firmware) was also
identified by internal code review[1]. I think there’s some chance
that this would have been fixed earlier in an open source project,
since the use of Dual EC DRBG would have been considered a red flag by
anyone who had been paying attention since 2006[3]. Whether the broken
X9.31 PRNG
would have been detected or replaced at the same time is hard to say
though.
* If the alleged IPSec backdoor[4] ever made it into the OpenBSD tree
then it was never found. That would be an open source failure, but the
code was reviewed pretty thoroughly; it’s at least as likely, probably
more so, that it never existed or never got integrated into OpenBSD.
[1] https://supportportal.juniper.net/s/article/2015-12-Out-of-Cycle-Security-Bulletin-ScreenOS-Multiple-Security-issues-with-ScreenOS-CVE-2015-7755-CVE-2015-7756?language=en_US
[2] https://www.rapid7.com/blog/post/2015/12/20/cve-2015-7755-juniper-screenos-authentication-backdoor/
[3] http://rump2007.cr.yp.to/15-shumow.pdf
[4] https://en.wikipedia.org/wiki/OpenBSD#Alleged_backdoor
Post by b***@www.zefox.net
Fuzzing seems an obvious choice, but slow.
Fuzzing can find certain classes of defects but isn’t likely to
detect a cryptographically protected backdoor.
* Fuzzing would not have found CVE-2024-3094. There is no way a fuzzer
could have synthesized a valid Ed448 signature without the attacker’s
private key.
* It’s at least conceivable that CVE-2015-7755 might have been found by
an adaptive fuzzer of some kind.
* Fuzzing would not have found CVE-2015-7756. Superficially everything
worked as it should; but VPN traffic could be decrypted by someone
with the attacker’s private key (‘e’ in the language of [5]).
[5] https://eprint.iacr.org/2016/376.pdf
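To make the point above concrete, here is a toy sketch (in Python) of why blind fuzzing is a poor fit for hardcoded credentials of the CVE-2015-7755 kind. The `check_login` function and the "s3cret" string are hypothetical stand-ins, not the real ScreenOS code.

```python
import random
import string

# Hypothetical target with a hidden hardcoded credential,
# standing in for a backdoor of the CVE-2015-7755 variety.
def check_login(password: str) -> bool:
    return password == "s3cret"  # the hidden backdoor credential

def random_fuzz(target, trials: int = 50_000, max_len: int = 8):
    # Blind random fuzzing: generate random printable inputs and
    # report any the target accepts. For a 6-character secret the
    # search space is roughly 100**6, so success is vanishingly rare.
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        length = rng.randint(1, max_len)
        candidate = "".join(rng.choices(string.printable, k=length))
        if target(candidate):
            return candidate
    return None

print(random_fuzz(check_login))
```

An adaptive or coverage-guided fuzzer that can observe the comparison (for instance via instrumentation) fares much better, which is why it is at least conceivable such a tool could have found CVE-2015-7755.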
Post by b***@www.zefox.net
It's understood that a deterministic solution is impossible, but it
would be interesting to know what approaches are practical and how
effective they are.
I think the most popular approach is object code analysis, probably
built on a lot of experience of typical backdoor strategies. Search
terms would be ‘reversing’ and ‘decompiling’.
Hmm, that's a bit surprising. So it's easier to find suspicious code
by examining the compiled object files, rather than the source?
If I'm following correctly, the only practical assurance of security
in a hardware+software situation is to home-source both. Nothing bought
can be trusted?
If true, the proposition has substantial consequences.
Thanks for writing!
It's worse than that even. See
Reflections on Trusting Trust
https://dl.acm.org/doi/pdf/10.1145/358198.358210
by Ken Thompson, the co-creator of Unix along with DMR.
It describes adding some code to the C compiler that will insert into
login.c a magic password for root power, and into a re-compile of the
compiler, the code needed to propagate itself and that login.c backdoor.
At that point, the source code can be put back to not have either bit of
code in it, but the compiler will continue to insert it into future
versions of itself, preserving the login backdoor. (at least until the
compiler or login.c source code changes to such an extent that it can't
recognize where to insert the extra code)
Since you need a compiler binary to get started building/rebuilding a
toolchain, even if the source code is ALL clean, you can't trust the end
result.
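The mechanism can be sketched in a few lines of Python. This is a toy, not Thompson's actual construction: "compiling" here is just text copying, and the trojaned compiler simply pattern-matches on what it is asked to compile (real versions use a quine-like trick to carry their own text through self-compilation).

```python
# Toy sketch of the Trusting Trust attack: the trojaned "compiler"
# recognizes two special inputs and modifies its output for each.

BACKDOOR = '    if password == "magic": return True  # inserted backdoor\n'

def trojaned_compile(source: str) -> str:
    # Case 1: compiling the login program -> silently insert the
    # magic password, even though the source never contained it.
    if "def check_password" in source:
        return source.replace(
            "    return password == stored\n",
            BACKDOOR + "    return password == stored\n",
        )
    # Case 2: compiling the compiler itself -> re-insert this trojan,
    # so a clean compiler source still produces a dirty compiler.
    if "def trojaned_compile" in source and "BACKDOOR" not in source:
        return source + "# trojan re-inserted during self-compilation\n"
    return source

clean_login = (
    "def check_password(password, stored):\n"
    "    return password == stored\n"
)

compiled_login = trojaned_compile(clean_login)
print("magic" in clean_login)      # False: the source is clean
print("magic" in compiled_login)   # True: the binary is not
```

Once case 2 has fired, the backdoor lives only in the compiler binary, which is exactly why auditing the source code alone cannot catch it.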
That was published in 1983, and hints that some of the idea may have
existed even earlier. These days such a thing could be smarter and survive
longer.
If you had two different compilers of independent origin, and full source
for both, and some way of comparing that the resulting binaries were, not
identical bit-for-bit, but functionally identical, then you might be ok.
And of course you can spot non-obfuscated strings in a binary, but if they
are obfuscated cleverly, it will take a lot of reverse engineering to spot
them.
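The non-obfuscated case is what the Unix strings(1) tool handles; a minimal Python version is a few lines, and it also shows the limitation: anything XORed or computed at runtime never appears as a printable run.

```python
import re

# Minimal strings(1)-style scan: pull runs of printable ASCII of at
# least min_len bytes out of a binary blob.
def extract_strings(blob: bytes, min_len: int = 4):
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII runs
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

# Fabricated example blob: "ELF" is too short to match, the embedded
# credential and "junk" are long enough.
blob = b"\x00\x01\x7fELF\x02secret-password\x00ab\x01junk"
print(extract_strings(blob))  # ['secret-password', 'junk']
```

A single XOR of the secret before embedding it would defeat this scan completely, which is where the real reverse-engineering effort starts.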
Decompiling, good luck with that. Disassembling, yes; but decompiling would
depend on knowing the exact idioms of the exact version of the compiler and
relevant options (optimization, etc) used. Not a bat's chance in hell of
getting that to work consistently.
Beat the crap out of it in an isolated, throwaway VM, not to mention
running valgrind, purify, and anything else like that you can? Still no
guarantee you get it all. For all I know, proving something complex enough
is secure might be like a general solution to the halting
problem...impossible.
If you do everything yourself on an isolated network, and don't import
anyone else's code (open source or not), then maybe. But there's always the
insider threat...
As long as tinfoil hats are not involved (but Faraday cages around your
computer room might make sense), there's no such thing as too much
paranoia. :-)
That said, a computer buried in concrete so it can't run is secure. But
it's not useful. You have to decide what level of risk reduction is worth
the eventually increasing cost for diminishing returns.