Discussion:
Finding backdoors
b***@www.zefox.net
2024-09-26 15:26:27 UTC
I'm looking for links to techniques for finding backdoors in software
and hardware. It's a matter of personal curiosity inspired by the
exploding pager incident lately in the news and a call for banning
certain software developers. An obvious question is whether use of
open-source software is a meaningful help. Fuzzing seems an obvious
choice, but slow.

It's understood that a deterministic solution is impossible, but it
would be interesting to know what approaches are practical and how
effective they are.

Thanks for reading, and any guidance.

bob prohaska
Marco Moock
2024-09-26 18:15:03 UTC
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware.
The only way is to learn the programming language, then check the code
and compile it yourself. Of course, the other stuff on your machine
(compiler, linker, etc.) needs to be reviewed too, so it is a huge task
that no single person can do.
Post by b***@www.zefox.net
It's a matter of personal curiosity inspired by the exploding pager
incident lately in the news and a call for banning certain software
developers. An obvious question is whether use of open-source
software is a meaningful help. Fuzzing seems an obvious choice, but
slow.
OSS has the benefit that the code is public and many people can look
inside. That doesn't mean that anybody actually will have a look,
though. The sshd/liblzma backdoor proved that again.

The more people look at it, the better it is, but this is not always
enough.
--
kind regards
Marco

Send spam to ***@cartoonies.org
William Unruh
2024-09-26 19:57:39 UTC
Post by Marco Moock
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware.
The only way is to learn the programming language, then check the code
and compile it yourself. Of course, the other stuff on your machine
(compiler, linker, etc.) needs to be reviewed too, so it is a huge task
that no single person can do.
Post by b***@www.zefox.net
It's a matter of personal curiosity inspired by the exploding pager
incident lately in the news and a call for banning certain software
developers. An obvious question is whether use of open-source
software is a meaningful help. Fuzzing seems an obvious choice, but
slow.
OSS has the benefit that the code is public and many people can look
inside. That doesn't mean that anybody actually will have a look,
though. The sshd/liblzma backdoor proved that again.
No, the open source code is public. That does not mean that the code on the
device is the true compilation of the source code, nor does it mean that
you can actually read the machine code to see if it is a true
compilation of the source code. Most commercial hardware has its machine
code hidden so that competitors cannot steal it to make knock-offs of
their hardware. As far as I know, we have no real idea of how the pagers
etc. were made to explode. Battery shorts and implanted explosives are the
obvious first guesses, but figuring out exactly how the machines exploded
is going to be hard, since the victims are not very anxious to identify
themselves or hand over their fragmented phones.

And they probably used pagers etc. from that source before and they were
fine. You cannot take apart every one of the thousands of phones you buy
to see if they are hacked.
Post by Marco Moock
The more people look at it, the better it is, but this is not always
enough.
Marco Moock
2024-09-27 15:31:51 UTC
Post by William Unruh
Post by Marco Moock
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in
software and hardware.
The only way is to learn the programming language, then check the
code and compile it yourself. Of course, the other stuff on your
machine (compiler, linker, etc.) needs to be reviewed too, so it is a
huge task that no single person can do.
Post by b***@www.zefox.net
It's a matter of personal curiosity inspired by the exploding pager
incident lately in the news and a call for banning certain software
developers. An obvious question is whether use of open-source
software is a meaningful help. Fuzzing seems an obvious choice, but
slow.
OSS has the benefit that the code is public and many people can look
inside. That doesn't mean that anybody actually will have a look,
though. The sshd/liblzma backdoor proved that again.
No, the open source code is public. That does not mean that the code
on the device is the true compilation of the source code, nor does it
mean that you can actually read the machine code to see if it is a
true compilation of the source code.
This would be possible with deterministic (reproducible) build
processes. You can then compare the machine code.
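For illustration, a minimal Python sketch of the comparison step,
assuming you already have the vendor's binary and your own rebuild from
the published source; the file names are invented, and this only works
if the build is bit-for-bit reproducible (same compiler version and
flags, no embedded timestamps or paths):

    import hashlib
    import sys

    def sha256_of(path, bufsize=1 << 20):
        # Hash the file in chunks so large firmware images fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(bufsize)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: compare_builds.py vendor_firmware.bin local_rebuild.bin
        a, b = sys.argv[1], sys.argv[2]
        da, db = sha256_of(a), sha256_of(b)
        print(a, da)
        print(b, db)
        print("MATCH" if da == db else "MISMATCH: builds differ, investigate")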
Post by William Unruh
Most commercial hardware has its machine code hidden so that
competitors cannot steal it to make knock-offs of their hardware.
This is different from the compiled machine code and is another way to
inject malicious stuff. If you want to be sure about every part, you
would need access to the code of the UEFI/BIOS, the firmware of the
devices, etc.
Post by William Unruh
As far as I know we have no real idea of how the pagers etc were made
to explode.
Implanted explosives.
Post by William Unruh
Battery shorts
Very unlikely.
Post by William Unruh
And they probably used pagers etc. from that source before and they
were fine. You cannot take apart every one of the thousands of phones
you buy to see if they are hacked.
For security reasons, they should do that.
--
kind regards
Marco

Send spam to ***@cartoonies.org
William Unruh
2024-09-27 16:26:59 UTC
...
Post by Marco Moock
Post by William Unruh
As far as I know we have no real idea of how the pagers etc were made
to explode.
Implanted explosives.
You know this? (How?)
Post by Marco Moock
Post by William Unruh
Battery shorts
Very unlikely.
Post by William Unruh
And they probably used pagers etc. from that source before and they
were fine. You cannot take apart every one of the thousands of
phones you buy to see if they are hacked.
For security reasons, they should do that.
For security reasons airports should sweep the runways after every
plane has taken off or landed (see what happened to the Concorde when
they did not do that). Of course that would mean flying would
become impossible due to the very small number of takeoffs and landings
possible, making our society much less secure, not more.
Marco Moock
2024-09-27 18:11:00 UTC
Post by William Unruh
...
Post by Marco Moock
Post by William Unruh
As far as I know we have no real idea of how the pagers etc were
made to explode.
Implanted explosives.
You know this? (How)
Post by Marco Moock
Post by William Unruh
Battery shorts
Very unlikely.
Post by William Unruh
And they probably used pagers etc. from that source before and they
were fine. You cannot take apart every one of the thousands of
phones you buy to see if they are hacked.
For security reasons, they should do that.
For security reasons airports should sweep the runways after every
plane has taken off or landed (see what happened to the Concorde when
they did not do that). Of course that would mean flying would
become impossible due to the very small number of takeoffs and
landings possible, making our society much less secure, not more.
You have to decide. Security always requires effort, which costs
time and often money as well.
--
kind regards
Marco

Send spam to ***@cartoonies.org
Richard Kettlewell
2024-09-28 09:09:09 UTC
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware. It's a matter of personal curiosity inspired by the
exploding pager incident lately in the news and a call for banning
certain software developers. An obvious question is whether use of
open-source software is a meaningful help.
In principle, yes.

* CVE-2024-3094 (the xz/SSH backdoor) could in principle have been
  detected by source code review, although in fact the hint that led to
  its discovery was runtime behavior. The open source model didn’t help
  here; in fact it hurt: an under-resourced open source project became
  an attack vector for a well-managed one.

* CVE-2015-7755 (a hardcoded administrative password in closed-source
router firmware) was ultimately identified by internal code
review[1]. In [2], there’s some discussion of how to find it by
analysing the binary. It’s hard to say whether it would have been
found earlier had the router firmware been open source; it surely
depends on the level of attention focused on the project.

* CVE-2015-7756 (a compromised RNG in the same firmware) was also
  identified by internal code review[1]. I think there’s some chance
  that this would have been fixed earlier in an open source project; the
  use of Dual EC DRBG would have been considered a red flag by anyone
  who had been paying attention since 2006[3]. Whether the broken X9.31
  PRNG would have been detected or replaced at the same time is hard to
  say though.

* If the alleged IPSec backdoor[4] ever made it into the OpenBSD tree
then it was never found. That would be an open source failure, but the
code was reviewed pretty thoroughly; it’s at least as likely, probably
more so, that it never existed or never got integrated into OpenBSD.

[1] https://supportportal.juniper.net/s/article/2015-12-Out-of-Cycle-Security-Bulletin-ScreenOS-Multiple-Security-issues-with-ScreenOS-CVE-2015-7755-CVE-2015-7756?language=en_US

[2] https://www.rapid7.com/blog/post/2015/12/20/cve-2015-7755-juniper-screenos-authentication-backdoor/

[3] http://rump2007.cr.yp.to/15-shumow.pdf

[4] https://en.wikipedia.org/wiki/OpenBSD#Alleged_backdoor
Post by b***@www.zefox.net
Fuzzing seems an obvious choice, but slow.
Fuzzing can find certain classes of defects but isn’t likely to detect
a cryptographically protected backdoor (a minimal sketch follows the
examples below).

* Fuzzing would not have found CVE-2024-3094. There is no way a fuzzer
could have synthesized a valid Ed448 signature without the attacker’s
private key.

* It’s at least conceivable that CVE-2015-7755 might have been found by
an adaptive fuzzer of some kind.

* Fuzzing would not have found CVE-2015-7756. Superficially everything
worked as it should; but VPN traffic could be decrypted by someone
with the attacker’s private key (‘e’ in the language of [5]).

[5] https://eprint.iacr.org/2016/376.pdf
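For a sense of what fuzzing does and doesn’t buy you, a minimal
random-input harness in Python might look like the sketch below;
parse_packet and its bug are hypothetical stand-ins for real code under
test. A harness like this can stumble onto parsing and bounds errors,
but it has essentially no chance of producing, say, a valid Ed448
signature, so it cannot reach a cryptographically gated backdoor.

    import random

    def parse_packet(data: bytes) -> bytes:
        # Hypothetical stand-in for the code under test; a real harness
        # would call into the target library instead.
        if len(data) < 2:
            raise ValueError("short packet")   # well-behaved rejection
        length = data[0]
        # Bug: trusts the length field without bounds checking.
        return bytes(data[1 + i] for i in range(length))

    def fuzz(iterations=100_000, seed=0):
        rng = random.Random(seed)
        crashes = []
        for _ in range(iterations):
            data = bytes(rng.randrange(256) for _ in range(rng.randint(0, 64)))
            try:
                parse_packet(data)
            except ValueError:
                pass                           # expected rejection
            except Exception as exc:           # anything else is a finding
                crashes.append((data, exc))
        return crashes

    if __name__ == "__main__":
        print(len(fuzz()), "unexpected exceptions found")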
Post by b***@www.zefox.net
It's understood that a deterministic solution is impossible, but it
would be interesting to know what approaches are practical and how
effective they are.
I think the most popular approach is object code analysis, probably
built on a lot of experience of typical backdoor strategies. Search
terms would be ‘reversing’ and ‘decompiling’.
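As a trivial example of what a first pass over object code looks like,
here is a Python sketch that pulls printable strings out of a binary
and flags the ones containing suspicious keywords; the keyword list is
invented. Reference [2] discusses how the ScreenOS password behind
CVE-2015-7755 was reportedly disguised as a debug format string, so a
naive keyword scan only goes so far; serious work means disassembly and
control-flow analysis.

    import re
    import sys

    # Printable-ASCII runs of at least 6 bytes, roughly what strings(1) reports.
    STRING_RE = re.compile(rb"[\x20-\x7e]{6,}")

    # Hypothetical watch-list; real analysts build these from experience.
    SUSPICIOUS = (b"passwd", b"password", b"backdoor", b"debug", b"authenticate")

    def scan(path):
        with open(path, "rb") as f:
            blob = f.read()
        for m in STRING_RE.finditer(blob):
            s = m.group()
            if any(k in s.lower() for k in SUSPICIOUS):
                print(f"0x{m.start():08x}  {s.decode('ascii')}")

    if __name__ == "__main__":
        scan(sys.argv[1])   # e.g. a firmware image dumped from the device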
--
https://www.greenend.org.uk/rjk/
b***@www.zefox.net
2024-10-01 04:01:29 UTC
Post by Richard Kettlewell
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware. It's a matter of personal curiosity inspired by the
exploding pager incident lately in the news and a call for banning
certain software developers. An obvious question is whether use of
open-source software is a meaningful help.
In principle, yes.
* CVE-2024-3094 (the xz/SSH backdoor) could in principle have been
detected by source code review, although in fact the hint that led to
its discovery was runtime behavior. The open source model didn’t help
here; in fact it hurt: an under-resourced open source project became
an attack vector for a well-managed one.
* CVE-2015-7755 (a hardcoded administrative password in closed-source
router firmware) was ultimately identified by internal code
review[1]. In [2], there’s some discussion of how to find it by
analysing the binary. It’s hard to say whether it would have been
found earlier had the router firmware been open source; it surely
depends on the level of attention focused on the project.
* CVE-2015-7756 (a compromised RNG in the same firmware) was also
identified by internal code review[1]. I think there’s some chance
that this would have been fixed earlier in an open source project; the
use of Dual EC DRBG would have been considered a red flag by anyone who
had been paying attention since 2006[3]. Whether the broken X9.31 PRNG
would have been detected or replaced at the same time is hard to say
though.
* If the alleged IPSec backdoor[4] ever made it into the OpenBSD tree
then it was never found. That would be an open source failure, but the
code was reviewed pretty thoroughly; it’s at least as likely, probably
more so, that it never existed or never got integrated into OpenBSD.
[1] https://supportportal.juniper.net/s/article/2015-12-Out-of-Cycle-Security-Bulletin-ScreenOS-Multiple-Security-issues-with-ScreenOS-CVE-2015-7755-CVE-2015-7756?language=en_US
[2] https://www.rapid7.com/blog/post/2015/12/20/cve-2015-7755-juniper-screenos-authentication-backdoor/
[3] http://rump2007.cr.yp.to/15-shumow.pdf
[4] https://en.wikipedia.org/wiki/OpenBSD#Alleged_backdoor
Post by b***@www.zefox.net
Fuzzing seems an obvious choice, but slow.
Fuzzing can find certain classes of defects but isn’t likely to
detect a cryptographically protected backdoor.
* Fuzzing would not have found CVE-2024-3094. There is no way a fuzzer
could have synthesized a valid Ed448 signature without the attacker’s
private key.
* It’s at least conceivable that CVE-2015-7755 might have been found by
an adaptive fuzzer of some kind.
* Fuzzing would not have found CVE-2015-7756. Superficially everything
worked as it should; but VPN traffic could be decrypted by someone
with the attacker’s private key (‘e’ in the language of [5]).
[5] https://eprint.iacr.org/2016/376.pdf
Post by b***@www.zefox.net
It's understood that a deterministic solution is impossible, but it
would be interesting to know what approaches are practical and how
effective they are.
I think the most popular approach is object code analysis, probably
built on a lot of experience of typical backdoor strategies. Search
terms would be ‘reversing’ and ‘decompiling’.
Hmm, that's a bit surprising. So it's easier to find suspicious code
by examining the compiled object files, rather than the source?

If I'm following correctly, the only practical assurance of security
in a hardware+software situation is to home-source both. Nothing bought
can be trusted?

If true, the proposition has substantial consequences.

Thanks for writing!

bob prohaska
Richard Kettlewell
2024-10-01 15:44:29 UTC
Post by b***@www.zefox.net
Post by Richard Kettlewell
I think the most popular approach is object code analysis, probably
built on a lot of experience of typical backdoor strategies. Search
terms would be ‘reversing’ and ‘decompiling’.
Hmm, that's a bit surprising. So it's easier to find suspicious code
by examining the compiled object files, rather than the source?
I’m not sure how you reach that conclusion. People analyse object code
because for the most part that’s what they have, not because it’s
necessarily easier than analysing source code.
--
https://www.greenend.org.uk/rjk/
Richard L. Hamilton
2024-10-18 21:04:32 UTC
Post by b***@www.zefox.net
Post by Richard Kettlewell
Post by b***@www.zefox.net
I'm looking for links to techniques for finding backdoors in software
and hardware. It's a matter of personal curiosity inspired by the
exploding pager incident lately in the news and a call for banning
certain software developers. An obvious question is whether use of
open-source software is a meaningful help.
In principle, yes.
* CVE-2024-3094 (the xz/SSH backdoor) could in principle have been
detected by source code review, although in fact the hint that led to
its discovery was runtime behavior. The open source model didn’t help
here; in fact it hurt: an under-resourced open source project became
an attack vector for a well-managed one.
* CVE-2015-7755 (a hardcoded administrative password in closed-source
router firmware) was ultimately identified by internal code
review[1]. In [2], there’s some discussion of how to find it by
analysing the binary. It’s hard to say whether it would have been
found earlier had the router firmware been open source; it surely
depends on the level of attention focused on the project.
* CVE-2015-7756 (a compromised RNG in the same firmware) was also
identified by internal code review[1]. I think there’s some chance
that this would have been fixed earlier in an open source project; the
use of Dual EC DRBG would have been considered a red flag by anyone who
had been paying attention since 2006[3]. Whether the broken X9.31 PRNG
would have been detected or replaced at the same time is hard to say
though.
* If the alleged IPSec backdoor[4] ever made it into the OpenBSD tree
then it was never found. That would be an open source failure, but the
code was reviewed pretty thoroughly; it’s at least as likely, probably
more so, that it never existed or never got integrated into OpenBSD.
[1] https://supportportal.juniper.net/s/article/2015-12-Out-of-Cycle-Security-Bulletin-ScreenOS-Multiple-Security-issues-with-ScreenOS-CVE-2015-7755-CVE-2015-7756?language=en_US
[2] https://www.rapid7.com/blog/post/2015/12/20/cve-2015-7755-juniper-screenos-authentication-backdoor/
[3] http://rump2007.cr.yp.to/15-shumow.pdf
[4] https://en.wikipedia.org/wiki/OpenBSD#Alleged_backdoor
Post by b***@www.zefox.net
Fuzzing seems an obvious choice, but slow.
Fuzzing can find certain classes of defects but isn’t likely to
detect a cryptographically protected backdoor.
* Fuzzing would not have found CVE-2024-3094. There is no way a fuzzer
could have synthesized a valid Ed448 signature without the attacker’s
private key.
* It’s at least conceivable that CVE-2015-7755 might have been found by
an adaptive fuzzer of some kind.
* Fuzzing would not have found CVE-2015-7756. Superficially everything
worked as it should; but VPN traffic could be decrypted by someone
with the attacker’s private key (‘e’ in the language of [5]).
[5] https://eprint.iacr.org/2016/376.pdf
Post by b***@www.zefox.net
It's understood that a deterministic solution is impossible, but it
would be interesting to know what approaches are practical and how
effective they are.
I think the most popular approach is object code analysis, probably
built on a lot of experience of typical backdoor strategies. Search
terms would be ‘reversing’ and ‘decompiling’.
Hmm, that's a bit surprising. So it's easier to find suspicious code
by examining the compiled object files, rather than the source?
If I'm following correctly, the only practical assurance of security
in a hardware+software situation is to home-source both. Nothing bought
can be trusted?
If true, the proposition has substantial consequences.
Thanks for writing!
It's worse than that even. See

Reflections on Trusting Trust

https://dl.acm.org/doi/pdf/10.1145/358198.358210

by Ken Thompson, the co-creator of Unix along with DMR.

It describes adding some code to the C compiler that will insert into
login.c a magic password for root power, and into a re-compile of the
compiler, the code needed to propagate itself and that login.c backdoor.

At that point, the source code can be put back to not have either bit of
code in it, but the compiler will continue to insert it into future
versions of itself, preserving the login backdoor. (at least until the
compiler or login.c source code changes to such an extent that it can't
recognize where to insert the extra code)

Since you need a compiler binary to get started building/rebuilding a
toolchain, even if the source code is ALL clean, you can't trust the end
result.
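
A toy Python sketch of the pattern follows (this is NOT Thompson's
code: the function and file names are invented, "compilation" is just
returning source text, and the self-replication step is only indicated
by a comment; the real version worked on C and carried a quine of its
own patch):

    # Toy illustration of the "trusting trust" pattern.

    LOGIN_PATCH = '    if password == "magic-backdoor": return True  # injected\n'

    def evil_compile(source: str) -> str:
        if "def check_password(" in source:
            # Target 1: the login program. Slip in a universal password
            # right after the function header.
            out = []
            for line in source.splitlines(keepends=True):
                out.append(line)
                if line.startswith("def check_password("):
                    out.append(LOGIN_PATCH)
            return "".join(out)
        if "def compile(" in source and "injected" not in source:
            # Target 2: a clean compiler. A real attack would splice its
            # own patching logic back in (quine-style) so the backdoor
            # survives rebuilding the compiler from pristine source.
            return source + "\n# evil_compile patch re-attached here\n"
        return source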


That was published in 1984, from Thompson's 1983 Turing Award lecture,
and hints that some of the idea may have existed even earlier. These
days such a thing could be smarter and survive longer.

If you had two different compilers of independent origin, full source
for both, and some way of checking that the resulting binaries were, if
not identical bit-for-bit, at least functionally identical, then you
might be OK. And of course you can spot non-obfuscated strings in a
binary, but if they are obfuscated cleverly, it will take a lot of
reverse engineering to spot them.
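
That comparison idea was later worked out formally as "diverse
double-compiling" (David A. Wheeler). A rough Python sketch of the
procedure, with all compiler names, paths, and flags as placeholders
rather than a specific toolchain's options:

    import subprocess

    def build(compiler, source, output):
        # Compiler paths and flags are placeholders, not a real toolchain's.
        subprocess.run([compiler, source, "-o", output], check=True)

    # Stage 1: build the compiler-under-test from its own source, once with
    # the suspect compiler and once with an independent, trusted one.
    build("./suspect-cc", "cc-source.c", "stage1-suspect")
    build("./trusted-cc", "cc-source.c", "stage1-trusted")

    # Stage 2: use each stage-1 result to build the same source again. If
    # both stage-1 compilers faithfully implement cc-source.c, the stage-2
    # outputs are functionally identical; with a fully deterministic build
    # they are bit-for-bit identical and can simply be compared.
    build("./stage1-suspect", "cc-source.c", "stage2-a")
    build("./stage1-trusted", "cc-source.c", "stage2-b")

    with open("stage2-a", "rb") as a, open("stage2-b", "rb") as b:
        print("identical" if a.read() == b.read() else "differ: investigate")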

Decompiling, good luck with that. Disassembling, yes; but decompiling would
depend on knowing the exact idioms of the exact version of the compiler and
relevant options (optimization, etc) used. Not a bat's chance in hell of
getting that to work consistently.

Beat the crap out of it in an isolated, throwaway VM, not to mention
running valgrind, purify, and anything else like that you can? Still no
guarantee you get it all. For all I know, proving something complex enough
is secure might be like a general solution to the halting
problem...impossible.

If you do everything yourself on an isolated network, and don't import
anyone else's code (open source or not), then maybe. But there's always the
insider threat...

As long as tinfoil hats are not involved (but Faraday cages around your
computer room might make sense), there's no such thing as too much
paranoia. :-)

That said, a computer buried in concrete so it can't run is secure. But
it's not useful. You have to decide what level of risk reduction is worth
the eventually increasing cost for diminishing returns.
b***@www.zefox.net
2024-10-18 23:37:24 UTC
Post by Richard L. Hamilton
It's worse than that even. See
Reflections on Trusting Trust
https://dl.acm.org/doi/pdf/10.1145/358198.358210
by Ken Thompson, the co-creator of Unix along with DMR.
It describes adding some code to the C compiler that will insert into
login.c a magic password for root power, and into a re-compile of the
compiler, the code needed to propagate itself and that login.c backdoor.
At that point, the source code can be put back to not have either bit of
code in it, but the compiler will continue to insert it into future
versions of itself, preserving the login backdoor. (at least until the
compiler or login.c source code changes to such an extent that it can't
recognize where to insert the extra code)
Since you need a compiler binary to get started building/rebuilding a
toolchain, even if the source code is ALL clean, you can't trust the end
result.
That was published in 1984, from Thompson's 1983 Turing Award lecture,
and hints that some of the idea may have existed even earlier. These
days such a thing could be smarter and survive longer.
If you had two different compilers of independent origin, full source
for both, and some way of checking that the resulting binaries were, if
not identical bit-for-bit, at least functionally identical, then you
might be OK.
And of course you can spot non-obfuscated strings in a binary, but if they
are obfuscated cleverly, it will take a lot of reverse engineering to spot
them.
Decompiling, good luck with that. Disassembling, yes; but decompiling would
depend on knowing the exact idioms of the exact version of the compiler and
relevant options (optimization, etc) used. Not a bat's chance in hell of
getting that to work consistently.
Beat the crap out of it in an isolated, throwaway VM, not to mention
running valgrind, purify, and anything else like that you can? Still no
guarantee you get it all. For all I know, proving something complex enough
is secure might be like a general solution to the halting
problem...impossible.
If you do everything yourself on an isolated network, and don't import
anyone else's code (open source or not), then maybe. But there's always the
insider threat...
As long as tinfoil hats are not involved (but Faraday cages around your
computer room might make sense), there's no such thing as too much
paranoia. :-)
That said, a computer buried in concrete so it can't run is secure. But
it's not useful. You have to decide what level of risk reduction is worth
the eventually increasing cost for diminishing returns.
A fascinating article, entirely new to me.

Have any examples of such a trojan been found, or even suspected?
After forty years, one might expect to see it "in the wild" or at
least see plausible consequences if it's viable in practice.

Thank you!

bob prohaska
Richard L. Hamilton
2024-10-19 02:20:04 UTC
Post by b***@www.zefox.net
Post by Richard L. Hamilton
It's worse than that even. See
Reflections on Trusting Trust
https://dl.acm.org/doi/pdf/10.1145/358198.358210
by Ken Thompson, the co-creator of Unix along with DMR.
It describes adding some code to the C compiler that will insert into
login.c a magic password for root power, and into a re-compile of the
compiler, the code needed to propagate itself and that login.c backdoor.
[...]
Post by b***@www.zefox.net
A fascinating article, entirely new to me.
Have any examples of such a trojan been found, or even suspected?
After forty years, one might expect to see it "in the wild" or at
least see plausible consequences if it's viable in practice.
Thank you!
I gather he ran it briefly on some system(s) he administered, so it
definitely worked. It wouldn't propagate unless someone copied the
gimmicked compiler binaries.

The source code for the changes is out there. Of course, compilers
have changed a lot since then, and the big open source compilers (gcc,
clang) have a lot of eyes on their source, and the binaries you start
rebuilding the toolchain with are probably built by "trusted" people
and accompanied by GPG signatures. Which is not to say that it
couldn't be done. Think about how gradually the xz-based backdoor was
introduced; although that did have far fewer eyes on it.

While I've had interesting enough connections to have had a reasonably
high level of general awareness expected of me, that did not extend to
the point of either using such things or front line defense against
them. So I couldn't say that something like it hasn't appeared. If in
doubt, I would assume that all known possible threats and plenty of
unknown ones existed.

Consider if someone makes changes to a web-based source control system
(git, hg, etc.) or publishing software (WordPress, etc.); that could do
something interesting to a lot of people's code, or to a lot of
generated web pages. Doubtless a few such things have happened,
although I don't recall specifics.
