BIOS hacked; bury it! ..?

Over the past year (five Internet years), we have seen regular reports about the ‘hacking’ of BIOSes. Through which everything we can muster for information security is nullified at this lowest conceivable level, in the form of unknown, unlocked backdoors.

After a week or so, the news value usually drops and we hear very little more. Mainly because this is such a deep-into-the-technology issue, a showcase of a class break, that it hardly seems worthwhile to think about solutions. The perpetrators, usually considered to be agencies of powers-that-be of Western, Eastern or anything-in-between origin, are seen mainly to fight each other, and we can only be mangled in between, can’t we? The one taints the BIOS on the chips that the other installs, and the other does the same the other way around. And how long ago is it, really, since you yourself were fiddling around in the BIOS with some assembler or even lower-level code ..?

However, and this is similar to laws against crypto (as e.g., here), those with bad intent may use the backdoors just as readily as the good guys (the above, who work their a’s off for your privacy, right?) might. And don’t we all want to remain at least a little in control ..?

Hence the question: what would be the roadblocks against a solution that ‘isolates’ a possibly tainted BIOS ..? I’m thinking here of some form of inverse, upside-down sandbox: one that isolates and screens all messaging from and to the BIOS, and filters out everything malicious or unauthorized.
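Purely as a thought experiment, a minimal sketch of what the filtering core of such an inverse sandbox might look like, assuming (a big assumption) that traffic from and to the BIOS could even be intercepted as discrete messages; the message types, the allowlist and the size limit below are all invented for illustration:

```python
# Thought-experiment sketch of an 'inverse sandbox' filter that sits between
# the BIOS and the rest of the system and only passes allowlisted messages.
# The message model is invented; real firmware traffic is nothing this tidy.

from dataclasses import dataclass

ALLOWED_TYPES = {"temperature_report", "fan_speed", "boot_status"}  # invented allowlist
MAX_PAYLOAD_BYTES = 64  # invented limit: big blobs are suspicious for these types

@dataclass
class BiosMessage:
    msg_type: str
    payload: bytes

def screen(msg: BiosMessage) -> bool:
    """Return True if the message may pass, False if it should be dropped."""
    if msg.msg_type not in ALLOWED_TYPES:
        return False          # unknown functionality: block
    if len(msg.payload) > MAX_PAYLOAD_BYTES:
        return False          # oversized payload: block
    return True

if __name__ == "__main__":
    benign = BiosMessage("fan_speed", b"\x10")
    shady = BiosMessage("flash_write", b"\x00" * 512)  # say, a re-flash attempt
    print(screen(benign), screen(shady))  # True False
```

Trivial to write down; deciding what belongs on that allowlist is the whole difficulty.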

This calls for clear and complete rules about what is generally good and normal, and what is naughty. We might solve that with checksums, hashes over extensive lists of functionality we would allow (a sketch of this follows below). But who calculates those checksums, and how reliable is the baseline when even off-the-dock OEM stuff may already contain malware in its BIOS; whom can you trust ..? And all that whitelisting: isn’t there a huge context dependency, where superficially trustworthy messages are in effect malicious? With a sandbox we put the problem at a somewhat higher, more insightful level, albeit also somewhat ‘higher’ in the architecture, which raises the question whether the sandbox is sufficient and isolates completely, all around. And the sandbox has to run on … the chip with (support of the) BIOS…
This creates an arms race where the bad guys (unsafe-BIOS-wanters) will try to make the BIOS circumvent or dig through the sandbox, and where the good guys will have to build repairs, patches and new versions to plug the ever-new leaks. Looks almost like the information security we all know already.
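To make the checksums-and-hashes idea from above a little more concrete (sketched, not solved): a minimal example of verifying BIOS modules against a whitelist baseline of SHA-256 digests, where everything, module names, reference bytes, and the very existence of a clean reference image, is assumed for illustration:

```python
# Hypothetical sketch: verify BIOS modules against a baseline ("whitelist")
# of known-good SHA-256 hashes. The code is trivial; the real problem the
# post points at is the provenance of the baseline: who computed it, and
# was the reference image itself clean?

import hashlib

def build_baseline(reference_modules: dict[str, bytes]) -> dict[str, str]:
    """Hash a set of (assumed clean) reference modules into a baseline."""
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in reference_modules.items()}

def verify_module(baseline: dict[str, str], name: str, blob: bytes) -> bool:
    """Accept a module only if its hash matches the trusted baseline."""
    expected = baseline.get(name)
    if expected is None:
        return False  # not on the whitelist at all: reject
    return hashlib.sha256(blob).hexdigest() == expected

if __name__ == "__main__":
    # Pretend these bytes came from a clean, trusted reference image.
    reference = {"boot_block": b"\x55\xaa" + b"\x00" * 14}
    baseline = build_baseline(reference)

    print(verify_module(baseline, "boot_block", reference["boot_block"]))     # True
    print(verify_module(baseline, "boot_block", b"\x55\xaa" + b"\x90" * 14))  # False: tampered
    print(verify_module(baseline, "mystery_blob", b"anything"))               # False: unknown
```

The whole weight rests on that ‘assumed clean’ in the comments, which is exactly the point.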

And here, too, the question is: whom can we trust? The sandbox should be made and maintained by utterly independent experts. Do we know these Lone Wolves well enough, how do we establish the sufficiency of their technical expertise, what are their interests, aren’t they secretly (and I’m thinking double secrets here, too) bribed or coerced to let that one agency in despite it all? And how do we know that the patches we receive are reliable? If we can’t trust the most mundane of apps or app stores, how can we be sure not to download an infected sandbox?
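And even signed patches only move the question along. A minimal sketch of verifying a downloaded sandbox update against its maintainer’s Ed25519 signature, assuming the third-party cryptography package and a key pair invented on the spot for the demo; the mathematics will happily check out, yet it says nothing about whether the holder of the private key was independent, bribed or coerced:

```python
# Hypothetical sketch: check a downloaded 'sandbox patch' against the
# maintainer's Ed25519 signature (uses the third-party 'cryptography' package).
# The signature check is mechanical; trust in the signer is not.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def verify_patch(public_key_bytes: bytes, patch: bytes, signature: bytes) -> bool:
    """Return True only if 'patch' was signed by the matching private key."""
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, patch)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Demo only: invent a maintainer key pair and sign a pretend patch.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    patch = b"sandbox-update-v2"
    signature = private_key.sign(patch)

    print(verify_patch(public_bytes, patch, signature))            # True: signature checks out
    print(verify_patch(public_bytes, patch + b"\x00", signature))  # False: patch was altered
```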

In short, the simple question of the feasibility of a sandbox over the BIOS to keep things safe ushers in a surge of new questions; but those are all questions we already have at other, ‘higher’ levels of security: are the patches of the applications we have reliable? What about the antimalware software we deploy (yeah, bring in the ‘haha’)? The employees and contractors of our Managed Security Provider (whom we chose, NB, as the lowest-cost supplier)?

But also for this reason, my question is: what am I missing? Are there principled, logical fallacies here, or is it just a matter of (tons of) effort that we put in, or should be prepared to put in?

Dazzling, all of it. Hence, for relaxation:
[Photo 20150911_155809; for the people living here, rather Mehhh of course]
