Geoff Chappell - Software Analyst
INTRODUCTION IN OCCASIONAL PROGRESS: There always seem to be more interesting things to do than write introductions.
There was a time, round about 1999, when I entertained some hope that the growing industry that specialises in computer security might welcome methods for studying software without having to execute it. Especially at the occasional suggestion by Richard M. Smith of problems that might benefit from more than passing attention, I investigated a few threats, mysteries and abuses as conscientiously as I would for commercial work.
In the decades since, I have at best dabbled at computer security—arguably not even enough to count as a hobbyist. I am, first and foremost, a programmer in spirit, even if my time in practice is directed far more at reverse engineering. It was my good fortune, or not, to realise very early that if you write code that interacts usefully with an operating system then you will end up stepping through the operating system in the debugger, and so you might better do the debugging in advance of writing anything. Add that the operating system is in various parts mis-documented, under-documented and completely undocumented, and your debugging in advance gets you into a virtuous spiral of discovering more interaction that’s useful for more writing. To think of this debugging in advance as reverse engineering was only natural at the time, and still is, but it is not reverse engineering as it has since become known in computer security.
A few times around 2010, a virus or worm came to my attention and I looked to see if there was something that I, as a highly proficient reverse engineer of Windows, might usefully add to what had been published by what was then just the beginnings of an industry of security researchers who reverse-engineer malware. Typically there never was much to add, but sometimes there was much more to add than there ought to have been.
The Clampi Trojan in 2009 had come to my attention only through reading a national newspaper. It had by then been picked over by malware experts for many months, but excuses that key parts of it were disguised too cleverly for so-called static analysis were a challenge I couldn’t resist. That I came late to the party was even more true of the Stuxnet worm a year later, and yet I think I can fairly say that my write-up of one of Stuxnet’s kernel-mode drivers was the first that was anywhere near definitive, and that without my take on an exploited Control Panel bug, Microsoft’s knowingly dishonest excuse of the bug as a parsing error might never have been contested. For the Aurora attack on Google and others in late 2009, I was almost current: though I didn’t learn of the attack until mid-January 2010, I had a detailed explanation of the relevant Internet Explorer bug published before Microsoft had released its fix.
Though these were all written by very much an outsider, starting from no particular knowledge of exploiting vulnerabilities, none of them is slight and some are surely works that any professional in security research might be proud of. Ask Google, today, 3rd August 2021, for pages that contain Aurora and createEventObject, i.e., the name of the method that’s exploited, and mine is top. That need say nothing about quality, of course, but it is some measure that my dabblings in malware analysis are not easily disregarded just for being non-professional.
Years later, separately from malware analysis, I decided to treat a practical matter of kernel-mode driver installation as a security issue. For this, I took my cue from Microsoft: everything that Microsoft has ever written about signatures for kernel-mode drivers in Windows 10 would have it that tightening was required for computer security. The official line is that drivers must be signed by Microsoft and administrators cannot be trusted to override this. But if administrators can in fact override it by setting undocumented registry values, then Microsoft has permitted a hole in security by the standards that Microsoft itself chose to set for kernel-mode driver signing. Thus is my article on Back Doors for Cross-Signed Drivers here among my dabbler’s Notes on Computer Security despite being written by a professional programmer of kernel-mode drivers.
Possibly just from realising during the pandemic year that I had spent half my life studying Microsoft’s operating systems (DOS and then two types of Windows) as concurrently both a programmer and reverse engineer, I found myself asking why decades of reverse engineering Windows (by me but by now very, very much more by others) have not been accompanied by a wealth of academic inquiry into Windows.
Windows is by far the most sophisticated, most substantial software that has seen the widest use for the longest time, yet it’s hardly known in academic study. How is Windows designed and programmed? How has it affected the development of computer programming? What have been its effects on the increasingly pervasive integration of computing into wider society? Governments seem set on another round of anti-trust and other investigations into today’s big tech for dominance of the Internet and for the impact of social media, but they have conspicuously little from academic study to inform them of what, if anything, was achieved by the last round with Microsoft about Windows.
Among the contributing factors, the stand-out is that Windows is closed-source. Where academic study would go deeper for open-source software, it is instead cut off, first by the unavailability of source code but then by a perception that, however substantial the non-academic study of Windows, it is all in various ways unsuitable as a substitute for source code.
WRITING IN PROGRESS
I particularly like looking into abuses because a recurring theme of my interest in software is consumer protection. The software industry—here taken in its widest sense as including not just manufacturers but researchers—takes advantage of consumers, mostly because it can. Much of this is not deliberate and is even relatively innocent. With so much commercial pressure for mass production of what is essentially still a hand-crafted product, some slippage in rigour is only to be expected all round. Even with all the ideals that one might want for precision and vigilance, outcomes are inevitably not at the standard that they might be. Errors and vagueness at this website are testament to that!
Yet everyone must suspect that sometimes there is more to it. There is just so little risk of being caught. As much as it is human nature to slack off, it is also human nature to see an opportunity and exploit it. Even if errant behaviour in the product is demonstrated beyond dispute, software companies say what they want by way of euphemism, excuses and even outright denial. And sometimes, perhaps not often, but certainly sometimes, they actually do plan a mischief—and plan to get away with it through the euphemism, excuses and denial that work so well even when not planned.
Dishonest denial certainly applied in the two cases I investigated in 1999. Perhaps too satisfyingly neatly, one was a problem of security, the other of privacy.
Though both these investigations were fun, and even seemed important at the time, only one ever got written up for my old website, and I have updated it here: America Online Exploits Bug in Own Software. As suggested by the title, the mischief in this case was very much planned. Rather than correct a known bug, America Online (AOL) devised a way to exploit it for Remote Code Execution on old versions—and did not hurry to fix the bug in new versions.
The other was at least as interesting but was written up by Richard: The RealJukeBox monitoring system. Its controversial disregard of privacy even made it to the New York Times: CD Software Is Said to Monitor Users’ Listening Habits. While the software conveniently went to the Internet to get track listings for an inserted CD, it also sent an identifier that had first been sent at installation, when it was accompanied by a light encryption of information that the user had been encouraged to enter as product registration. This reuse of the identifier gave the recipient the means of learning something that music publishers had until then only dreamed of knowing: a customer bought this CD, but how often do they play it? Of course, the software manufacturer denied everything. Even after I decrypted for Richard the personally identifying information that had been sent with his registration (Richard having, of course, kept packet captures), the manufacturer still had excuses! That any of this was any concern to anyone then looks at best quaint now, but the case is surely instructive for anyone who wants to understand how we got to a world where so many so blithely trade away their privacy.
Both these cases show a software manufacturer caught in a lie, denying some alleged behaviour that compromised a computer’s security in one case and its user’s privacy in the other. To my knowledge, neither manufacturer was ever called to account for this dishonesty in any meaningful way—and the AOL spokesperson at the time apparently continued very successfully at AOL for at least a decade.
Having seen such abuse up close and, as importantly, seen that nobody cares much to stop it, I was then not much interested in computer security for many years. My specialty is in the efficient extraction of the last details that anyone might conceivably want for proof. In an industry where that goes nowhere, my skills and interests just don’t make a good fit.
An investigation for computer security is primarily concerned with identifying a threat, having people confirm it by reproducing the observations, and then devising some means to defeat the threat or at least deflect it. Though it seems to be everyone’s habit to throw around words like “in-depth” and “comprehensive”, the fact is that commercial interests just don’t run as far as getting the sort of detailed explanation that I aim to produce. Instead, on seeing something bad, a security company builds recognition of it into the next round of security products, or brings the loophole to the attention of whoever makes the susceptible program or operating system, and they build a solution into their next version. Either way, upgrading is encouraged, the something bad has been turned into something good, and everyone moves on.
To me, this does not seem an entirely commendable process. It may be the best that is practicable with the resources that are most readily to hand, but it also smacks of convenience in matters that are rife with conflicts of interest. Of course, I am a self-interested agitator and I also have to admit that I nowadays feel confirmed in what I used to think were merely prejudices. Without going so far as saying that anti-virus manufacturers, etc., play both sides of the fence, I can’t help noting that the natural symbiosis between those who threaten and those who would defend is unnaturally strong when it comes to computer software.
When it comes to abuses, the part of the software industry that devotes itself to computer security is arguably worse than the industry as a whole, because it justifies its own bad behaviour as necessary compromises in a good cause. Indeed, they don’t so much justify it as take it for granted or overlook it, or anyway never admit that there might be reasonable concerns about what they do.