Joining Apple

I’m pleased to announce that I’ve accepted a position with Apple’s Security Engineering and Architecture team, and am very excited to be working with a group of like-minded individuals so passionate about protecting the security and privacy of others.

This decision marks the conclusion of what I feel has been a matter of conscience for me over time. Privacy is sacred; our digital lives can reveal so much about us – our interests, our deepest thoughts, and even who we love. I am thrilled to be working with such an exceptional group of people who share a passion to protect that.

Attacking the Phishing Epidemic

As long as people can be tricked, there will always be phishing (or social engineering) on some level or another, but there’s a lot more that we can do with technology to reduce the effectiveness of phishing, and the number of people falling victim to common theft. Making phishing less effective ultimately increases the cost to the criminal and reduces the total payoff. Few would dispute that our existing authentication technologies are stuck in a time warp, with some websites still using standards that date back to the 1990s. Browser design hasn’t changed much since the Netscape days either, so it’s no wonder many people are so easily fooled by website counterfeits.

You may have heard of a term called the line of death. It describes the separation between the trusted components of a web browser (such as the address bar and toolbars) and the untrusted components, namely the browser window. Phishing is easy because this line is a farce. We allow untrusted elements into the trusted parts of the window (such as a favicon, which can display a fake lock icon), tolerate financial institutions that teach users to accept any variation of their domain, and use a tiny monochrome font that makes URLs easy to mistake for one another, even when users are paying attention to them. Worse still, it’s in the untrusted space that we’re telling users to conduct the trusted operations of authentication and credit card transactions – the untrusted website portion of the web browser!

Our browsers are so awful today that the very best advice we can offer everyday people is to try to memorize all the domains their bank uses, and to get a pair of glasses to look at the address bar. We’re teaching users to perform trusted transactions in a piece of software that has no clear demarcation of trust.

The authentication systems we use these days were designed to conduct secure transactions with anyone online, without knowing who they are. But most users today know exactly who they’re doing business with; they do business with the same organizations over and over. Yet to the average user, a URL or an SSL certificate with a slightly different name or fingerprint means nothing. The average user relies on the one thing we have no control over: what the content looks like.

I propose we flip this on its head.

When Apple released Apple Pay on the Web, they did something unique, but it wasn’t the payment mechanism that was revolutionary to me – it was the authentication mechanism. It’s not perfect, but it has some really great concepts that I think we can, and should, adopt into browser technology. Let’s break down the different concepts of Apple’s authentication design.

Trusted Content

When you pay with Apple Pay, a trusted overlay pops up over the content you’re viewing and presents a standardized, trusted interface to authenticate your transaction. Having a trusted overlay is completely foreign to how most browsers operate. Sure, HTTP authentication can pop up a window asking for a username and password, but this is different. Safari uses an entirely separate component with authentication mechanisms that execute locally, not as part of the web content, and that the web page can’t alter. Some of these components run in a separate execution space from the browser, such as the Secure Element on an iPhone. The overlay itself is code running in Safari and the operating system, rather than under the control of the web page.

A separate, trusted user interface component should be unmistakable to the user; the problem is that many such components can be spoofed by a cleverly designed phishing site. The goal here is to create a trusted compartment for the authentication mechanism to live in, one that extends beyond the capabilities of what can typically be done in a web browser. Granted, overlays and even separate windows can be spoofed, so creating a trusted user interface is no easy task.

Trusted Organization

From the user’s perspective, it doesn’t matter what the browser is connecting to, only what the web page looks like. One benefit Apple Pay has over typical authentication is that, because the code for it lives outside of the web page, it has control over what systems it connects to, which certificates it’s pinned to, and how that information gets encrypted. We don’t really have this with web-based authentication mechanisms. The phishing site might have no SSL at all, or might use a spoofed certificate. The responsibility of authenticating the organization is left to the user, which was always an awful idea.

Authenticating the User Interface

Usually when you think about an authentication system, you think about the user authenticating with the website. With Apple Pay, before that happens, the Apple Pay system first authenticates itself to the user, to demonstrate that it’s not a fake.

In the case of Apple Pay, the overlay displays your various billing and shipping addresses and credit cards on file: sensitive information that Apple knows, but a phishing site won’t. Some of this is stored locally on your computer so that it’s never transmitted.

We’ve seen less effective versions of this with “SiteKey”, sign-on pictures, and so on, but those can easily be proxied by a man-in-the-middle, because the user is relying on the malicious website itself to perform the authentication. In Apple’s model, Apple code performs the authentication completely irrespective of what content is loaded into the browser.

No Passwords Transmitted

The third important component of Apple Pay to note is that passwords aren’t being sent; in fact, they aren’t being entered at all. There’s nothing to scam the user out of except for some one-time-use cryptograms that aren’t valid for any other purpose. While TouchID is cool, there are also a number of other password-less authentication mechanisms you can deploy once you’re executing in trusted execution space.

One of the most common forms of password-free authentication is challenge/response. C/R authentication has been around for a long time; it allows legacy systems to continue using passwords, but greatly reduces the risk of interception by never sending the password. As much of a fan as I am of biometrics fused with hardware, that approach isn’t very portable. That is, I can’t just jump on my friend’s computer and pay for something with Apple Pay without reprovisioning it.

Let’s assume that the computer has control over the authentication mechanism, instead of the website. The server knows your password, and so do you. The server can derive a cryptographic challenge based on that password, and your computer can compute the proper response based on the password you enter. Challenge/response can be done many different ways; even the ancient Kerberos protocol supported cryptographic challenge/response. That secure user interface can flat out refuse to send your password anywhere, and so a phishing site would have to convince the user to type it not just into a different site, but into a completely different authentication mechanism that they’ll be able to identify as different. Sure, some people are gullible enough to fall for this, but far fewer than are gullible to a perfect copy of a website. That small percentage of gullible people is a much smaller problem to manage.
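To make this concrete, here’s a minimal sketch of one such round trip, with the response computed by the trusted local component rather than by the web page. This illustrates the general technique, not any particular protocol; a real design would likely use a modern PAKE, but the core idea fits in a few lines:

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # Never use the raw password as a key; stretch it first
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Server side: a fresh random challenge for this login attempt
salt = os.urandom(16)       # stored with the account at enrollment
challenge = os.urandom(32)  # never reused

# Client side (the trusted browser component): prove knowledge of the
# password by MACing the challenge -- the password itself never leaves
# the machine
response = hmac.new(derive_key("hunter2", salt), challenge,
                    hashlib.sha256).digest()

# Server side: compute the same value independently, compare in constant time
expected = hmac.new(derive_key("hunter2", salt), challenge,
                    hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```

A man-in-the-middle who captures this exchange gets a response that’s valid only for that one challenge, which is worthless anywhere else.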

Why don’t we use challenge/response on web pages today? For one, because we’re still authenticating in untrusted space (the browser window). The user has no idea (and doesn’t care) what happens to their password when they type it into some web browser window, and it’s just as easy to phish someone no matter what authentication mechanism you’re using in the background. What makes this feasible now is that in our ideal model, we’re doing authentication in trusted execution space – space that’s independent of the web page. This changes the game. Take the Touch Bar, for example: TouchID is authenticated on the Touch Bar, and password entry from the web browser could be authenticated on it as well.

An Optimal Authentication Mechanism

The ultimate goal is to condition the user to a standardized interface that can both authenticate the validity of the resource and authenticate itself to the user, before the user is willing to accept its legitimacy and input a password.

Conditioning the User

A user interface element that is very difficult to counterfeit can also be quite difficult to create, but the benefits are considerable: if someone spends enough time around real money, they’ll be able to spot a counterfeit with a much higher success rate. On the other hand, having to look at a dozen different, poorly implemented authentication pages conditions users to accept anything they see as real.

Our ideal authentication mechanism has an unmistakable and unreproducible user interface element. The user visits a website requiring authentication, and that website includes the necessary tags to invoke the browser’s authentication code, which executes separately. Regardless of the website, this standardized authentication component is activated with a standard look, as a trusted component of the browser. In its plainest form, this could easily be an overlay that appears over the portion of the web browser that’s out of reach of the website (e.g. the address bar area). Get a bit fancier, and we’re talking about incorporating the Touch Bar or other “out of band” mechanisms on equipped machines to notify the user that an authentic authorization is taking place.

Get the user used to seeing the same authentication mechanism over and over again, and they’ll be able to spot cheap counterfeits far more easily. Needle moved.

Authenticating the User Interface

The user interface itself needs to be authenticated with the user in ways that make cheap knockoffs stand out. Since the browser controls this, and not the website itself, we can do a number of different things here:

  • Display the user’s desktop login icon and full name in the window.
  • Display personal information specified by the user when the browser is first set up; e.g. “show me my first card in Apple Pay” or “show me my mailing address” whenever I am presented with an authentication window.
  • Display information in non-browser areas, such as the Touch Bar on equipped devices; change the system menu bar to blue or green; or present other visual cues not accessible to a web browser.
  • Provide buttons that interact with the operating system in a way that a browser can’t (one silly example would be to invert the colors of the entire screen when held down).
  • Suspend and dim the entire browser window during authentication.

Authenticating the Resource

Authenticating the resource that the user is connecting to is one of the biggest challenges in fighting phishing. How do you tell users that they’re connecting to a potentially malicious website without knowing what that website is? We’re off to a good start by executing code locally (rather than remote website code) to perform the authentication. Because of this, we can do a few interesting things that we couldn’t do before:

  • We can validate that the destination resource is using a valid SSL certificate. Granted, this can be spoofed; however, it also increases the cost of running a phishing site, not just in dollars, but in the amount of time required to provision new SSL certificates versus the amount of time it takes to add one to a browser blacklist.
  • We can automatically pin SSL certificates to specific websites when the user first enrolls their account, and keep track of websites they’ve set up authentication with, so that we can warn them when asked to authenticate on a website they never enrolled on (see the sketch after this list).
  • Existing blacklists and whitelists can now be tied to SSL certificate information, allowing us to make better automated determinations on the user’s behalf.
  • We can share all of this information across all of the user’s devices e.g. via iCloud, Firefox’s cloud sync, and so on, to make it portable.
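As a rough sketch of the enrollment-time pinning idea above: the example below pins the whole certificate by SHA-256 fingerprint for simplicity, where a real implementation would more likely pin the public key and handle legitimate certificate rotation:

```python
import hashlib
import ssl

pins = {}  # persistent pin store; could be synced across the user's devices

def cert_fingerprint(host: str, port: int = 443) -> str:
    # Fetch the server's certificate and hash its DER encoding
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def enroll(host: str) -> None:
    # Pin whatever certificate the site presents at first enrollment
    pins[host] = cert_fingerprint(host)

def verify(host: str) -> str:
    if host not in pins:
        return "never enrolled here -- warn the user"
    if cert_fingerprint(host) != pins[host]:
        return "certificate changed since enrollment -- warn the user"
    return "matches the enrolled pin"
```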

Other, more elaborate things we can do with the protocol might include storing a cached copy of an icon provided by the website when the site is first provisioned, giving the user a visual cue. In order for a phishing site to copy that visual cue, the user would have to step through a very obvious enrollment process that is designed to look noticeably different from the authentication process. Icons for any previously unknown sites could display a yellow exclamation mark or similar, to warn the user. In other words, that piece of content can only be displayed by websites the user has previously set up, because we’re in control of that content in local code.

We can also do some things that we’re doing now, but better. For example, we can display the organization name and website name very clearly in our trusted window, in large text, perhaps with additional visual cues, such as underlining similarities to other websites (e.g. PayPai.com) in red, and highlighting numbers in red (e.g. PayPa1.com). There’s no other content to distract the user now, because this is all happening in a trusted overlay, presumably with the browser window dimmed.
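Here’s a sketch of how that kind of highlighting might be driven. The confusables map is deliberately tiny; a real implementation would cover whole Unicode homoglyph ranges and punycode tricks:

```python
CONFUSABLES = {"1": "l", "0": "o", "5": "s"}  # tiny illustrative map

def normalize(domain: str) -> str:
    # Fold look-alike characters onto the letters they imitate
    return "".join(CONFUSABLES.get(c, c) for c in domain.lower())

def edit_distance(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def flags(domain: str, enrolled: list) -> list:
    out = [f"digit {c!r} should be highlighted in red"
           for c in domain if c.isdigit()]
    for known in enrolled:
        if domain.lower() != known.lower() and \
           edit_distance(normalize(domain), normalize(known)) <= 2:
            out.append(f"resembles enrolled site {known}")
    return out

print(flags("paypa1.com", ["paypal.com"]))
# ["digit '1' should be highlighted in red", 'resembles enrolled site paypal.com']
```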

The user will still receive warnings when authenticating on someone else’s computer, and this is a good thing. The idea is to draw attention to the fact that you’re conducting a non-standard transaction and could potentially be giving out your credentials.

The goal with all of this is to remove the website content as the authenticating component. What a site looks like is the #1 visual element the end user relies on to determine the legitimacy of a website. What I am suggesting is to dim that content completely and force the user to focus on some very specific information and warnings.

Authentication With and Without Passwords

To improve upon our ideal authentication mechanism, we can deploy better authentication protocols. Sending passwords back and forth can be eliminated entirely as a function of this mechanism, and websites adopting it present a great opportunity to force better protocol alternatives. Where possible, password authentication can be removed completely in favor of biometrics.

Two-Factor Authentication can be phished, but requiring it at enrollment (either by SMS, email, or authenticator) can dramatically limit a victim’s exposure to phishing. Requiring a secondary form of authentication for any passworded mechanisms will certainly diminish the success rate of a phish, and also increase the cost, requiring the man in the middle to be present and able to log in at that very moment.
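For reference, the rotating codes that authenticator apps generate are typically TOTP (RFC 6238), and the whole scheme is small enough to sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # MAC the current time-interval counter with the enrolled shared secret
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation, per RFC 4226
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret; new code every 30 seconds
```

Because each code is derived from a shared secret and the clock, a phisher who captures one has only seconds to replay it, which is why even this simple second factor raises the attacker’s cost so much.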

For passworded authentication, challenge/response using cryptographic challenges can be enforced, because we are running local code, not website code. Once you’ve resolved that this standard will not support sending passwords in any way, shape, or form, you’ve reduced the transit attack surface significantly.

Conclusion

The overall benefit of an authentication mechanism that executes locally as a component of the browser (and potentially the operating system), rather than as a component of the website, is significant. It would mean the standardization of user interface components, protocol and security elements, and resource validation. It would also provide a single point of entry to examine for further anti-phishing efforts, which could extend far beyond the URL validation we’re limited to now.

Granted, this won’t address many other forms of social engineering. It’s very easy to send an email telling someone their account is limited and direct them to some insecure site, but the idea is to condition and familiarize the user with one common set of authentication visuals, so that they will question the legitimacy of any alternative visual elements that appear. At present, the visual elements of a legitimate authentication page and a malicious one are identical. This approach sets out to stop that.

Not only would such a scheme greatly diminish the overall effectiveness of phishing attacks, it would simultaneously help get rid of all the awful custom code from organizations doing authentication completely wrong. We see this every day; authentication has become a hodgepodge of developer ineptitude. Placing this responsibility on the browser’s code, rather than the website’s, would provide what could hopefully become an accepted standard (should a working group take up the subject); at worst, we’d have a few web browsers “doing it wrong” and needing to be fixed, rather than thousands of websites all needing to be fixed.

As long as people can be tricked, there will always be phishing (or social engineering) on some level or another, but there’s a lot more that we can do with technology to reduce the effectiveness of phishing, and the number of people falling victim to common theft.

Confide: A Quick 20 Minute Look

My inbox has been lighting up with questions about Confide, after it was allegedly found to have been used by staffers at the White House. I wish I had all of the free time that reporters think I have (I’d be so happy, living life as a broke beach bum). I did, however, spend a little bit of time reverse engineering the binary and doing a simple forensic examination of it. Here’s my “literature in a rush” version…

  1. A forensic analysis didn’t yield any messages in the application’s backup files; this is the same information that comes off in a forensic image from the vast majority of forensics tools. I did get basic configuration information, such as the number of times the app had been launched, last use, some unique identifiers, and so on. If someone were to get hold of the device and use normal forensic acquisition techniques, messages don’t appear to be stored anywhere they would normally come off the phone.
  2. What does get stored (and this is obvious through the application’s GUI) are undelivered messages you sent to any of your contacts. This is part of Confide’s retraction feature, and if anyone gets UI access to your device (e.g. compels you for a passcode, or looks over your shoulder), they can read any undelivered messages: the content, who they were sent to, and the time they were sent. Even if you don’t pay for the retraction feature, Confide conveniently leaves the messages there so that you can see its advertising, in hopes that you will one day pay for this feature.
  3. The encryption itself appears to be a fusion of public key cryptography with some symmetric encryption components. I can’t really describe it completely, because all of the encryption appears to be home brew; the encryption and decryption routines, random key generation, and so on all appear to be custom coded as part of its internal KFCoreCrypto classes. Home grown encryption is nearly, but not quite, almost entirely nothing like tea.
  4. The encryption appears to try to operate like most other e2e apps, where users have a public key, and that public key is used to encrypt messages that are later decrypted with a private key. What seems different about this encryption (other than being home brew) is that it appears to regenerate the public key under certain circumstances. It’s unclear why, but unlike Signal and WhatsApp, which treat a changed public key as something to alert you about, Confide appears to consider this part of its normal function. Key exchange is always the most difficult part of good encryption routines, and if the application is desensitized to changing public keys, it’s possible (although not confirmed) that the application could be susceptible to the same types of man-in-the-middle attacks that we’ve seen theorized in WhatsApp (if you leave the alerts off) and iMessage.
  5. Because it uses home grown encryption, and because I am not a specialized encryption expert, I cannot vouch for the sanity of this encryption, except to say that I don’t think home grown encryption is ever a good thing, especially when talking about a closed source application that isn’t subject to peer review by top cryptographers. I would be glad to provide a top cryptographer (such as Matt Green) any information needed to evaluate the sanity of the encryption. I have some reservations about the random number generator and other code that appears to incorporate some semblance of normal cryptography, but is also mixed in with a lot of weird arithmetic shift operations and other dog food.

In short, the application doesn’t smell fully kosher, and warrants further review before I could endorse its use in the White House. If I were the White House’s CIO, I would – other than hate my life – not endorse any third-party mobile application that didn’t rely on FIPS 140-2 accepted cryptographic routines, such as Apple’s Common Crypto.

I sure hope they’re not using apps like this (or even the good e2e apps) to send classified material over. Has anyone asked them?

Protecting Your Data at a Border Crossing

With the current US administration pondering the possibility of forcing foreign travelers to give up their social media passwords at the border, a lot of recent and justifiable concern has been raised about data privacy. The first mistake you could make is presuming that such a policy won’t affect US citizens. For decades, JTTFs (Joint Terrorism Task Forces) have engaged in intelligence sharing around the world, allowing foreign governments to spy on you on behalf of your home country, passing that information along through various databases. What few protections citizens have in their home countries end at the border, and when an ally spies on you, that data is usually fair game to share with your home country. Think of it as a backdoor built into your constitutional rights. To underscore the significance of this, consider that the president signed an executive order just today stepping up efforts at fighting international crime, which will likely result in the strengthening of resources to JTTFs to expand this practice of “spying on my brother’s brother for him”.

Once policies that require surrendering passwords (I’ll call them password policies from now on) are adopted, the obvious intelligence benefit will no doubt inspire other countries to establish reciprocity in order to receive better intelligence about their own citizens traveling abroad. It’s likely that the US will inspire many countries, including many oppressive nations, to institute password policies at the border. This ultimately can be used to bypass search and seizure laws by opening up your data to forensic collection. In other words, you don’t need Microsoft to service a warrant, nor will the soil your data sits on matter, because it will be a border agent connecting directly to your account with special software.

I am not a lawyer, and I can’t provide you with legal advice about your rights, or what you can do at a border crossing to protect yourself legally, but I can explain the technical implications of this, as well as provide some steps you can take to protect your data regardless of what country you’re entering.

The implications of a password policy are quite severe. Forensics software is designed to collect, index, organize, and make searchable every artifact it can from an information source. Oftentimes, weak design can allow these tools to recover even deleted data, as was evidenced recently by Elcomsoft’s tool to recover deleted Safari history. Once in an intelligence database, this can be correlated with other data, including your interests, shopping habits, and other big data bought from retailers. All of this can be fed into even basic machine learning to spit out a confidence score that you are a terrorist based on some N-dimensional computation, or to plot you on a K-nearest-neighbor chart to see how closely you sit to others under suspicion. The possibilities really are endless.

You might think that you can simply change your passwords after a border encounter, but what you may not realize is that a forensics tool is capable of imaging potentially your entire life from a single access to your account. Whether it’s old iPhone backups sitting in iCloud that date back years, or your entire Facebook private message history, once an API is wired into a forensics tool, that one moment in time exposes all of your historical data to the border agent, and ultimately to an intelligence database.

With that said, the goal is to avoid exposing your account information at the border so that it can’t be stolen from you in the first place. The key to mastering the art of protecting your data at a border is to plan ahead for continuity of access outside the constraints of the border crossing, while positioning yourself as if you were the adversary during the encounter. There are a number of different ways to do this, ranging from social engineering to compartmentalization of data. How you choose to do it depends on your data needs while abroad.

All of these suggestions attempt to provide a technical basis to get you to “can’t”; that is, so you “can’t” expose your own data even if you were compelled to. In my experience, “can’t” will often get you better mileage than “won’t”, however depending on the country you’re entering, it’s possible that “can’t” could also get you jailed. It’s your responsibility to decide what information you need to be able to expose if compelled or threatened; this, you can keep at the front of your memory, like passwords. Getting to “can’t”, however, is much harder than getting to “won’t”, and since you probably already know how to do the latter, I’ll focus on the art of “can’t”.

Encryption

Obviously, you want all of your devices encrypted and powered off at the border. There are plenty of ways to access content on devices (even locked ones) if the encryption is already unlocked in memory. This is kind of a given, but I felt the need to mention it anyway. Encryption only gets you to “won’t”, of course, which is why it’s not a significant part of this post. Encryption alone won’t get you to “can’t”, but it is a good starting point.

Compartmentalization

Throughout this post, be thinking about the different layers of your data. Your most personal, crucial data is the data you don’t want to bring with you: your inner-ring data. Around it are other layers, outer rings of data that you consider sacrificial to certain degrees. Learning how to divide this data up before copying it to your devices will help minimize the exposure of your content in the event all of your devices are compromised. Compartmentalizing your data into different layers is designed to help you organize what information you won’t be bringing with you, and what you will be protecting with the various techniques discussed here or otherwise.

Burner Devices

The first, and most common piece of travel advice an information security expert will give is to use burner devices when possible. This is because the best way to avoid having your data stolen is to simply not have the data with you. In our threat model here, that also means that you cannot have any means to access the data remotely. For this reason, a burner device will get you only so far, but can still be an important ingredient.

Any data that you do not need to have with you on your trip should be backed up at home, including accounts and passwords that you won’t need to connect to while abroad. Ideally, use multiple drives and keep copies of the data at multiple sites, encrypted. If your house burns down (or is ransacked) while you’re away, you really want to have an off-site backup somewhere.

Properly wiped burner devices containing minimal data will reduce your exposure; one of the benefits of using a burner is that you’ve got a device that’s never been exposed to your most important data (let’s call this your “inner ring”), but only your outer rings of data. You’ll also want to keep the burner devices isolated from accounts that could sync old data back onto them, such as old call history databases from an iCloud backup. It’s not just the data you’re putting on now that matters, but having a clean system with no forensic trace on it.

Typically, people use burner devices to secure their exit from a hostile country. The rationale is that your device may have been compromised at some point during your trip, resulting in malware or even an implant being installed on the device to provide persistent surveillance capabilities. So not only does a burner device help in providing a clean room to carefully place outer-ring data, but it is more useful when exiting, to ensure you don’t bring any bugs back with you. If you can discard it before getting to the border, then you won’t even need to give it a second thought.

Budget constraints may not make this possible, but keep in mind that your laptop could be seized at any time and kept for months, even by the US government. If you are overly concerned about your device being searched at the border and can’t “burn” it, mail it to yourself at some discreet name and location, and send it overnight. Of course, that has risks too. There are some great physical anti-tamper primers out there that can help ensure security while in transit.

2-Factor Authentication

You will no doubt have some online accounts that you’ll need access to while abroad; if you can’t live without your Twitter or Facebook account, or access to your source code repositories, etc., the next important step is to activate 2FA for these accounts. 2FA requires that you not only have a password, but also a one-time-use code that is either sent to or generated by your device.

2FA in itself isn’t a solution, as many forensics tools can prompt the examiner for a 2FA token, and you can potentially be compelled to provide a token at the border. This is where a bit of ingenuity comes into play, which we’ll discuss next. The takeaway from this section, however, is not to bring any accounts across the border that don’t have 2FA enabled. If you are compelled to give up any password, you’re giving away access to the account.

Any accounts that you cannot protect with 2FA are best left to burner accounts with only outer-ring data, but bear in mind that simply deactivating an account doesn’t protect you. With the same password, a border agent can easily re-activate a dead account, and should they obtain knowledge of the account through forensic techniques, you may still risk exposure.

Locking Down 2FA

There are a few different forms of 2FA, but all generally provide you with backup codes when you activate them. Store these backup codes either at home (if you’re coming back into your home country), or keep them in a safe place, in electronic form, where you know you can get to them securely from the other side of the border. If you must use snail mail, encipher them using one of the many ciphers that can still be worked by hand. Other options include steganography, secure comms with an affiliate, or a hardware token.
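As one example, the classic Vigenère cipher can be worked by hand with nothing but the key memorized. It’s far too weak to protect anything long-term, but for a short list of backup codes in the mail it illustrates the idea; note that the classic cipher transforms letters only, so digits pass through unchanged unless you add a mapping of your own:

```python
import string

A = string.ascii_uppercase

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out, k = [], 0
    for ch in text.upper():
        if ch in A:
            shift = A.index(key[k % len(key)].upper())
            out.append(A[(A.index(ch) + (-shift if decrypt else shift)) % 26])
            k += 1
        else:
            out.append(ch)  # non-letters pass through untouched
    return "".join(out)

enciphered = vigenere("RESET CODE ALPHA", "TRAVELKEY")
assert vigenere(enciphered, "TRAVELKEY", decrypt=True) == "RESET CODE ALPHA"
```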

To lock down 2FA at a border crossing, you’ll need to disable your own capability to access the resources you’ll be asked to surrender. For example, if your 2FA sends you an SMS message when you log in, either discard or mail yourself the SIM for that number, and bring a prepaid SIM with you through the border crossing; one with a different number. If you are forced to provide your password, you can do so, however you can’t produce the 2FA token required in order to log in. Purchasing a prepaid SIM in a foreign country is a fairly common behavior.

If you use an authenticator application, such as Google Authenticator or 1Password, delete the application from your devices. Worst case scenario, the border agent can force you to re-download the applications, but you won’t be able to re-provision them without the backup codes you have waiting for you on the other side. There is a social element here too, of course: “Oh, I can only access my account from my home computer. I’m sorry, I don’t have it installed on this phone. I guess I’m locked out too!”

Locking Down Email

Once 2FA fails, preventing you from accessing your own accounts, a border agent may attempt to access your email to reset them. Ensure that your devices have all been signed out of your email, and that no passwords are stored on the device. Ideally, use a completely different email account to provision your accounts, one that is not normally synced with your devices. This is sound security advice for protection from everyday phishing, too.

You can go through the same dance to lock yourself out of that email account as well, of course, making those backup codes only available to yourself on the other side of the border. The 2FA for that email account can forward to some dead account that you’ve since closed, or take this as far as you want to go with it. Chances are a border agent is only willing to go so far down the rabbit hole before giving up, but YMMV.

Data Redundancy and the Cloud

While you may wipe your devices of personal data, a traveler often needs, at minimum, access to their basic contacts and calendar. This information can be synced in iCloud: before arriving at the border, wiping your device will remove all of your personal information, including iCloud data, from the phone. Once you’ve arrived at your destination, using your 2FA backup code to re-sync your iCloud content will give you back the minimum working data you need to be functional again.

Your iCloud information is, of course, subject to warrants, however border crossings often go by much looser rules. The probability of obtaining a warrant is generally going to be low at a border crossing, unless you’ve got reason to believe otherwise, but there are also rules involving what soil your data sits on (rules that have been pushed on recently, mind you, in this country). Keeping your data in any online system will no doubt expose it to a warrant, but that’s not what we’re trying to protect ourselves from here.

Pair Locking

I’ve written about Pair Locking extensively in the past. It’s an MDM feature that Apple provides which allows you to provision a device in such a way that it cannot be synced with iTunes. It’s intended for large business enterprises, but because forensics software uses the same interfaces that iTunes does, it also effectively breaks every mainstream forensics acquisition tool on the market. While a border agent may gain access to your handset’s GUI, this will prevent them from dumping all of the data – including deleted content – from it.

Without pair locking, giving UI access to a border agent allows them to image much of the raw data on the device, which ultimately can give them a six month or even a twelve month picture of your activity, rather than just what’s available from the screen.

Now, backup encryption is a great mechanism, and this too will break forensics tools, but you can also be compelled out of a backup password. If you are, all of the social media account passwords and other information can be extracted from your device. This is why I recommend pair locking on top of backup encryption: It completely prevents any such tools from connecting to the phone, even if your device UI and backup password have been compromised.

Of course, this means that you also can’t carry the pairing records around with you on the laptop you’re crossing the border with. These pair records, found in /var/db/lockdown on a Mac, need to go in with the backup codes and other files you have prepared for yourself in advance.
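Stashing those records is simple enough, since they’re just files. A tiny sketch (on recent versions of macOS, /var/db/lockdown is root-owned, so this needs to run with sudo; deleting the originals off the traveling laptop is a separate, deliberate step):

```python
import shutil

# Archive the host's pairing records so they can travel with your backup codes
shutil.make_archive("lockdown-pairing", "zip", "/var/db/lockdown")
```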

Fingerprint Readers

If any of your devices include fingerprint readers, it’s best to disable them and delete your prints before going through a checkpoint, for obvious reasons. Of course, this really plays into the position of “won’t” versus “can’t” if you can still be compelled to give up your device passcode. Nonetheless, it raises the bar considerably, even against warrants, which can compel a fingerprint in the US, but in most cases cannot compel a passcode.

Misdirection vs. Lying

I never recommend lying to a border agent, no matter what country you’re in. Misdirection, however, is a far better way to secure your data than lying. If, by happenstance, you’ve set up your security so that you cannot access what they need yourself, that is, in my opinion, far better than simply telling someone you don’t have a social media account. Everything you say and do may end up in a file on you for the next time you pass through the border, and if you’re found to be lying, you’ll be denied entry.

Get your method down before you leave home. “My Twitter account only works from my home computer” is an honest and accurate response, and much better than getting caught in a lie later on about not having a social media account. Remember, many countries have access to open source social intelligence and already know the answers to some of the questions they ask you.

Use Your Brain

Depending on what country you’re trying to protect yourself in, it’s most important to use your brain and know what the country’s laws are. It’s easy to poke your chest out and refuse to give up any information, but that’s not always the path of least resistance. Temporarily divesting yourself of the ability to access your own data may give you better results on a security level, but remember that the beaten-with-a-wrench policy typically overrides a lot of your politics. If you’re going to be serious about protecting your data, then you also need to consider and weigh the consequences of doing so.

DISCLAIMER: You accept all of the liability yourself in taking any of this advice.

Slides: Crafting macOS Root Kits

Here are the slides from my talk at Dartmouth College this week; this was a basic introduction / overview of the macOS kernel and how root kits often have fun with it. There’s not much new here, but the deck might be a good introduction for anyone looking to get into developing security tools or conducting security research in macOS. Note: root kits aren’t exploits; there’s no exploit code in this deck. Sorry!

Crafting macOS Root Kits

Resolving Kernel Symbols in a Post-ASLR macOS World

There are some 21,000 symbols in the macOS kernel, but all but around 3,500 are opaque even to kernel developers. The reasoning behind this was likely twofold: first, Apple is continually making changes and improvements in the kernel, and they probably don’t want kernel developers mucking around with unstable portions of the code. Second, kernel dev used to be the wild wild west, especially before you needed a special code signing cert to load a kext, and there were a lot of bad devs who wrote awful code, making macOS completely unstable. Customers running such software probably blamed Apple for it instead of the developer. Apple now has tighter control over who can write kernel code, but that doesn’t mean developers have gotten any better at it. Looking at some commercial products out there, there’s unsurprisingly still terrible code doing things in the kernel that should never be done.

So most of the kernel is opaque to kernel developers for good reason, and this has reduced the amount of rope they have to hang themselves with. For some doing really advanced work though (especially in security), the kernel can sometimes feel like a Fisher Price steering wheel because of this, and so many have found ways around privatized functions by resolving these symbols and using them anyway. After all, if you’re going to combat root kits, you have to act like a root kit in many ways, and if you’re going to combat ransomware, you have to dig your claws into many of the routines that ransomware would use – some of which are privatized.

Today, there are many awful implementations of both malware and anti-malware code out there that resolve these private kernel symbols. Many of them do idiotic things like opening and reading the kernel from a file, scanning memory looking for magic headers, and other very non-portable techniques that risk destabilizing macOS even more. So I thought I’d take a look at one of the good examples that particularly stood out to me. Some years back, Nemo and Snare wrote some good in-memory symbol resolving code that walked LC_SYMTAB without having to read the kernel from disk, scan memory, or do any other disgusting things, and it did so in a portable way that worked on whatever new version of macOS came out.

The __LINKEDIT segment and LC_SYMTAB weren’t loaded into kernel memory until around Snow Leopard, and so prior to that a number of root kits had no choice but to read the symbol table off disk by opening up /mach_kernel, which of course has also since been moved around. Today’s versions of macOS make it much easier for a developer to skirt around the privatized kernel symbols, and this is a positive thing, because developers don’t have to be so dangerous with their resolving code.

Nemo and Snare’s code has gotten a bit old and stale, so I thought I’d freshen it up under the hood. Two things in particular needed work to get the engine to turn over: some pointer offsets in LC_SYMTAB weren’t being used correctly, which broke on any recent version of macOS, and the code didn’t handle kernel ASLR, which made it unusable. I fixed the symbol table pointers so that we’re reading the right parts of LC_SYMTAB now, and I’ve also come up with a novel way to deduce the kernel base address, using some maths and a function that Apple has exposed to the public KPI to unslide memory, which subtracts vm_kernel_slide out for you.

The function vm_kernel_unslide_or_perm_external was originally added to expose an address to userspace from the kernel or heap. Exposing kernel address space to userspace seems like a really awful idea, but the function can be used for just that: if you feed it the usual kernel load address (0xffffff8000200000), it will subtract vm_kernel_slide for you (which isn’t exposed to the KPI); the difference between what you fed in and what comes back is the slide, and adding that back to the load address gives you the base kernel address in memory. Really quite simple and elegant; no ugly hacks required. You don’t have to back-read memory looking for 0xfeedface or anything else. Apple’s code is pretty intentional, so this isn’t a hack either; they’ve provided you with a way to unslide kernel ASLR from within the kernel, which is a lot safer than some of the ways devs were doing it before.
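To put hypothetical numbers to the maths: if feeding vm_kernel_unslide_or_perm_external the static load address 0xffffff8000200000 returns 0xffffff7ff5200000, then the slide is 0xffffff8000200000 - 0xffffff7ff5200000 = 0xb000000, and the running kernel base is 0xffffff8000200000 + 0xb000000 = 0xffffff800b200000.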

In addition to these fixes, I’ve also added a simple usage example to demonstrate how to call a function once you’ve actually found the symbol. There are a few different calling conventions possible; I used a less old-school, more implicit technique to invoke proc_task and obtain the task for launchd in this example.

Click the link below to read the full source of the new and improved version of Snare’s kernel resolver. Special thanks to Snare for making his original code available.

https://www.zdziarski.com/KernelResolver.c

Open Letter to the Law Enforcement Community

To my friends in law enforcement, and many whom I don’t know serving our country:

First, thank you. You do an incredibly difficult job that often goes unseen, and you put your life at risk to make this great country safer. For that, I am deeply grateful.

Many of you have suddenly found yourselves on the wrong side of history. Our country has what, by many appearances, seems to be an illegitimate president who may be the product of the Russian intelligence community, and possibly also of the head of the FBI, both of whom played a key role in manipulating or defrauding our election system. Within one week of taking office, Trump has shown himself a madman who uses racism and personal prejudice to fill in the gaps that his incompetence affords. Within just one week, our country has been transformed from what many had considered a free country struggling to overcome its differences into a place of fear, through the threatening of human rights and the enabling of racists deeply rooted in our own country, igniting hostility against anyone who differs from the majority in skin tone, religion, or sexual orientation.

With the stroke of a pen, livelihoods and families have been discarded, as many who have lived legally in our country for years now risk being violently deported, or banned from re-entering the country they call home – not for committing a crime, but merely for existing, now counted among the enemy. Science and technology are also being harmed by this racist practice; many of those affected are scientists, engineers, or other productive human beings working for large technology innovators or defense contractors within this country. All of them went through several layers of vetting far beyond what the president has even submitted to, just to be in this country and hold the jobs they have. We’re in very troubling times – times that frighten everyone, except those in power.

The key to abusing power, as has been done throughout history (and as we are no doubt witnessing in this country), is the ability to control the chain of command from a high position. I’ve worked with many different individuals from various agencies, and I know most of you to be good people at the core who got into law enforcement to make a difference in the world; to make our country safer. I also know that there’s a chain of command, and that it is held in high regard. When that chain of command is corrupted from the top down, as has already begun happening, there will come a time when you will have to choose between the brotherhood that holds your agency together and the brotherhood that is your fellow man (and woman).

Our country has a long history of holding the line, whether it’s the thin blue line, the protest line, or other alliances within the federal government. Over the next four years, you will very likely be forced to choose between doing your job and doing what’s moral and humane. If you allow one small compromise, eventually you’ll make more, and like a frog boiling in a pot, you’ll find you’ve lost everything you believed in when you took this job.

Quitting your job isn’t going to fix anything, and that’s not what I’m asking you to do; any government will always find somebody to take your place. What our country needs is for men and women like you – those who still believe in our constitution, in freedom, and in human rights – to continue filling these positions, but to demonstrate firm and public acts of disobedience when given orders that violate our constitution and your conscience, and to hold your superiors accountable for unconstitutional orders that violate human rights. When you’re ordered by your chain of command to violate the freedoms that your relatives died to protect, no matter how small a compromise it may feel like at the time, it is not only your decision but your duty to refuse such orders.

I understand the camaraderie and the sense of family you have in the many different areas of law enforcement. When I first started in forensics, many of you took me in like a brother, had me to your homes, introduced me to your wonderful families, and told me your stories. I understand that you would literally take a bullet for other agents or officers in the field. But there are other ways to protect your brothers too – namely, by saving them from the corruption that comes from being tainted by those among you who embrace racism, prejudice, and hate. The only way to do that is to hold the bad ones accountable, whether they’re your partner or a superior. You have an internal affairs department for a reason: to protect your agency from corruption and from destroying the lives of both your brothers and their families. If you fail to use it when necessary, you run the risk of something far worse than betraying a brother… you run the risk of allowing many more of them to become corrupt, eventually destroying the fabric of your agency and the values that led you to take this job, and to put your life in danger daily, in the first place.

What I’m asking you to do is simple: stand up for our constitution, and when you’re asked to do something that violates human rights, refuse and publicly disclose. Refuse to carry out the orders you’re given, and immediately go to the press, or at least to your internal affairs department, to report what you’ve been ordered to do. As the White House continues to black out the media in order to establish its own state-run propaganda, public disclosure will become more and more vital to the world learning of such atrocities. This may cost you your job, but it may also save your soul. It’s cost many others that came before you far more. I hope you will find it your duty and your privilege to stand among those who fought to defend our country, rather than go quietly along with the new marching orders that will no doubt trickle down into your agency over the next four years.

We are in uncertain times, and it’s hard to tell just where the landslide begins, or if it exists at all. I see the writing on the wall: we’re in store for some really bad human rights violations. I urge you to choose not to play a role in them, no matter how small. The slippery slope argument is one that’s been used in the legal world for a very long time, and it’s no cliche; the small decisions you make now will ultimately affect the much bigger decisions you may have to make down the road. I hope you’ll be on the right side of history when whatever plays out finally does.

Technical Analysis: Meitu is Crapware, but not Malicious

Last week, I live-tweeted some reverse engineering of the Meitu iOS app, after it got a lot of attention on Android for some awful things, like scraping the IMEI of the phone. To summarize my own findings: the iOS version of Meitu is, in my opinion, one of thousands of types of crapware that you’ll find on any mobile platform, but it does not appear to be malicious. In this context, I looked for exfiltration or destruction of personal data as key indicators of malicious behavior, along with any kind of unauthorized code execution on the device or other nefarious tasks… but Meitu does not appear to go beyond basic advertiser tracking. The application comes with several ad trackers and data mining packages compiled into it, which appear to be primarily responsible for the app’s suspicious behavior. While it’s unusually overloaded with tracking software, it also doesn’t seem to be performing any kind of exfiltration of personal data, with some possible exceptions around location tracking. One reason the iOS app is likely less disgusting than the Android app is that it can’t get away with most of that kind of behavior on the iOS platform.

Over the life span of iOS, Apple has steadily hardened privacy controls, and much of what Meitu wishes it could do just isn’t possible from within the application sandbox. The IMEI has been protected since very early on, so it can’t be extracted from within the sandbox. Unique identifiers such as the UDID have been phased out for some years, and some of the older techniques that Meitu’s trackers do try to perform (such as using the WiFi or Bluetooth hardware address) have also been hardened in recent years, so they’re no longer possible.

Tracking Features

Some of the code I’ve examined within Meitu’s trackers includes the following. This does not mean these features are turned on; many features appear to be managed by a configuration that can be loaded remotely. In other words, the features may or may not be active at any given time, and it’s up to the user to trust Meitu.

  1. Checking for a jailbreak. This code exists in several trackers, and so there are a handful of lousy, ineffective ways that Meitu checks to see if your device is jailbroken, such as checking for the presence of Cydia, /etc/apt, and so on. What I didn’t find, however, was any attempt to exploit the device if it was found to be jailbroken. There didn’t appear to be any attempts to spawn new processes, invoke bash, or exfiltrate any additional data that it would likely have access to on a jailbroken device. Apple’s App Review team would have likely noticed this behavior if it existed, also. Apple 1, Meitu 0.
  2. Attempts to extract the MAC address of the NIC (e.g. WiFi). A few different trackers included routines to extract the MAC address of the device. One likely newer tracker realized that this was futile and just returned a bogus address. Another performed the sysctl calls to attempt to obtain it; however, the sandbox would similarly return a bogus address. Apple 2, Meitu 0.
  3. Meitu uses a tool called JSPatch, which is a very sketchy way of downloading and executing encrypted JavaScript from the server. This is basically an attempt to skirt iOS’ ban on unsigned code execution by downloading, decrypting, then eval’ing… but it isn’t quite evil enough that Apple thinks it’s necessary to forbid it. Nonetheless, it does extend the functionality of the application beyond what is likely visible in an App Store review, and by using Meitu you may be allowing some of its behavior to be changed without an additional review. No points awarded to either side here.
  4. The various trackers collect a number of different bits of information about your hardware and OS version and upload that to tracker servers. This uses your network bandwidth and battery, so using Meitu on a regular basis could consume more resources. There wasn’t any evidence that this collection is done when the application is in the background, however. If the application begins to use a lot of battery, it should gravitate towards the top of the battery usage application list. Apple 3, Meitu 0.
  5. Code did exist to track the user’s location directly; however, it did not appear to be active when I used the app, as I was never prompted to allow access. If it does become active, iOS will prompt the user for permission. Apple 4, Meitu 0.
  6. Code also existed to indirectly track the user’s location by extracting location information from the EXIF data of images in the user’s photo album. Any time you take a photo, the GPS position where it was taken is written into the picture’s metadata, and other applications have access to read that (if they’re granted access to the photo album or camera). This can be aggregated to determine your home address and potentially your identity, especially if it’s correlated with existing data at a data mining company. It’s a very clever way to snoop on the user’s location without them ever being prompted (a sketch of the technique follows this list). It was not clear whether this feature was active, but the hooks did exist to send this data through at least some of the trackers compiled into Meitu, which appeared to include the MLAnalytics and Google AdMob trackers. Apple 4, Meitu 1.
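To illustrate how little effort the EXIF technique takes, here’s a sketch using the Pillow imaging library; the tag ID 34853 is the standard EXIF GPSInfo tag, the photo path is hypothetical, and any app granted photo access can do the equivalent:

```python
from PIL import Image  # the Pillow imaging library

GPSINFO_TAG = 34853  # standard EXIF tag ID for the GPSInfo block

def gps_from_photo(path: str):
    # Returns the latitude/longitude refs and values the camera stored, if any
    exif = Image.open(path)._getexif() or {}
    return exif.get(GPSINFO_TAG)

print(gps_from_photo("IMG_0001.JPG"))
```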

Other Observations

  1. Code existed to use dlopen to link directly against a number of frameworks, a technique that can be used by developers to invoke undocumented methods that are normally not allowed by the App Store SDK. Chronic reported this in his own assessment, but indicated that it was never called. I have since discussed some of my findings with him – namely, suspicious long jumps in the code involving pointer arithmetic that suggest the calls may have been obfuscated. It is very likely, however, that these calls no longer work in recent versions of iOS due to ASLR. The entire issue is moot anyway, as I’ve been informed that weak linking in this fashion is now permitted in the App Store, so long as the developer isn’t using it as a means to call unsupported methods. I did not see evidence of that happening.
  2. Meitu does obtain your cellular provider’s name, using an authorized framework on the device, and observes when that carrier changes (possibly to determine whether you’re switching between WiFi and LTE). This appears to be permitted by Apple and does not leak information beyond what’s stated here.
  3. Code was found that makes private framework calls, but as Chronic pointed out, they no longer work. This was likely also old code lying around from earlier versions of the app or trackers.

A number of these trackers were likely written at different times in the iOS life cycle, and so while some trackers may attempt to perform certain privacy-invading functions, many of these would fail against recent versions of iOS. A number of broken functions that are no longer used likely did work at one point, until Apple hardened the OS against them.

Summary

Meitu, in my opinion, is the quintessential data mining app. Apps like this often provide menial functionality – as fart and flashlight apps do – in order to get a broad audience to use them and add another data point into a series of marketing databases somewhere. While Meitu denies making any money off of these trackers, there’s very little other reason I can see to justify so many of them being built into one application – but that is a judgment call for the user to make.

Because of all of the tracking packages baked in, Meitu is a huge app. I cannot vouch for its safety. There may very well be something malicious that I haven’t found, or perhaps something malicious delivered later through their JSPatch system. It’s a big app, and I’m not about to give them a free complete static binary analysis.

At the end of the day, using Meitu isn’t likely to adversely affect your system or steal your data; however, it’s important to understand that there is a fair bit of information here that could be used to track you, as if you were cattle, in some marketing / data mining system used by advertisers. Your adversary here isn’t China; it’s more likely the department store down the street (or perhaps a department store in China). Feel free to insert your favorite government conspiracy theory here – it could possibly be true, but they have better ways to track you. If you don’t mind being tracked in exchange for giving yourself bug eyes and deleting your facial features, then Meitu might be the right app for you.

Configuring the Touch Bar for System Lockdown

The new Touch Bar is often dismissed as a gimmick, but one powerful capability it has is to function as a lockdown mechanism for your machine in the event of a physical breach. By changing a few power management settings and customizing the Touch Bar, you can add a button that will instantly lock the machine’s screen and then begin a configurable countdown (e.g., five minutes) to lock down the entire system: removing power to the RAM, discarding your FileVault keys to effectively lock the encryption, protecting you from cold boot attacks, and preventing the system from being unlocked with a fingerprint.

One of the reasons you may want to do this is to allow the system to remain live while you step away, answer the door, or run to the bathroom, but have it lock things down in the event that you don’t come back within a few minutes. It can be ideal for the office, hotels, or anywhere you feel your system may become physically compromised. This technique offers the convenience of being able to unlock the system with your fingerprint if you come back quickly, but the safety of having the system secure itself if you don’t.

To configure this, we’ll first add a sleep button to the Touch Bar, then look to command-line power management settings to customize its behavior.

Adding a sleep button to the Touch Bar is pretty straightforward. Launch System Preferences, then click on Keyboard. At the bottom of the window is a button labeled Customize Control Strip.

To add a sleep button to the Touch Bar, choose which of the four existing buttons you can live without. Most people choose the Siri button, because Siri is also accessible from both the Dock and the menu bar. Drag the icon labeled Sleep from the window onto the Siri button on the Touch Bar, and the button will turn into a sleep button. If you would also like a screen lock that does not perform any lockdown function while on AC power, you can drag the Screen Lock button onto the Touch Bar as well, and use that when you don’t want a lockdown (note that it may still lock down on battery, since the system will eventually sleep whenever it’s running on battery power). Once you’re finished customizing the Touch Bar, click Done.

OK! So we’ve got a sleep button on the Touch Bar – this is our future lockdown button; it can be triggered a lot faster than holding in the power button, and even better, will be able to lock down the system without losing all your work.

By default, however, putting the machine to sleep on its own doesn’t really lock anything down, and you can still unlock it with your fingerprint when it wakes, so next we’re going to need to change the system’s sleep behavior. There are a number of hidden knobs that can be set on the command-line to change how power management behaves on sleep.

We need to set a few different options. First, we need the system to go from sleep mode into what’s called hibernate mode after a preset period of time. In our example, we’ll use 300 seconds (five minutes). Hibernate mode is a deep sleep, where the system commits its memory contents to disk and shuts down the processor. Until the system is in hibernate, you’ll be able to unlock the device with your fingerprint – and by default, the system never transitions out of regular sleep, which is what we don’t want. From a terminal window, run the following commands to adjust the various sleep and hibernate timers:

sudo pmset -a autopoweroffdelay 300
sudo pmset -a standbydelay 300
sudo pmset -a standby 1
sudo pmset -a networkoversleep 0
sudo pmset -a sleep 0

Next, there is a parameter named hibernatemode that alters the behavior of hibernate in a wonderful way. When set to the value 25, this parameter will cause macOS to remove power to the RAM, which thwarts future cold boot attacks against the system (a few minutes after the power is removed, at least).

sudo pmset -a hibernatemode 25

Lastly, a hidden setting named DestroyFVKeyOnStandby can be set that will cause the system to destroy the FileVault keys in memory (and any stored copy) when entering standby, effectively locking the system’s encryption.

sudo pmset -a DestroyFVKeyOnStandby 1
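
You can verify that all of these settings took effect by dumping the active power management configuration; the values you set above should be reflected in the output:

pmset -g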

With all of these put into place, you can now put your system on a timed lockdown. Here’s how it works:

  1. The user presses the sleep button on the Touch Bar
  2. The screen immediately locks, the system goes to sleep, and a five-minute timer starts
  3. If the user unlocks the machine within the five-minute period, all services are restored and they can use their fingerprint to authenticate
  4. Once the timer expires, the system transitions from sleep mode to hibernate mode
  5. Upon entering hibernate mode, power is removed from the RAM and the FileVault keys are destroyed in memory
  6. When the user wakes the machine, they will be prompted for their password in order to unlock FileVault
  7. Once the user has authenticated with a password, they will be prompted a second time to authenticate with their fingerprint (or password); this is the restored state from when the system was first locked

This type of setup works well in the workplace, where you may walk away from your machine often, or in public or any other venue where you may temporarily leave your system for a short period but are concerned about physical security. If you are a political dissident, or someone else who may be targeted, this setup provides a convenient way to keep the fingerprint reader useful while still locking the system down if an unexpected event occurs and your devices are physically compromised.

You can restore all power management defaults in System Preferences if you decide to back out of this configuration, and of course depending on your level of paranoia, you may wish to adjust the hibernate timer to one minute or ten, to your liking.
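
If memory serves, newer builds of pmset also include a restoredefaults argument that resets everything from the command line (check your pmset manual page to confirm it’s available on your system before relying on it):

sudo pmset restoredefaults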

On Christianity

I’ve often been asked why an intellectual type guy such as myself would believe in God – a figure most Americans equate to a good bedtime story, or a religious symbol for people who need that sort of thing. Quite the contrary, what I’ve discovered in my years of being a Christian is that it is highly intellectually stimulating to strive to understand God, and that my faith has been a thought-provoking and captivating journey.  I wasn’t raised in a Christian home, nor did I have any real preconceived notions about concepts such as church or the Bible. Like most, I didn’t really understand Christianity with anything other than an outside perception for the first part of my life – all I had surmised was that he was a religious symbol for religious people.

Today’s perception of Christianity is that of a hate-filled, bigoted group of racists, a title that many self-proclaimed Christians have rightfully earned for themselves. This doesn’t represent Christianity any more than the other stereotypes do, and most people know enough about the Bible to know that such a position is hypocritical. Since 1993, I’ve been walking in the conviction that the God of the Bible is more than just a story, that he’s nothing like the stereotypes, and that it takes looking outside of typical American culture to really get an idea of what God is about. In this country, I’ve all of the different notions of what a church should be; I think most people already know in their heart who God is, and that’s why they’re so averse to the church.

The term born again Christian is a difficult one to figure out. The question most people ask is: what in the world do they believe?

This largely depends on who you ask.

Ironically, it’s become quite difficult to get past the dogma and the cultural facade to truly understand the Christian faith in this country. It can be a life long process to try and truly understand the questions of not only what we believe, but also desire to know whom it is we believe in and why. Much of the church tends to go off-course and read too deeply into things, leaving a lot of churches representing less than what most would consider the basic tenets of Christian behavior; the ones you rarely hear about, though, are the ones who usually got it right – they’re out there doing what they’re supposed to do in loving people, trying to live right, and live their life with God in mind and heart. They’re not judging people, pushing political agendas, or the sort. The Bible teaches to know whom we believe, and in today’s world, there is an overwhelming amount of information available to accomplish this. I believe qualifying one’s own faith is critical to having real faith, and the basis of what I believe marks true believers – a strong desire to know their creator. If God really is the most important thing to us, shouldn’t we be taking every opportunity to study him?

A Little Background

I began with a simple faith, just like everyone; as a teenager, I believed solely out of conviction. Over time, my experience and faith began to build as I watched God work in my life; I’d undergone a dramatic personal change, and directly credited that to my faith. As I matured, I developed a strong and healthy curiosity about whether my beliefs were qualified from a purely intellectual (as opposed to experiential) point of view. I spent several years studying textual criticism, apologetics, ecclesiology, eschatology, apocryphal manuscripts, writings of the early church fathers, and the works of great historians such as Josephus and Pliny, among other sources. I got a bit miffed at some of the academics in the field who came off as knowing better than the rest of us, so I taught myself the Greek language and put my hands on copies of the manuscripts we base our canon on (such as the Codex Sinaiticus, Codex Vaticanus, and many smaller digital fragments), and observed firsthand what was written about God without all the messy English translation getting in the way. Unlike many students who study this in seminary and later abandon their faith, I found my studies only strengthened mine.

All of this information eventually led me to put together a solid context for understanding what it was that I believed, and how it reconciled with science. The historical evidence began to paint a context around this collection of books we call a Bible, granting me a deeper understanding of what God has said.

Ironically, I received a rather significant amount of pushback from certain other Christians about studying history – something you’d think everyone should be doing. I don’t quite understand why most evangelical Christians fail to study anything beyond the store-bought Bible they have, which, in English, is quite possibly the most poorly translated version of scripture in existence. Many argue that because the Bible is inspired directly by God, it is the only relevant text to read. That statement makes a lot of dangerous assumptions, though – namely, that God’s inspired word depends largely upon what time period and geographical location you happen to live in. Even today, there are many different scriptural canons – they can’t all be right. The Ethiopic canon, for example, includes the book of Enoch as well as many other books not present in the Bible we use, and the Catholic Bible also includes additional books (Old Testament apocrypha). Throughout history there have been councils, debates, criticisms, and crimes committed over the issue of what is God-inspired. To simply trust that everyone throughout history was directed by the hand of God in putting together a Biblical canon is naive, and leaves truth as relative. Our canon is in better condition than I believe it ever has been, but it’s still imperfect. Even our Bibles go through periodic rewrites (such as the latest NIV translation, which was redone during the relatively short time I have been a Christian).

But more importantly, without digging deeper and studying what was written in history – what was written by the early church fathers and by other sources – you can’t really establish a true context around what the Bible is actually saying, and where many events originated. A better-known example of this is Josephus’ documentation of possibly the earliest use of the terminology behind “binding” and “loosing”; he used it to describe punishing and liberating people. Many completely misappropriate that terminology, when it is used in the Bible, to imply some sort of “name it and claim it” scam. Josephus also documented historical events that help explain much of the context around many areas of the New Testament.

Reading the Bible today is much like looking through a series of foggy windows. The first window may be the translation from the original Greek, or possibly even further back, to the oral recitation by which many manuscripts were copied by ear. The next is the indoctrination of the translation, followed by the historical context. Each window further distorts what God actually said, bending the light just enough to miss important concepts. Other fogged-up panes might involve understanding the Gnostic and Separatist movements and their attempts to redefine Christianity, or apocryphal texts by early church fathers and forgeries by pseudonymous authors. The point is, the finished product in the store went through several processes to get there, and each process bleached out a little more of the meaning.

In my own journey to understand God, my beliefs have stemmed from the sum total of what I’ve learned to be inspired scripture, on a personal conviction, in the context of historical and literary knowledge – as far separated from orthodoxy and indoctrination as this knowledge can get without losing its meaning. My conclusions about the true meanings behind scripture are pretty much on par with what many modern-day theologians believe, but they are at the very least tempered with some sobriety and discernment over and above typical dogmatic church folk.

You’re probably already starting to surmise that my beliefs are not merely rays of sunshine and ginger snaps, void of intellectual reflection. While it’s true that Christianity is ultimately based on faith, I’ve found that I didn’t need to commit intellectual suicide to accept Jesus as God, and the claims about him as true. After a bit of introspection, I’ve arrived to some characteristics about Christianity that I have considered in the intellectual part of my faith.

Christianity Defined

Christianity is sometimes obscure to outsiders, and with the many different subcultures inside of Christianity, it can be difficult to get more than a looking-through-wet-glass resolution of what it really is. The focus of Christianity (and its root word) is obviously Christ. Christianity is centered on the man / God Jesus Christ and what Christians believe to be a reconciliation between man and God through His resurrection from the dead.

Christians believe that God created the world for man.

No, we can’t agree on how he did it, and the truth is many of us don’t care.

Most of us believe that God had a direct hand in the design and manifestation of life on Earth. Some take Genesis to refer to literal days. Others use a scriptural loophole to change it into a thousand years per day. Yet others believe that the Biblical account of creation was an allegory altogether, and even have reasonable claims from the Septuagint to back it up. One thing most everyone believes, at least, is that it makes perfect sense that, as the chief architect and scientist behind the universe, whatever design he used would be fantastic and ingenious. Do I believe God used science to create the world? Of course. He probably created the science the world now studies, and had a hand in everything from the laws of physics to the natural elements we now use to dismiss His own existence.

What’s more important than how God created the world is why he did it. To have a people to call his own. The New Testament makes no mistake about our being grafted into the family of God upon our conversion. The second reason we were created is to thrive. Christians believe that we were put here with purpose, and science is beginning to reveal to us that we were put in an ideal place in the universe to discover, to learn, and to thrive. The key issue surrounding our origin on this planet isn’t so much as the how, as it is the purpose with which it came to be.

Man fell into sin very early on – during a time when, it is written, God walked among his people. Sin could be defined as rebellion toward God. This left the rest of the human race subclassing a sinful lineage, and led God to hide his face from us, because he can’t be in the presence of sin. Christians believe that we (mankind) fall short of the perfection and honor God originally created us for.

Christianity teaches that we’re all deserving of death and separation from God because of our sinful nature. Christianity is based on Jesus (as God’s son) voluntarily sacrificing his life to absorb our share of the death penalty we earned from sin, freeing us from the slavery we were born into. This was qualified when Jesus rose from the dead three days after his death, a death fully verified by the brutal Roman military. Jesus was mocked, beaten beyond recognition, and then crucified. History writes of his miracles, and of accounts of Roman guards initially ordered to remain silent. The death of Christ was for the purpose of absorbing the full penalty for our sins (on our behalf) so that we didn’t have to suffer the fate of a very real hell we all deserve. All of this was prophesied, 400 years before his coming, in what were at the time Jewish manuscripts. These ended up forming much of our Old Testament today.

Jesus’ ministry set into play the notion that human beings have value simply because they’re human; that we all have intrinsic value and deserve love. In spite of the bigotry many act with today, loving your neighbor is still 50% of the entire gospel.

With that said, the looming question is: why believe any of this? There are countless religions out there, and then of course there’s the religion of not having a religion. Over my life as a Christian, I’ve made many observations about Christianity that make for compelling intellectual arguments – or at least things I myself have strongly considered.

Christianity’s Timeline is Complete

The books of the Bible are among the oldest and most reproduced documents in existence. Not only are they extremely old, but more importantly, they claim to cover history – sometimes through allegory – from the beginning of human life. Having these qualities, the scriptures are most likely to be authoritative in explaining why we ended up where we are, even if they don’t attempt to tackle the how. If God is true, then he would have been around when everything started, and that is expressed throughout the literary works of Genesis, and acknowledged in the later historical books.

It’s much more difficult to give credit to a religion established centuries later. Take Islam for example. We didn’t see it rise until seven hundred years after Christ had already come; if the Hebrew God was a false god, then certainly any religion to show up seven hundred years later is also false. If the Judeo-Christian God is real, he has bragging rights that he was here first.

NOTE: It’s interesting to note that many Muslims claim that the qur’an is a pure text, and that the Bible is the corrupted text. As a point of fact, the qur’an is widely known to have had many different manuscript variants, just as the OT and NT do, however they were believed to be later redacted into a final copy, the originals burned. This, in a post-constantine era where reproduction of manuscript had become much more reliable (in fact, block printing was starting to be introduced). In contrast, many variations of Biblical scripture still exist today, and are reconciled through a process called textual criticism, explained next.

Textually, the Scriptures are Reliable

At some point you’ve got to validate the credibility of the manuscripts themselves, and not just take it on someone’s word that they’re reliable or that they say what other people believe they say. What is and isn’t the inspired word of God has been a debate we’ve been having for hundreds of years. What we have today isn’t the word of God – it’s a critical text put together by scholars reconciling thousands of variants of manuscripts of the word of God to reflect our best assessment of what we think the real word of God was.

Reconciling hundreds of manuscripts is difficult enough; worse, English is one of the least precise languages to translate into. Greek doesn’t map onto English as cleanly as many Germanic languages do, so translators are often forced to compromise by substituting a dumbed-down word or phrase to prevent a passage from being misread. Those meanings are decided by scholars who apply various indoctrinations, translating words the way they believe they were intended based on current orthodoxy (which is, of course, ultimately based on past translations), and so you end up with a slowly degrading feedback loop over time – a garbage-in, garbage-out problem of sorts. If this doesn’t seem bad enough, many publishers, such as Zondervan, have gone to great pains to make their version of the Bible easy to read at the expense of castrating what the original manuscript intended to say.

This kind of accidental (or reckless) indoctrination happens all the time. To give one example, Dr. Peter Williams outlined slavery in a lecture: the word “slave” rarely appeared in Bibles until the 1980s, but is ubiquitous in modern Bibles of all languages. If you watch his lecture, you’ll see just how that kind of leap was made, and how intricately biblical meanings intertwine with social understanding – a dangerous way to treat scripture.

So in light of all this, the obvious question is: how reliable is scripture? Well, if you can cut through all of this by applying some critical thinking, and look at scripture as a literary source, there are very few theology-shattering differences among the more trusted manuscripts we use. That doesn’t stop many churches from believing what they want, based on tradition, rather than digging into the deep theology of the manuscripts. Unfortunately, many churches choose to remain lay and leave that kind of research to the same scholars who have been recklessly adding random words to the Bible for the past 30 years.

So when I talk about the reliability of scripture, I’m speaking directly to its integrity, as opposed to its literary interpretation or its translation. Much of the Bible’s integrity was confirmed with the discovery of the Dead Sea Scrolls between 1947 and 1956. Although most of these scrolls were written in Hebrew, they provided samples of scripture written before AD 100, and were surprisingly close to the manuscripts we already had in our possession.

These, along with hundreds of other manuscripts, helped in building a critical text of high integrity. The most widely accepted New Testament critical text is called the Nestle-Aland text. It incorporates hundreds of different manuscripts and papyrus fragments from all over the world, in dozens of languages, and documents notable variations. This is where the majority of translations ultimately stem from, which is why I bought my own copy. The Old Testament Masoretic text has also come under heavy review, and new research in this field is particularly promising in restoring the origins of a language that became convoluted due to many mistakes and the lack of vowels in the language’s infancy.

NOTE: The original King James was based on translations traced back to Erasmus’ Textus Receptus, which was later discredited as a corrupt translation because Erasmus was unable to find a high-quality source copy to work from. Ironically, many “tweaks” originally made for King James remain in today’s translations; for example, the book of ‘James’ is really the book of ‘Jacob’ in the Greek, but many believe it was renamed to ‘James’ to make it sound more English in nature. Scholars seem too afraid to fix the name in our English Bibles for fear that it will cause a revolution.

The Nestle-Aland critical text of the New Testament is fairly solid, although errors have been found and some controversial decisions have been made. For example, the verse in Paul’s letter to Timothy about women remaining silent in the synagogue was suspiciously moved around in several different manuscripts, suggesting that it may have been added at some point by a scribe, and then moved around by other scribes to make it fit grammatically. Most Christians explain the logic away anyway, or pass it off as Jewish culture (although there are a few misogynistic sects that take it literally), but it’s entirely possible the verse might not have even been part of the original manuscript! Most larger issues have been resolved in the past decade. The newest release of the NIV includes additional warnings about passages that were not found to have strong witnesses, such as the story of the adulteress about to be stoned. The mad rush toward Gnosticism spurred by literary works such as The Da Vinci Code seems to have made Bible scholars more honest and forthcoming in recent years. What’s nice is that you can read the critical text and see footnotes containing the many different variants from manuscript to manuscript. It’s very easy to see how things originally got out of whack by having all of the information right there to review.

What essentially decides whether a piece of manuscript is reliable is how many larger witnesses and root texts it has, and where those texts originated. I spent a lot of time researching New Testament scriptures regarding wine (as many Christians in the South have adopted a modern-day version of asceticism). I grabbed electronic copies of the Codex Vaticanus and Codex Sinaiticus and studied the verses in their uncial character format (all caps, no spaces). It’s impressive to cross-reference 4th-century and 13th-century uncial manuscripts and find that the text is either perfect, or nearly perfect.

I continued to perform several different examinations of key verses in both the manuscripts and the digital copies of fragments I had available, as well as my copy of Nestle-Aland (which is much more thorough), and was pleased to find not only that they were consistent with the manuscripts we had, but that the decisions made throughout the criticism process appeared to be quite sound, and almost uneventful. I’m fully convinced in my own mind that the critical text we have today is by far the best we’ve ever had.

To add some geek-worthiness to my endeavor, I recently fed much of the Greek New Testament into a Markov-based language classifier. This is a technique used in machine learning that allows the computer to identify and weigh the presence of syntactic patterns across various texts, and it can compare different types of documents with impressive accuracy. I used it for a different purpose here, which was to extract critical patterns of authorship. I found that, on a syntactic level, even a computer found significant consistency between various manuscripts of the same author. In other words, even a computer is capable of seeing such a striking resemblance between various books that it believes they are consistent with their purported authors. Cool stuff – I’ll write a paper about it some day. (Note to scholars: the first critical pattern that popped out was “the kingdom of God”, in Greek of course.)
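
For the curious, the flavor of the technique looks something like this. This is not my actual classifier – just a minimal, self-contained sketch that builds first-order (Markov) bigram profiles of two texts and compares them with cosine similarity; the sample strings are placeholders:

#include <math.h>
#include <stdio.h>
#include <string.h>

#define ALPHA 256

/* Count byte-bigram transitions: a first-order Markov profile of the text */
static void profile(const char *text, double freq[ALPHA][ALPHA])
{
    size_t n = strlen(text);
    for (size_t i = 0; i + 1 < n; i++)
        freq[(unsigned char)text[i]][(unsigned char)text[i + 1]] += 1.0;
}

/* Cosine similarity between two profiles: 1.0 means identical style */
static double cosine(double a[ALPHA][ALPHA], double b[ALPHA][ALPHA])
{
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < ALPHA; i++)
        for (int j = 0; j < ALPHA; j++) {
            dot += a[i][j] * b[i][j];
            na  += a[i][j] * a[i][j];
            nb  += b[i][j] * b[i][j];
        }
    return dot / (sqrt(na) * sqrt(nb) + 1e-12);
}

int main(void)
{
    static double a[ALPHA][ALPHA], b[ALPHA][ALPHA];
    profile("text known to be by author A goes here", a); /* placeholder */
    profile("disputed text goes here", b);                /* placeholder */
    printf("stylistic similarity: %f\n", cosine(a, b));
    return 0;
}

A real experiment would tokenize Greek words rather than raw bytes and weigh recurring phrases, but the principle – comparing transition-frequency profiles between texts – is the same.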

Once you get outside the critical Greek text, things become a little dicey, however. As I mentioned, the English language simply isn’t a very good one, and on top of that, many publishers now improvise the language to make it more readable. Mind you, there are no significant doctrinal changes (unless you use a completely broken text like the CEV, ESV, GW, etc.), but there’s a level of clarity that goes missing. For example, a verse in the Greek says, “bind up the loins of the thoughts of your minds”. In Greek, loins literally means “reproductive parts”; so the scripture is saying to bind up the reproductive parts of your mind – significant imagery there, and lots of power. One English translation replaced it with a mere “be sound minded”. What was wrong with the Greek version? Granted, there is room for idiomatic phrases, but at some point you’ve got to draw the line. If you’re studying Greek, I’d advise picking up a lexicon that isn’t indoctrinated (such as Oxford’s Greek-English Lexicon – the big, heavy one) to get the full realm of meaning.

As for historical integrity, it is remarkable how well the texts have survived.

Harmony in Logic

It seems odd to describe a God who became man, then died and was resurrected for the purpose of reconciling man to God, as logical. To most, logical means an amalgamation of science, mathematics, and reasoning – like the Matrix, or a good quality toaster. Yet I’ve come to find a beautiful harmony of logic in many parts of the big picture of the story. Logic is constructed in layers, just as any great architecture is; if a single piece of the framework doesn’t fit, the entire structure falls.

There are countless examples of beautiful logic in the Bible. Some of them have taken me years to understand; others have taken scholars lifetimes. Now consider that the Bible, as we know it, was written by more than 40 different authors from all walks of life, over a period of about 2,000 years, through times of peace and war. It should be a mess, but it’s not. It’s relatively easy to forget this when we see the polished leather-bound book sitting at the bookstore. In spite of the Bible’s unique and diverse background, it has some of the most beautiful logic to be found, transcending hundreds of years of writing and surpassing the philosophical wisdom of others. Someone once said, “If Jesus was made up by someone, I want to know who so I can worship him.”

If indeed God is real, then you’d expect that he’d also be quite smart, with a firm grasp on logical concepts – something most don’t normally picture in God. If God really is God, then it certainly doesn’t make any sense that he’d be an all-powerful, but stupid God. He must surely have invented the blueprints for the atom, particles, matter, energy, and biology. It’s very non-traditional in our culture to view God as a really smart individual, but if he’s real, then genius and science are a part of his makeup. With that, comes logic.

If Man Made God, He’d Be More Passive

I often hear individuals dismiss Christianity as a belief created by man. If this is true, then I wonder why we didn’t create a God who was more passive and catered to our emotional needs, who would tell us that everything was going to be alright and let us live for ourselves. The Bible isn’t for the weak-minded or the emotionally incomplete. It speaks of a powerful, perfect God and explains that if one is to follow Christ, they must deny themselves their own wants and desires and take up their cross. It tells us that God hates sin and that we should deny our own personal pleasures if they are sinful. It guarantees trials, tribulation, and persecution – and that’s for the ones who are living right. Christianity is one of the hardest things to live out – especially if you live in a country that persecutes Christians.

God’s gift of salvation is free, but it costs everything to follow Christ – our entire life. If man created God, we certainly went to a lot of trouble to inconvenience ourselves. I don’t know anyone who, if given the option, would choose to share their money, live a moral lifestyle, and constantly deny their own desires – except, of course, those who have been changed by the power of God.

In America, it’s very easy to become a mediocre Christian, go to church once in a while and try not to swear too much. If anyone is guilty of creating a God, it seems that this dubious honor goes to the individuals who have taken the Bible and made themselves a pansy, self-serving God that serves their own desires. That’s not the God of the Bible, however. The American god of comfort is undoubtedly a god created by man, but the God of the Bible is most definitely not a work that would have been voluntarily created by any human, let alone 40.

Scientifically, Evidence Points to a Master Architect

You don’t have to believe in literal creation or intelligent design to be a Christian. You can even believe in Darwinian evolution if you want. I used to accept many theories blindly. Once I began researching them, I realized that the evidence we have today overwhelmingly contradicts some of them – including Darwinism, in my personal opinion. Now, I’ll be the first to admit, I have no idea how we got here – I just wish more people of science would admit the same thing, and I wish more Christians would admit that we don’t need to know the answer to that in order to be a Christian.

Personally, I find macro-evolution theories lacking, but so are many of the Christian theories; both have become just as much a religion, and just as unfalsifiable. Now, if by some amazingly sound evidence and scientific method, Darwinian evolution were somehow proven in a lab, it would by no means trainwreck my faith. I’m not naive enough to suppose that God’s way of getting things done is limited to what Sunday School teaches. After all, Genesis gave us the end results and the purpose; it didn’t give us a method, and many scholars even argue that the book was an allegorical account of an event later revealed. As I said before, I believe that the science we study today was inspired by God – a belief that is in good company with many respected mathematicians. I see God’s fingerprints all over what look like design patterns and anthropic tuning of physics and life; things that could have gone any other way, but didn’t. The natural processes that we observe, and that many use to explain away God, could have just as easily been the scientific building blocks used by a master architect.

Unfortunately, though, modern science isn’t always about finding truth, and anyone who’s been in the scientific community knows that papers need to get published, and data needs to get used to confirm findings. Sometimes, these things take precedence over better science. Sometimes not. Scientific authority is subject to the same signal-to-noise problem in its feedback loop as theology is. For many reasons, science can sometimes be married to an agenda. It’s not the scientific process of Darwinian evolution that many Christians take issue with, but rather the philosophical conclusions seemingly married to it by its constituents.

Stephen J. Gould’s theory of punctuated equilibrium ironically uses the same basic observations that Christians use in their creation theories, but with an atheist’s spin on things. The problem, I suppose, is that sometimes science can be predisposed to philosophically discount the divine, and so therefore no matter how compelling the evidence of God is, the conclusions of any evidence already presume atheism.  As I said, I have no idea how we got here… intelligent design? Darwinian evolution? Literal creation? Punctuated equilibrium? I haven’t seen a theory yet that doesn’t have some big holes in it. We might as well believe that God really did bury dinosaur bones if we’re going to accept any of those without criticism.

God Became One of Us

Usually when we think about God, we think about an all-powerful king in richly colored clothing who would most likely want to be served on earth, and contribute to amazing social advances that would solve world problems – all from the comfort of his bathtub. It’s very difficult to imagine a God that became one of us and washed our feet, or took on our sin to become as dirty as us. The climactic peak of the Gospel for me is not the resurrection (although that’s the most important aspect), but rather when Jesus spoke these words while on the cross:

Eloi, Eloi, Lama Sabachthani?

This translates to “My God, My God, why have you forsaken me?”. These words bring a rush of images when thinking about Jesus as the bearer of all sin. Jesus himself never sinned – he was the perfect human; he overcame sin. When he was on the cross, however, the weight of the sin of the entire world was upon him, which parallels the sin offering required in the Old Testament (where the sin of the camp would be transferred onto the sacrifice). The undeniable realization is that God (the father) had to turn His back on His son because of the sin that was on Him – our sin.

And so we get a brief glimpse into the relationship between God and his son, and observe just how much of a stench the sins of the world are to God. The act of bearing the sins of the world illustrates the love Jesus had for his people, as he (God) was willing to be separated (for a time) from His father by bearing our sin. Never before has anyone’s god shown such a love – to the point of becoming just as dirty and detestable as the world for the purpose of redeeming us. In fact, no other god written about has ever been interested in redeeming us at all, but rather in controlling us. They hold our souls for ransom, whereas Jesus became a ransom for our souls. Other gods offered asceticism, while Jesus offered freedom. He knew we would never be worthy, and for a time stooped to our level of filth so that we could be raised to His level of purity.

One can’t even begin to imagine the sorrow that must have been on Jesus as he felt our dirt on him for the first time. A righteous and holy God picked up our filth and wore it as clothing. Our best fiction authors can’t come up with stuff this good. At the very least, Jesus did what no other gods worshipped by the people ever did: dwelled among the people, became common among the people, and sacrificed himself for the people.

He’s Nothing We Would Have Expected God to be

Jesus was a different savior than anyone had expected, as evidenced by many people’s hatred of him during that time, and by the Pharisees’ plots to kill him. The Jews were expecting a savior who would deliver them from the Roman government, who would validate all of their religious traditions, and who would reward those who had kept the law – much like the other gods written about and worshipped, who required obedience over faith. If Jesus had a publicist, they probably would have told him this was the god he needed to become. He could have lived a very rich and worshipped life, but he took the road he was destined for.

Jesus came without political motivation. He did not destroy the government, and he did not set himself up as an idol to worship. Jesus’ mission was completely incomprehensible to even his own disciples until the opportune time, yet was completely logical. What may have seemed like the important issues of the time were really not very important in the grand scheme of eternal salvation. Jesus came with a single purpose: to offer himself up as a sacrifice to God for our reconciliation. He stuck to the plan without marketing it, without asking for recognition, and without even taking his rightful place as a king on the earth.

Before Jesus came, keeping God’s law was the way to get closest to God. Many zealots, such as the apostle Paul, had been schooled in Judaism and could recite the scripture from memory. Great pride was taken by the Pharisees, who kept the letter of the law and performed their due service to God.

Nobody thought we needed someone like Jesus.

When Jesus came and fulfilled prophecy, it was only apparent after the fact that he was doing something even more important than putting a social band-aid on the political issues of the day. The sacrifice Jesus made would end up setting generations of the world free, providing for the eternal needs of people rather than their immediate comfort while temporarily on the earth.

He’s Everything We Expected God to be

At the same time that Jesus was fulfilling a mission that nobody understood, he fulfilled over 300 prophecies that were written about the Messiah. This started with the time and place of his birth, which involved a massive astronomical event; he came from the lineage of King David, came out of Egypt, was praised on a colt on the way to Jerusalem, was crucified between two thieves, and eventually rose from the dead, as prophesied. He had no control over many of these; others, he fulfilled intentionally.

Along with over 300 prophecies being fulfilled, this amazing man was like no other. He spent his days healing people of disease and ministering to the needs of everyone he came in contact with. He illustrated the kind of wisdom and temperance only God would have, and the supernatural knowledge and ability that could only be expected from God.

At the same time, Jesus was filled with love and compassion for people – something that modern Christianity lacks. Thousands followed him around, and Jesus made sure they were all fed. He physically touched the sick. He showed love to people that many today consider refuse. The all-powerful God of the universe established physical contact with diseased, rotting lepers and healed them, along with other sinners whom our society would mock. People came to him, and out of his compassion he healed their sick children and even raised loved ones from the dead. Even though he was on Earth with a much more important purpose, Jesus cared enough to take the time to make people’s lives better on a personal level.

Jesus was everything we would expect in a perfect human being, with the character you’d expect of God. He had human emotion enough to weep when his friend Lazarus died, even though he knew he would later raise him from the dead. He washed his disciples’ feet and served the people he cared for. Never once did he do anything for personal gain; he continued to give from the day he was born – and this same love for humanity is the sign of a true Christian.

He Overcame Death

If someone can overcome death, I’m inclined to listen to whatever they have to say. The most important characteristic of Jesus, and what gives power and authority to the Gospel, isn’t Jesus’ death on the cross, but his resurrection from death three days later. In spite of Jesus’ amazing life, the Gospel just wouldn’t be very compelling if it had ended with Jesus dying on a cross and his disciples going back to tending sheep.

Jesus was crucified until dead and then speared through the side by a Roman soldier to make certain (there was much pressure from the Jewish Pharisees to make sure of this). In spite of this, historical accounts (including non-biblical sources) tell us that three days later (as Jesus prophesied), he moved away the stone and gloriously appeared to his disciples (after scaring a few Roman soldiers half to death). Before departing into heaven, Jesus ate with his disciples, allowed them to touch the wounds in his hands and feet, and appeared to more than 500 people.

Jesus proved his deity in allowing us (humans) to kill him, make sure he was dead, and lock him in a tomb with the government guarding it. By rising from the dead, Jesus has proven that he is surely God, and has authority even over death.

God is Still Working

What has made the truth of the Bible the most evident to me is that God isn’t dead. The story doesn’t end 2,000 years ago with some abstract performance, but rather God is alive and working in people today.

Revelation is far better than philosophy. Philosophy changes in this country every few years. Many believe people like Martin Luther were great philosophers, but in reality they had revelation – not theory; revelation that came directly from God. I for one would rather be certain about something and not have to go back and wonder if I’ve made a mistake in my thinking. It is revelation that opens the eyes of many individuals to see just how perfect God is, which immediately makes you see how deficient even his most devout followers are. If philosophy is seeking answers, revelation is the God-given answer. He gives it out willingly, and is still giving it out today.

I know there’s a lot of hate in this world, and many people claiming to be Christians who don’t act like what even atheists could tell you about Christianity. There’s a lot of pain, judgment, bigotry, and we’re in uncertain times. You’ll know the real Christians by their love through all of this. They’re not the ones holding up “God hates fags” signs. They’re not the ones trying to put Muslims in concentration camps. They’re the quiet ones out there doing the two things that Jesus told them to do: love God, and love others. Unconditionally. No matter what their religion or sexual orientation. If there’s one premise the entire Bible can be summarized into it’s this: God is love. It’s his followers that fall short of that, not him.

If a huge cosmic mistake has been made, and there really was no God, then I haven’t missed much. I’ve benefited greatly from the wisdom of the Bible and have lived a good life because of it. I’ve prospered both spiritually and financially from trusting God and applying Christian principles to my life, which has allowed me in turn to support my church financially and raise normal children. The quality of my life is much higher than it’s ever been, and I will some day die a very fulfilled individual. I have no doubt, however, that the God in me is real, and is doing some amazing things in this world.

On NCCIC/FBI Joint Report JAR-16-20296

Social media is rife with analysis of the FBI joint report on Russian malicious cyber activity, and whether or not it provides sufficient evidence to tie Russia to election hacking. What most people are missing is that the JAR was not intended as a presentation of evidence, but rather a statement about the Russian compromises, followed by a detailed scavenger hunt for administrators to use to identify the possibility of a compromise on their systems. The data included indicators of compromise, not the evidentiary artifacts that tie Russia to the DNC hack.

One thing that’s been made clear by recent statements by James Clapper and General Roberts is that they don’t know how deep inside American computing infrastructure Russia has been able to get a foothold. Rogers cited his biggest fear as the possibility of Russian interference by injection of false data into existing computer systems. Imagine the financial systems that drive the stock market, criminal databases, driver’s license databases, and other infrastructure being subject to malicious records injection (or deletion) by a nation state. The FBI is clearly scared that Russia has penetrated more systems than we know about, and has put out pages of information to help admins go on the equivalent of a bug bounty.

Everyone knows that when you open a bug bounty, you get a flood of false positives, but somewhere in that mess you also get some true positives – some real data. What the government has done in releasing the JAR is make an effort to expand its intelligence by having admins look for (and report on) activity that looks like, and smells like, the same kind of activity they found happening at the DNC. It’s well understood this will include false positives; the Vermont power grid was a great example of this. False positives help them too, because they help shore up the indicators they’re using by providing more data points to correlate. So whether they get a thousand false positives, or a few true ones in there, all of the data they receive helps to firm up their intelligence on Russia, including indicators of where Russia’s interests lie.

Given that we don’t know how strong of a grasp Russia has on our systems, the JAR created a Where’s Waldo puzzle for network admins to follow that highlights some of the looser indicators of compromise (IP addresses, PHP artifacts, and other weak data) that doesn’t establish a link to Russia, but does make perfect sense for a network administrator to use to find evidence of a similar compromise. The indicators that tie Russia to the DNC hack were not included in the JAR and are undoubtedly classified.

There are many good reasons not to release your evidentiary artifacts to the public. For starters, tradecraft is easy to alter. The quickest way to get Russia to fall off our radar is to tell them exactly how we’re tracking them, or what indicators we’re using for attribution. It’s also a great way to get other nation states to dress up their own tradecraft to mimic Russia, throwing off our attribution of their activities. Secondly, it releases information about our [classified] collection and penetration capabilities. As much as Clapper would like to release evidence to the public, the government has to be very selective about what gets released, because it speaks to our capabilities. Both Clapper and Congress have acknowledged that we have a “cyber presence” in several countries and that those points of presence are largely clandestine. In other words, we’ve secretly hacked the Russians, and probably many other countries, and releasing the evidence we have on Russia could burn those positions.

Consider this: perhaps we have collection not only from the DNC’s systems, located in the United States, but also from other endpoints inside Russia (or other countries), from C2 servers, or even from uplinks leading directly back to the Kremlin. Perhaps we can account for the entire picture based on global collection of traffic, but releasing evidence of that would directly hamper our ability to perform these types of collections in the future. There’s no doubt that Clapper is being very careful about what he says. If we can intercept the comms of Russian leaders celebrating Trump’s election, we likely can also intercept the network traffic coming back to the Kremlin.

Looking at how various agencies are in agreement on this subject, and given the FBI’s recent and obvious agenda to influence the elections themselves in the Republicans’ favor, it will not surprise me at all to find that there is credible evidence linking Russia to all of this. While it’s possible, I don’t get the impression that the FBI is simply trying to wag the dog to distract from their own proclivities. CrowdStrike’s involvement certainly helps to make their findings believable. At the same time, we’ll probably never hear about much of it directly. What the government could do, and should do, however, is commission an independent peer review of both CrowdStrike’s findings and their own; this would allow them the luxury of continuing to compartmentalize the classified indicators and artifacts they’ve established, while also building the confidence of the general public. There are a number of third-party research arms capable of doing this. To name a few: MITRE Corporation has a long history of working with the intelligence community, the in-house experience to peer-review these findings, and the clearances already in place to make sure that data is never leaked. MIT Lincoln Labs also has a cyber arm more than capable of reviewing this data, as do a number of universities that are actively doing this type of work for the government already.

We don’t ever need to see the data, at least until the indicators and the capabilities behind them become obsolete. In fact, even if we saw the data today, most information security experts still wouldn’t be able to agree on it. To interpret this data correctly, you need not only expert cyber warfare experience, but also years of intelligence on Russia (and maybe other countries), full knowledge of our capabilities and where our points of presence are, and a lot of other intel that will likely always remain classified. Giving the evidence we have on the DNC attack to security experts, without the rest of the intelligence to go with it, would be like giving spaghetti to a baby. That’s why we both need and are benefitting from a Director of National Intelligence on this matter.

What we do need to see, however, are independent reviews by people with the experience. Look to the FFRDCs for that kind of expertise. Many of the experts in this space are seasoned career intelligence people, detached enough from government to be impartial in their research, but close enough to government to be able to review the intel that the security community at large will never see.

Three Recommendations to Harden iOS Against Jailbreaks and Malware

Apple has been fighting for a secure iPhone since 2007, when the first jailbreaks came out about two weeks after the phone was released. Since then, they’ve gotten quite good at keeping the jailbreak community on the defensive side of this cat and mouse game, and have hardened their OS to an impressive degree. Nonetheless, as we see with every release, there are still vulnerabilities and tomhackery to be had. Among the most notable recent exploits, iOS 9 was patched for a WebKit memory corruption vulnerability that was used to deploy the Trident / Pegasus surveillance kit on selected nation-state targets, and Google Project Zero recently announced plans to release a jailbreak for iOS 10.1 after submitting an impressive number of critical vulnerabilities to Apple (props to Ian Beer, who should be promoted to wizard).

I’ve been thinking about ways to harden iOS against jailbreaks, and came up with three recommendations that would up the game considerably for attackers. Two of them involve leveraging the Secure Enclave, and one is an OS hardening technique.

Perfect security doesn’t really exist, of course; it’s not about making a device hack proof, but rather increasing the cost and time it takes to penetrate a target. These ideas are designed to do just that: They’d greatly frustrate and upset current ongoing jailbreak and malware efforts.

Frustrating Delivery and Persistence using MAC Policy Framework

The MAC Policy Framework (macf) is a kernel-level access control framework originally written for TrustedBSD, which made its way into the xnu kernel used by iOS and macOS. It’s used for sandboxing, SIP, and other security functions. The MAC (mandatory access control) framework provides granular controls over many aspects of the file system, processes, memory, sockets, and other areas of the system. It’s also a component of the kernel I’ve spent a lot of time researching lately for Little Flocker.

Rooting iOS often requires getting kernel execution privileges, at which point, in most cases, all bets are off – you can, of course, patch out macf hooks. Gaining kernel execution, however, can be especially tricky if you’re depending on an exploit chain that performs tasks that can be thwarted using macf before your kernel code gets off the ground. It can also force an attacker to increase the size and complexity of their payload in order to successfully disable it, all of which takes time and increases cost. Should an attacker still succeed, disabling macf will leave sandboxes and a number of other iOS features broken – features that jailbreaks want to leave intact. In short, it would require a much more complex and intricate attack to whack macf without screwing up the rest of the operating system.

For example, consider a kernel level exploit that requires an exploit chain involving writing to the root file system, injecting code into other processes (task_for_pid), loading a kext, or performing other tasks that can be stopped with a macf policy. If you can prevent that task_for_pid from ever happening, then that exploit chain might not be able to get off the ground to make the rest possible. Should the attack succeed in spite of this added security, you’ve now forced the attacker to go digging pretty deep in the kernel, find the right memory addresses to patch out macf, and invest a lot of time to be sure their jailbreak doesn’t completely break application sandboxing or other features. In other words, it takes a lot of work to break macf without also breaking third party apps and other features of the iPhone. Adding some code to sandboxd to test macf would also be extra gravy; if macf is compromised and that causes sandboxd to completely break, the user is going to notice it and perhaps find their phone unusable (which is what you’d want if a device is compromised).

Apple understands that if you can keep an exploit chain from getting off the ground, you can frustrate attempts to gain kernel execution. For example, Apple mounts the root partition as read-only; it’s trivial to undo this, as is demonstrated by any jailbreak – all you need is root, not even kernel. But what about macf? The MAC Policy Framework can prevent any writes to the root file system at the kernel level, and can even prevent it from being mounted read-write except by a software update. MAC is so well written that even once you’re in the kernel, opening a file (vnode_open) still invokes macf hooks; you’d have to go in and patch all of those hooks out first in order to disable it. This means that lower down on your exploit chain, your root user won’t be able to gain persistence without first performing surgery in the kernel (and likely breaking the rest of the OS).

But wait, macf can do a heck of a lot more than just file control. Using macf, you can prevent unauthorized kexts from loading, prevent processes (like Cycript and Mobile Substrate) from attaching to other processes (task_for_pid has hooks into macf), and even prevent signals, IPC, sockets, and a lot more that could be used to exploit the OS. There’s a whole lot you can do to frustrate an exploit chain before it even gets off the ground by adding some carefully crafted macf policies into iOS that operate on the entire system, and not just inside the sandbox.

Apple has yet to take full advantage of what macf can do to defend against an exploit chain, but it could greatly frustrate an attack. Care would have to be taken, of course, to ensure that mechanisms like OTA software updates could still perform writes to the file system and other such tasks; this is trivial to do with macf.
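To make this concrete, below is a minimal sketch of what such a policy might look like, written against the TrustedBSD KPI in xnu’s security/mac_policy.h. The hook signatures and registration call are the framework’s own; everything else – the policy name “roothold”, the software_update_in_progress flag, and the blanket task access denial – is hypothetical and purely illustrative, not how Apple would actually ship it.

    /*
     * Hypothetical macf policy sketch ("roothold"): deny root file system
     * writes outside of a software update, and deny task access between
     * processes. The KPI (security/mac_policy.h) is real; the policy and
     * its names are illustrative only.
     */
    #include <mach/mach_types.h>
    #include <mach/kmod.h>
    #include <sys/errno.h>
    #include <sys/vnode.h>
    #include <sys/mount.h>
    #include <sys/proc.h>
    #include <security/mac_policy.h>

    static int software_update_in_progress = 0; /* set only via a verified update path */

    /* Deny writes to any vnode that lives on the root file system. */
    static int
    roothold_vnode_check_write(kauth_cred_t active_cred, kauth_cred_t file_cred,
        struct vnode *vp, struct label *label)
    {
        mount_t mp = vnode_mount(vp);
        if (mp != NULL && (vfs_flags(mp) & MNT_ROOTFS) && !software_update_in_progress)
            return EPERM;
        return 0;
    }

    /* Deny task_for_pid-style access to other processes outright. */
    static int
    roothold_proc_check_get_task(kauth_cred_t cred, struct proc *p)
    {
        return EPERM;
    }

    static struct mac_policy_ops roothold_ops = {
        .mpo_vnode_check_write   = roothold_vnode_check_write,
        .mpo_proc_check_get_task = roothold_proc_check_get_task,
    };

    static struct mac_policy_conf roothold_conf = {
        .mpc_name           = "roothold",
        .mpc_fullname       = "Illustrative root FS and task access policy",
        .mpc_ops            = &roothold_ops,
        .mpc_loadtime_flags = MPC_LOADTIME_FLAG_NOTLATE,
    };

    static mac_policy_handle_t roothold_handle;

    kern_return_t roothold_start(kmod_info_t *ki, void *d)
    {
        return (mac_policy_register(&roothold_conf, &roothold_handle, d) == 0)
            ? KERN_SUCCESS : KERN_FAILURE;
    }

A real deployment would whitelist Apple’s own update daemons rather than rely on a global flag, but even this skeleton shows how cheap it is to put a kernel-enforced gate in front of the operations a typical exploit chain depends on.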

Leveraging the SEP for Executable Page Validation

The Secure Enclave (SEP) has a DMA path for performing very high speed cryptography operations for the main processor. It’s also responsible for unwrapping class keys and performing a number of other critical operations that are needed in order to read user data. The SEP’s cryptographic capabilities could be leveraged to ensure that the state of executable memory has not been tampered with after boot. I’ll explain.

As I said earlier, the system partition is read-only and remains read-only for the life of the operating system (that is, until it’s upgraded). Background tasks and other types of third party software don’t load until after the device has booted, and usually not until the user has authenticated. That means that somewhere in the boot process is a predictable machine state that is unique to the version of iOS running on it, at least as far as executable pages are concerned.

Whenever a new version of iOS is loaded onto the device, the update process could set a flag in the SEP so that on next reboot, the SEP will take a measurement of all the executable memory pages at a specific time when the state of the machine can be reliably reproduced; this is likely after the OS has booted but before the user interface is presented. These measurements could include a series of hashes of each page marked executable in memory, or possibly other types of measurements that could be optimized. These measurements get stored in the SEP until the software is updated again or until the device is wiped.

Every time iOS boots after this, the same measurements are taken of all executable pages in memory. If a rootkit has been made persistent, the pages won’t match, and the SEP could refuse to unlock class keys, which would leave the user at a “Connect to iTunes” screen or similar.

This technique may not work on some tethered jailbreaks that actively exploit the OS post-boot, but nobody really cares about those much anyway; the user is aware of them, rootkits or malware can’t leverage those without the user’s knowledge, and the user is effectively crippling their phone to use a tethered jailbreak. It does, however, protect against code that gets executed or altered while the system is booting, including detecting kernel patches made in memory. An attacker would have to execute their exploit after the measurements are taken in order for the code to go unnoticed by the SEP.

Care must be taken to ensure that the technique used to flag a software update cannot be reproduced by a jailbreak; this can be done with proper certificate management of the event.
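Conceptually, the measurement pass might look like the sketch below. The digest interface mirrors xnu’s corecrypto, but every sep_* call and the page walker are hypothetical – the SEP’s mailbox protocol isn’t public – so treat this as an illustration of the flow, not an implementation: hash every executable page at a reproducible boot state, then either enroll the digest (first boot after an authenticated update) or verify against it before class keys are released.

    /*
     * Hypothetical sketch: SEP-backed measurement of executable pages.
     * The sep_* calls and foreach_executable_page() are invented for
     * illustration; the digest calls mirror xnu's corecrypto.
     */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <corecrypto/ccdigest.h>
    #include <corecrypto/ccsha2.h>

    extern bool sep_enrollment_pending(void);                 /* hypothetical */
    extern void sep_enroll_measurement(const uint8_t d[32]);  /* hypothetical */
    extern bool sep_verify_measurement(const uint8_t d[32]);  /* hypothetical */
    extern void sep_refuse_class_keys(void);                  /* hypothetical */

    /* Hypothetical walker: visit each executable page in a canonical order. */
    extern void foreach_executable_page(void (*cb)(const void *, size_t, void *),
                                        void *ctx);

    static void fold_page(const void *page, size_t len, void *ctx)
    {
        /* Fold this page into the running SHA-256. */
        ccdigest_update(ccsha256_di(), (struct ccdigest_ctx *)ctx, len, page);
    }

    void measure_executable_state(void)
    {
        const struct ccdigest_info *di = ccsha256_di();
        ccdigest_di_decl(di, ctx);
        uint8_t digest[32];

        ccdigest_init(di, ctx);
        foreach_executable_page(fold_page, ctx);
        ccdigest_final(di, ctx, digest);

        if (sep_enrollment_pending()) {
            /* First boot after an authenticated update: record the state. */
            sep_enroll_measurement(digest);
        } else if (!sep_verify_measurement(digest)) {
            /* Executable memory differs from the enrolled state: likely a
             * persistent patch. Refuse to unwrap class keys. */
            sep_refuse_class_keys();
        }
    }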

Encrypt the Root Partition and Leverage the SEP’s File Keys

One final concept that takes control out of the hands of the kernel is to rely on the SEP to prevent files from being created on the root partition, by encrypting the root partition with something resembling a class key, in a way that lets the SEP refuse to wrap new file keys for files on the root partition. Presently, the file system’s keys are all stored in effaceable storage; however, if rootfs’ keys were treated as a kind of class key inside the SEP, the SEP could refuse to process any new file keys for that specific class short of a system update. Even should Trident be able to exploit the kernel, it theoretically shouldn’t be able to gain persistence in this scenario without also exploiting the Secure Enclave, as it couldn’t create any new files on the file system; it may also be possible to prevent file writes in the same way, with some changes.
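In pseudocode, the SEP-side policy might amount to nothing more than the gate below. Every name here is hypothetical – the SEP’s key-wrapping internals are not public – but it captures the idea: the root partition’s keys become a class of their own, and that class only accepts new file keys during a verified update.

    /* Hypothetical SEP-side sketch: treat rootfs keys as their own class and
     * refuse to wrap new file keys for it outside a verified update.
     * All names are illustrative. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef enum { KEY_CLASS_A, KEY_CLASS_B, KEY_CLASS_C, KEY_CLASS_D,
                   KEY_CLASS_ROOTFS } key_class_t;

    extern bool verified_update_in_progress(void);            /* hypothetical */
    extern int  wrap_key(key_class_t cls, const uint8_t *file_key,
                         size_t len, uint8_t *wrapped_out);   /* hypothetical */

    int sep_wrap_file_key(key_class_t cls, const uint8_t *file_key,
                          size_t len, uint8_t *wrapped_out)
    {
        /* No new keys for the root file system class outside an update:
         * even a kernel-level attacker can't mint keys for new files,
         * and therefore can't persist anything to the root partition. */
        if (cls == KEY_CLASS_ROOTFS && !verified_update_in_progress())
            return -1;
        return wrap_key(cls, file_key, len, wrapped_out);
    }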

Conclusion

The SEP is one of Apple’s most powerful weapons. Two of these three solutions recommend using it as a means to enforce a predictable memory state and root file system. The third could lead to a more hardened operating system in which an exploit chain could become frustrated, and/or require a much more elaborate kernel attack in order to succeed.

Can We Put the 16GB “Pro” Myth to Rest?

Apple’s latest MacBook Pro line is limited to 16GB due to energy (and likely heat) constraints, and that’s gotten a lot of people complaining that it simply isn’t enough for “real pros”. Ironically, many of the people saying that don’t quite fall into what many others would consider a “real pro” themselves – at least based on the target demographic of Apple’s “pro” line, which has traditionally been geared toward working professionals such as photographers, producers, engineers, and the like (not managers and bloggers). But even so, let’s take a look at what it takes to really pin your MacBook Pro’s memory, from a “professional’s” perspective.

I fired up a bunch of apps and projects (more than I’d ever work on at one time) in every app I could possibly think of on my MacBook Pro. These included the apps you’d find professional photographers, designers, software engineers, penetration testers, reverse engineers, and other types running – and I ran them all at once, switching between them and making “professionally-type-stuff” happen as I went.

Here’s a list of everything I ran at once:

  • VMware Fusion: Two running virtual machines (Windows 10, macOS Sierra)
  • Adobe Photoshop CC: Four 1+ GB, 36 MP professional multi-layer photos
  • Adobe InDesign CC: A 22 page photography-intensive project
  • Xcode: Four production Objective-C projects, all cleaned and rebuilt
  • Microsoft PowerPoint: A slide deck presentation
  • Microsoft Word: A 20+ page document with graphics
  • MachOView: Analyzing a daemon binary
  • Mozilla Firefox: Viewing a website
  • Safari: Viewing a different website
  • Preview: Three PDF books
  • Hopper Disassembler: Performing an analysis on a binary
  • Wireshark: Performing a live network capture as I do all of this
  • IDA Pro 64-bit: Analyzing a 64-bit Intel binary
  • Apple Mail: Viewing four mailboxes
  • Tweetbot: Reading all the flames and trolls in my mentions
  • iBooks: Currently viewing an ebook I paid for
  • Skype: Logged in and idling
  • Terminal: A few sessions idling
  • iTunes
  • Little Flocker
  • Little Snitch
  • OverSight
  • Finder
  • Messages
  • Veracrypt
  • Activity Monitor
  • Path Finder
  • Console
  • Probably a lot I’ve missed

The result? I ran out of things to do before I ever ran out of RAM. I only ever made it to 14.5GB before the system decided to start paging out, so I didn’t even have the chance to burn up all that delicious RAM.

[Screenshot: memory usage peaking around 14.5GB]

I got most of this running, except for Adobe InDesign, before the system hit the warning zone and began paging. Once I ran Adobe InDesign, macOS did what it was supposed to do and started paging out before I hit a hard limit. After InDesign finished loading my project, I then ended up with even less physical RAM in use.

[Screenshot: memory usage after InDesign finished loading]

I would have had to open a dozen or more additional projects in order to start redlining to the point of using up all my RAM, and even that likely wouldn’t have gotten me there. The manuscripts for all of my books put together are only maybe 20 MB in size. Additional PowerPoint slide decks only consume a few MB apiece. I’d be hard-pressed to burn another gig and a half unless I opened up every last one of my books and presentations. And if I’m that serious about writing several books at once, chances are I’m not interested in using half the other apps I had open.

A couple of apps you won’t see on this list are Chrome and Slack. Both of these applications have widespread reports of being memory pigs, and in my opinion you should boycott them until the developers learn how to write them to play nicer with memory. You can’t fault Apple for poorly written applications, and if Apple did give you 32 GB of RAM just for them, it wouldn’t matter; poorly written apps are going to continue sucking down as much memory as possible until you’re out. So it’s reasonable to say that if you’re running poorly written applications, your mileage will definitely vary. RAM is only one half of the equation: programmers need to know how to use it respectfully.

Many users (though not all) who see themselves sucking down 16GB+ of memory should consider that they may have a lot of unnecessary crapware running at startup that they don’t need. Check your /Library/LaunchDaemons and /Library/LaunchAgents folders, as well as your own LaunchAgents folder in ~/Library, and check your login items too. You might also check your system for malware, adware, and bloatware. Lastly, make sure you’ve updated your applications to the latest versions. Memory leaks are common bugs, and if you’re running an older, leakier version of an application, no amount of RAM upgrade is going to make things better.

I am sure there are some genuinely heavy users who will undoubtedly chew down more than 16GB of RAM, and this is by no means an attempt to minimize their concerns. Working with video and audio production is one area where I can see this becoming a reality, but I don’t own Final Cut Pro or Logic Pro to demonstrate. I have used them, though, and can say this much: while they do in fact use a lot of resources, Apple has designed them to be pretty good about keeping a number of operations disk-bound. This is where the MacBook Pro’s migration to solid state storage plays in concert with their RAM decisions. Both swap and file-based resources are now much faster than they used to be. Oftentimes, your applications may be swapping (or using a scratch disk) and you won’t even be able to tell with an SSD. Solid state storage has a number of other obvious benefits, and quite frankly, I’d rather have an SSD and a 16GB RAM limit over 64GB and a spinning platter disk any day.

I have no doubt that there will be some edge cases where a user legitimately uses up more than 16GB of RAM, and Apple really should consider refreshing their line of Mac Pros for such needs; the MacBook Pro is designed to be portable and energy conscious first, and I think that makes a lot of sense. It’s not a desktop machine, and it’s not going to act like a desktop machine as long as it’s operating within these constraints. With that said, I think many (not all) of the arguments about people using up all of their 16GB RAM are caused by factors that are within their control – whether it’s running crummy software, not adequately maintaining their startup items, not properly configuring their applications, or possibly even malware. Get those things out of the way first, and even if you’re still a high memory user, I bet your performance will be a lot more tolerable than it is now.

The MacBook Pro, as I’ve demonstrated, is more than capable of running a ridiculous number of “pro” apps without crossing the 16GB limit. It is, without a doubt, capable of adequately serving the vast majority of resource-hungry professionals such as myself, without breaking a sweat. The only thing breaking a sweat, incidentally, is the people complaining about the number 16 on social media without actually understanding just how far that number gets you.

San Bernardino: Behind the Scenes

I wasn’t originally going to dig into some of the ugly details about San Bernardino, but with FBI Director Comey’s latest actions to publicly embarrass Hillary Clinton (whom I don’t support), or to possibly tip the election towards Donald Trump (whom I also don’t support), I am getting to learn more about James Comey, and from what I’ve learned, a pattern of pushing a private agenda seems to be emerging. This is relevant because the San Bernardino iPhone matter saw numerous accusations of Comey pushing a private agenda as well: that it was a power grab for the bureau and an attempt to get a court precedent to force private business to back door encryption, while lying to the public and possibly misleading the courts under the guise of terrorism.

Just to give you a little background, I started talking to the FBI on a regular basis around 2008, when I pushed my first suite of iPhone forensics tools for law enforcement. The FBI issued what they called a “major deviation” allowing their personnel to use my forensics tools on evidence. The tools were fast tracked through NIST/NIJ (the National Institute of Justice is NIST’s law enforcement facing arm), and findings were validated and published in 2010. During this time, I assisted some of the FBI’s RCFLs (regional computer forensics labs), including the lab director for one of them, who had informed me my tools had been used to recover crucial data in terrorism and child exploitation cases. I’ve since developed what I thought was a healthy working relationship with the FBI, and have had a number of their examiners in my training classes, testified with some of them (as an expert) on criminal cases, and so on. The reason I’m giving this background is that one would have thought that when someone with this relationship with the FBI called up a few of the agents who had been working on the San Bernardino case, they’d be interested in having my help to get into the phone.

Initially, they were. I spoke to one individual (whom I knew personally), and he helped set up a conference call with a couple of the agents who were working on the case. This was maybe a week in advance, and very early on in the case. The meeting was scheduled, and the agenda was to discuss some details about the device and a couple of potential techniques that I believed might get them into it. One of the techniques was the NAND mirroring approach, which I later demonstrated in a video and which was later definitively proven as a viable method by another researcher from the University of Cambridge. He took the more elegant way of doing it, but a quick and dirty dump-and-reball would have gotten the desired result too. Other techniques we were going to discuss were possible screen lock bypass bugs that existed in the device’s operating system, and possibly collaborating with a few other researchers who had submitted code execution bugs affecting that particular version of firmware. I already had a tested and validated forensic imaging process developed, so it was just a matter of finding the best way to bolt that onto our point of entry.

The day before the conference call was scheduled, it was killed by powers on high. I was never given a detailed reason for it, and I don’t think my contacts knew either, except that they were told they weren’t allowed to talk to anyone about the device – apparently including me, a forensics expert who had helped them get into phones before. I don’t know if the call came down from lawyers, or if it went higher than that – it’s irrelevant, really. It was understood that nobody at the FBI could talk to me about the case, or even have a one-way conversation for me to give them a brain dump.

The reason I bring this up is that Comey’s public-facing story was that “anyone with an idea” could come to the FBI and help them out. This clearly wasn’t true, and what was going on behind the scenes was quite the opposite… and I’m not some crazy anon approaching the FBI with some crackpot solution, either; I had a working relationship with them, and had assisted them many times before, usually pro bono (as I did with many other agencies). The people knew me, we had each other in our phone books, and there was every professional level of trust you would expect in a case such as this.

Comey’s public story about accepting help on the SB iPhone was entirely false, and he pushed hard over the next month for a court precedent. When it became evident that Comey wasn’t going to win this case in court, a solution suddenly manifested out of nowhere. We paid a million dollars of our tax money for an unlock that the FBI could have done for about $100 with the right equipment.

There were, at the time, a number of other questionable statements made by Director Comey that have led me to believe he wasn’t completely forthcoming in his testimony before Congress.

In a letter emailed from FBI Press Relations in the Los Angeles Field Office, the FBI admitted to performing a reckless and forensically unsound password change that they acknowledge interfered with Apple’s attempts to re-connect Farook’s iCloud backup service. In attempting to defend their actions, the following statement was made in order to downplay the loss of potential forensic data:

 “Through previous testing, we know that direct data extraction from an iOS device often provides more data than an iCloud backup contains. Even if the password had not been changed and Apple could have turned on the auto-backup and loaded it to the cloud, there might be information on the phone that would not be accessible without Apple’s assistance as required by the All Writs Act Order, since the iCloud backup does not contain everything on an iPhone.”

This statement implied only one of two possible outcomes:

Either they were wrong about that, and were reckless…

It is true that an iCloud backup does not contain everything on an iPhone. There is a stateful cache containing third party application data that is not intended to come off of the phone. This is where most private content such as Wickr, Telegram, and Signal databases would live. However, this information also does not come off the phone in a direct backup either. Similarly, all commercial forensics tools use the same backup facility as iTunes for iOS 9, meaning none of them can get the stateful cache either.

The backup conduit provides virtually the same data as an iCloud backup. In fact, an iCloud backup arguably provides more data than a direct logical extraction because they are incremental, and contain older backups. Desktop backups can sometimes even contain less content, as they exclude photos that have already been synced to iCloud. There are a few small exceptions to this, such as keychain data, which will only come off the phone in a direct backup if backup encryption is turned on. Ironically, if Farook’s phone has backup encryption turned on (which is likely), the FBI won’t be able to get anything at all from a direct copy, because the contents will be encrypted. Even if they found the device to have backup encryption off (and turned it on), they’re still not going to get the data they actually need off of the device (e.g. the cached third party application data); getting passwords doesn’t mean much when you can just subpoena every content provider for the data anyway.

…or the government wanted to compel more assistance, and mislead the courts about it.

As I said, there is in fact more data available on the device than comes off in any backup. The only way to get to this data, however, would be for Apple to digitally decrypt and extract the contents of the file system, and provide them with a raw disk image. This is similar to what Apple had done in the past, except they would now also have to write a new decryption and extraction tool specifically for the new encryption scheme that was introduced in iOS 8, and carried into 9.

This second possibility is far more sinister than simply being wrong about the quality of iCloud data. If the government actually did intend to get a hold of this “extra” data that only Apple can provide, then that means they would be following their original AWA order with a second AWA order, requiring Apple to build a tool to decrypt and extract this content from the device. Their original order required Apple to build a backdoor brute force tool. It did not require Apple to perform any kind of extraction of the raw disk for them. If a second order was in the works, this would have meant two important things:

  1. The attorneys for the FBI provided an incomplete and misleading explanation of assistance to the courts, which intentionally hid the extra assistance that Apple would later be required to provide in order to finish this task – assistance which, when combined with the original list of work, may have been considered unreasonable by the court.
  2. Requiring Apple to break into and image the phone for them anyway would have completely obviated the need for the backdoor tool from the first order, but would have gotten them their encryption precedent for future use.

In other words, if the FBI had been planning to have Apple perform a physical extraction of this extra data, as seems hinted at in the FBI’s comments, then they were forcing Apple to create this backdoor tool for an undisclosed reason. It would also mean that all of this extra work was being hidden from both the courts and from Apple, possibly because the combination of the two AWA orders would have constituted “unreasonable” assistance in the court’s view. It completely modified the purpose of the first order as well; we’ve now gone from having a single tool with a very specific purpose to having two separate tools that create a modular platform for the government to use (via the courts) as each piece becomes needed. The middle overlap for these two components would have been entirely redundant and useful only to a law enforcement agency looking for a modular forensics toolkit at their disposal, and such work would never have been necessary if Apple simply broke the PIN and delivered a disk image as a lab service.

The motive, then, for forcing the creation of this backdoor tool would of course have been to create a tool that they could compel for use in the future, and had very little to do with the device they were trying to get into. This was, based on my best guess, the real agenda that the FBI was planning to push: not only backdoor-level access to encryption, but a court precedent to force a manufacturer to deliver all of the data on any device they desire to acquire in the future.

Whatever the real reasons were for the FBI’s actions during San Bernardino, one thing was for certain: FBI Director Comey’s publicly stated agenda did not match the events that were unfolding behind the scenes. The FBI clearly wasn’t interested in getting into this phone at first. They canceled meetings with at least one expert about it, there are no reports of them ever reaching out to security researchers who had submitted Apple security bugs, and there is no record of them ever checking surveillance footage for Farook inputting his PIN anywhere; there’s a significant lack of evidence to support the notion that the FBI ever wanted into the phone. At the very least, it was about setting precedent. At the very worst, further abuses of the All Writs Act were in the works.

It seems as though the same type of private agenda is happening now with our presidential election. The effects of this have already become evident: Many are arguing that NC may have been swayed by Comey’s letter and the FBI’s recent public disclosures of what is portrayed in the released documents as a corruption investigation. The FBI has violated their own procedures by releasing all of this on the bleeding edge of an election. There is no question in my mind that the FBI’s publicly stated agenda doesn’t match their private one here either. As I said, there is a pattern emerging that FBI Director Comey seems to mislead the public about his real agenda, and at this point, I think there’s enough smoke that Congress should be looking into his entire history with the agency to see where else this pattern might have existed.

On The State of Open Source

I was just a teenager when I got involved in the open source community. I remember talking with an old bearded guy once about how this new organization, GNU, is going to change everything. Over the years, I mucked around with a number of different OSS tools and operating systems, got excited when symmetric multiprocessing came to BSD, screwed around with Linux boot and root disks, and had become both engaged and enthralled with the new community that had developed around Unix over the years. That same spirit was simultaneously shared outside of the Unix world, too. Apple user groups met frequently to share new programs we were working on with our ][c’s, and later our ][gs’s and Macs, exchange new shareware (which we actually paid for, because the authors deserved it), and to buy stacks of floppies of the latest fonts or system disks. We often demoed our new inventions, shared and exchanged the source code to our BBS systems, games, or anything else we were working on, and made the agendas of our user groups community efforts to teach and understand the awful protocols, APIs, and compilers we had at the time. This was my first experience with open source. Maybe it was not yours, although I hope yours was just as positive.

It wasn’t open source that people were excited about, and we didn’t really even call it open source at first. It was computer science in general. Computer science was a brand new world of discovery for many of us, and open source was merely the by-product of natural curiosity and the desire to share knowledge and collaborate. You could call it hacking, but at the time we didn’t know what the hell we were doing, or what to call it. The environment, at the time, was positive, open, and supportive; words that, unfortunately, you probably wouldn’t associate with open source today. You could split hairs and call this the “computing” or “hacking” community, but at the time all of these things were intertwined, and you couldn’t tease them apart without destroying them all. Perhaps that’s what went wrong: eventually, we did.

Over the last decade, the open source movement has been in a slow migration from people doing hard work to a mixture that includes a large non-developer, non-contributor user base, and much of that base comes with a sense of entitlement. No, it wasn’t always like that, but it has been moving in that direction for a while. The writing was on the wall from the late 1990s to early 2000s, after Linus Torvalds helped to transform his community into what, in my opinion, had become a toxic environment for years, fueled by intellectual elitism and a perverse sense of do-it-yourselfism. This community demotivated and disparaged developers for years, and at a certain point it just wasn’t worth it to contribute to his project. He wasn’t the only one, unfortunately; the Perl community, which had significant overlap with the Linux community – likely much to Larry Wall’s dismay – seemed to have fallen in lockstep with this toxic sense of elitism, devolving into the same. Having spent a lot of time in the Perl community myself, I eventually had to abandon the language, not because of the usefulness of Perl, but because of the awful fan club that had built up around it. While we had an open exchange of knowledge and enthusiasm in the 80s, the 90s brought something new to the community: the curmudgeon.

Over time, and into the present day, the open source community has somehow devolved into two disproportionate parts: a small core of developers who still share the enthusiasm I’m talking about and start a new project loving what they’re doing, and the rest of the community, which (1) does not contribute any useful code, (2) makes demands, argues, or disparages the project, and (3) considers that to be their contribution. The result is inevitable, and has played out over and over again: the developer either becomes exhausted and burned out trying to take care of his or her undeserving user base, or the developer eventually becomes such a curmudgeon that they push away their user base, and are either viewed as an apathetic jerk, or they actually become an apathetic jerk. Every developer (unless they’re lucky) has, at some point, looked at page after page of open issues on their project, and wondered why they’re the only person contributing any source code. There’s only one conclusion to draw: open source is incredibly broken. The cavalry isn’t coming. Call it laziness, call it the human condition; this isn’t how things are supposed to be.

No one person, of course, is to blame for the toxicity in the open source community today. If I were to conduct an autopsy on it, I’d say the community has eroded because open source was only ever a by-product, and not the philosophy that many youngsters today are touting it as. The deep drive to discover, learn, and share is the real mindset that backed open source in the beginning, and no matter how much you believe in fairness, licensing, copyleft, security, or other peripheral movements, you can’t fully participate in open source unless you have those three things ingrained into you. That’s the philosophy to adopt; open source merely follows naturally from it. You completely miss the point of open source unless you have a strong, selfless drive to learn and share knowledge. The problem is, it’s the people who don’t share that drive who are making all of the demands today of those who do.

The philosophy that much of the community is holding fast to today isn’t the same philosophy that we shared back in the 80s, when the community got into full swing, and a big part of that is probably because much of the community is too young to remember what it was like. It wasn’t about forcing other people to conform to your belief system of open source through means of licensing; it was about creating something to share – your own ideas, expressed through code – and sharing them with others who had the same thirst for knowledge. People didn’t steal each other’s ideas, because there was such a strong impetus to come up with your own original ones. I watched the iOS jailbreak community self-destruct over this: even inside inner circles, you had those who were discovering and sharing, and those who were just media whoring, stealing source code, and contributing little to the effort in the end.

Licensing was thought to fix this, but it never did. The GPL started with good intentions: to welcome (and even to try to evangelize) those who didn’t share the belief system of those within the community, by offering tasty treats but protecting the baker’s recipes. More often than not, unfortunately, the GPL has been used to blindly say, “we believe in open and free sharing of knowledge… but only if you buy into our belief system”. That’s a far cry from the original community, who would’ve given you the shirt off their back, and licensed everything BSD or MIT. It was never about control, and those who try to control people today usually don’t have the enthusiasm for community that originally drove all of this. What we ended up with, as a result of licensing, is a community where a majority of people today do not share in our belief system, and that’s a big part of what’s wrong with it.

What we are left with in the open source community are vain philosophies that have no substance behind them anymore. For example, the philosophy that “we need open source so we can make sure your software is secure” – a cruel irony, in that those who say this don’t have much experience with vulnerability research. Those who do are saying the exact opposite: “everything is open source if you try hard enough”. They’re in it for the challenge, not to be spoon fed, and oftentimes I can glean more from your object code than is obvious in your source code. Thank goodness bash was open source; it only took us twenty years to discover Shellshock, not to mention the almost weekly barrage of critical OpenSSL vulnerabilities. Let’s not forget TrueCrypt, which the community was incapable of auditing themselves and had to pool enough money to hire somebody to do it. Security depends much less on open source and much more on financial resources today. Sad fact.

In the same vein, there are many who take the philosophy of not running anything on their Mac that isn’t open sourced – except, of course, for half of their operating system, which is not, oh and Little Snitch, and just one or two other tools. Maybe Photoshop, because GIMP still sucks. It’s hard to have strong convictions when they’re based on the by-product of a value system that you don’t necessarily hold or understand. Even if you do run exclusively open source software, have you bothered auditing it? Have you read the entire source tree of the Linux kernel or of Firefox? Because it only takes one obscure line of code to turn it all to dust. You’d better get on that.

I am writing with tongue in cheek, of course. Source code is, naturally, beneficial for a number of things, but the reality of the situation today is this: because the community has degraded so much, you no longer need everybody looking at your source code to make it better or more secure; you only need a few select qualified individuals. Those qualified individuals (the now 1%) are also typically the last people to look at your source code, because of how toxic the community has become. That leaves the other 99% to demand new features, tell you how your software should have been designed, submit twenty issues about line spacing, send you makefiles you don’t want, and argue about code semantics – all without contributing a single line of code to the project. This will thoroughly drain your resources as a developer, and should that 1% ever come along, you’ll be too burned out to care.

This article isn’t meant to chastise the open source community today: I believe we can do better; I’ve seen it do better. I know there are many who don’t fall into this description of selfish, unproductive, and entitled. Those of you who still feel the way I do are the only ones who can effect change. I’m not so sure the current open source community can be saved; the only way to save it may be to start a new community… one of devs who still have the same motivations that fueled computing as a whole some decades ago.

So I have decided not to write any more open source software for now, because the community we have today isn’t really even the open source community. The community today isn’t about sharing or discovery; it’s about people being cheap and demanding things from you. That’s not how we started out, and I hope that’s not where we end up. Instead, what I’m going to do is create my own community. All of my new projects are going to become private repos, and anyone whom I either know personally or whom someone can vouch for – anyone laying down code and working in this community – can have access to them on request, whether they just want to look at the code or they want to audit it; I don’t care. I don’t plan on using any GPL code in my projects, as I believe the system it’s pandering to is now broken. My code is only going to be available to the people who actually work in this community, are productive, actively share knowledge, and collaborate. GitLab allows you to do this at no cost, whereas GitHub does not. You can open as many private repos as you like on GitLab, and give as many people as you like read-only access (or full contributor access).

Open source was never intended to be “user friendly”; it’s a working class; it’s a cooperative. If you could be fired for being unproductive in open source, the community would be a lot smaller than it is today. There is definitely a place for users; however, that’s not inside the community (unless they’re also contributing devs). The developers doing good work need to stick together, and we also need to form a new community of others who share those values. Let users be users. Let hackers be hackers.

I want a new open source community, please.

WhatsApp Forensic Artifacts: Chats Aren’t Being Deleted

Sorry, folks: while experts are saying the encryption checks out in WhatsApp, it looks like the latest version of the app I tested leaves a forensic trace of all of your chats, even after you’ve deleted, cleared, or archived them… even if you “Clear All Chats”. In fact, the only way to get rid of them appears to be to delete the app entirely.


To test, I installed the app and started a few different threads. I then archived some, cleared some, and deleted some threads. I made a second backup after running the “Clear All Chats” function in WhatsApp. None of these deletion or archival options made any difference in how deleted records were preserved. In all cases, the deleted SQLite records remained intact in the database.

Just to be clear, WhatsApp is deleting the record (they don’t appear to be trying to intentionally preserve data); however, the record itself is not being purged or erased from the database, leaving a forensic artifact that can be recovered and reconstructed back into its original form.

A Common Problem

Forensic traces are common in any application that uses SQLite, because SQLite by default does not vacuum databases on iOS (likely in an effort to prevent wear). When a record is deleted, it is simply added to a “free list”, but free records do not get overwritten until later on, when the database needs the extra storage (usually after many more records are created). If you delete large chunks of messages at once, this causes large chunks of records to end up on this “free list”, and it ultimately takes even longer for the data to be overwritten by new data. There is no guarantee the data will be overwritten by the next set of messages. In other apps, I’ve often seen artifacts remain in the database for months.
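As a rough demonstration, a trivial scanner like the one below – plain C, taking the path to a copy of the database and a known message string as the needle – will often turn up “deleted” chat content sitting in the raw bytes of a SQLite file long after the app has deleted the rows. The file name in the usage line is illustrative.

    /*
     * Minimal sketch: scan a raw SQLite database file for residual plaintext
     * from deleted records. Rows on SQLite's free list keep their content
     * until the pages are reused, so deleted messages often survive here.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <ChatStorage.sqlite> <needle>\n", argv[0]);
            return 1;
        }

        FILE *fp = fopen(argv[1], "rb");
        if (fp == NULL) { perror("fopen"); return 1; }

        /* Read the entire database file into memory. */
        fseek(fp, 0, SEEK_END);
        long size = ftell(fp);
        rewind(fp);
        unsigned char *buf = malloc(size);
        if (buf == NULL || fread(buf, 1, size, fp) != (size_t)size) {
            fprintf(stderr, "short read\n");
            return 1;
        }
        fclose(fp);

        /* Search for the known message text anywhere in the raw pages,
         * including pages SQLite considers free. */
        size_t nlen = strlen(argv[2]);
        for (long i = 0; i + (long)nlen <= size; i++) {
            if (memcmp(buf + i, argv[2], nlen) == 0)
                printf("residual content at offset 0x%lx\n", i);
        }

        free(buf);
        return 0;
    }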

The core issue here is that ephemeral communication is not ephemeral on disk. This is a problem that Apple has struggled with as well, and one that I’ve explained – along with design recommendations – in a recent blog post.

Apple’s iMessage has this problem and it’s just as bad, if not worse. Your SMS.db is stored in an iCloud backup, but copies of it also exist on your iPad, your desktop, and anywhere else you receive iMessages. Deleted content also suffers the same fate.

The way to measure “better” in this case is by the level of forensic trace an application leaves. Signal leaves virtually nothing, so there’s nothing to worry about. No messy cleanup. Wickr takes advantage of Apple’s CoreData and encrypts their database using keys stored in the keychain (much more secure). Other apps would do well to respect the size of the forensic footprint they’re leaving.

Copied to Backups

Simply preserving deleted data on a secure device is not usually a significant issue, but when that data comes off the device as freely as WhatsApp’s database does, it poses a rather serious risk to privacy. Unfortunately, that’s what’s happening here and why this is something users should be aware of.

The WhatsApp chat database gets copied over from the iPhone during a backup, which means it will show up in your iCloud backup and in a desktop backup. Fortunately, desktop backups can be encrypted by enabling the “Encrypt Backups” option in iTunes. Unfortunately, iCloud backups do not honor this encryption, leaving your WhatsApp database subject to law enforcement warrants.

Turning off iCloud and using encrypted backups for your desktop doesn’t necessarily mean you’re out of the woods. If you use a weak password that can be cracked by popular forensics tools, such as Elcomsoft’s suite of tools, the backup can be decrypted. Other tools can be used to attack your desktop keychain, where many users store their backup password.

What does this mean?

  • Law enforcement can potentially serve Apple with a warrant to obtain your WhatsApp chat database, which may include deleted messages. None of your iCloud backup content will be encrypted with your backup password (that’s on Apple, not WhatsApp).
    • NOTE: This is “iCloud backup” I’m referring to, and is independent of and irrelevant to whether or not you use WhatsApp’s built-in iCloud sync.
  • Anyone with physical access to your phone could create a backup with it, if access is compelled (e.g. a compelled fingerprint or passcode, or a device simply seized unlocked). This content will be encrypted with your backup password (if you’ve set one).
  • Anyone with physical access to your computer could copy this data from an existing, unencrypted backup, or potentially decrypt it using password breaking tools, or recover the password from your keychain. If passwords are compelled in your country, you may also be forced to assist law enforcement.

Should everybody panic?

LOL, no. But you should be aware of WhatsApp’s footprint.

How can you mitigate this as an end-user?

  • Use iTunes to set a long, complex backup password for your phone. Do NOT store this password in the keychain; otherwise, it could potentially be recovered using Mac forensics tools. This will cause the phone to encrypt all desktop backups coming out of it, even if it’s talking to a forensics tool.
    • NOTE: If passwords are compelled in your country, you may still be forced to provide your backup password to law enforcement.
  • Consider pair locking your device using Configurator. I’ve written up a howto for this; it will prevent anybody who steals your passcode or compels a fingerprint from being able to pair or use forensics tools with your phone. This is irreversible without restoring the phone, so you’ll need to be aware of the risks.
  • Disable iCloud backups, as these do not honor your backup password, and the clear text database can be obtained, with a warrant, by law enforcement.
  • Periodically, delete the application from your device and reinstall it to flush out the database. This appears to be the only way to flush out deleted records and start fresh.
    • NOTE: This will not delete databases from existing iCloud backups from the cloud.

How WhatsApp Can Fix This

Software authors should be sensitive to forensic trace in their coding. The design choices they make when developing a secure messaging app have critical implications for journalists, political dissenters, those in countries that don’t respect free speech, and many others. A poor design choice could quite realistically result in innocent people – sometimes people crucial to liberty – being imprisoned.

There are a number of ways WhatsApp could mitigate this in future versions of their application:

  • The SQLite database does not need to come off in a backup at all. The file itself can be marked in such a way that it will not be backed up (see the sketch following this list). The developer may have left this behavior in place so that restoring to a new device will not cause you to lose your message history. Unfortunately, the tradeoff for this feature is that it becomes much easier to obtain a copy of this database.
  • In my book Hacking and Securing iOS Applications, I outline a technique that can overwrite the SQLite record content “in place” prior to deleting a record. While the record itself will remain on the free list, using this technique will clear the content out.
  • A better solution is setting PRAGMA secure_delete=ON prior to issuing the delete; this will cause the deleted content to be overwritten automatically. (thanks to Richard Hipp for sending me this information).
  • Using an alternative storage backing, such as raw files or encrypted CoreData, could be more secure. A file-based approach is easy to implement, and Apple’s encryption scheme drops the file encryption key whenever a file is deleted. It may not be as pretty as SQLite, but Apple’s file-level encryption is very solid in handling deleted files. Apple uses a binary property list for archival, which is sometimes used to store live message data too on the desktop. Wickr’s encrypted CoreData approach is similarly quite secure, so long as the database keys remain on the phone. Simply using a separate SQLite file for each thread, then deleting it when finished, would be a significant improvement, even if incorporating some of the other techniques described above.
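For the first and third fixes, the changes are small enough to sketch. The snippet below – plain C, with an illustrative database path – enables secure_delete at open time and marks the database file as excluded from backups via CoreFoundation’s kCFURLIsExcludedFromBackupKey. Consider it a sketch of the approach, not WhatsApp’s actual code.

    /*
     * Sketch of two mitigations for a SQLite-backed messaging app:
     * 1. PRAGMA secure_delete=ON, so deleted rows are zeroed in place.
     * 2. Exclude the database file from iTunes/iCloud backups.
     */
    #include <string.h>
    #include <sqlite3.h>
    #include <CoreFoundation/CoreFoundation.h>

    static sqlite3 *open_chat_db(const char *path)
    {
        sqlite3 *db = NULL;
        if (sqlite3_open(path, &db) != SQLITE_OK)
            return NULL;

        /* Overwrite record content with zeroes on DELETE. The rows still
         * land on the free list, but their content is gone. */
        sqlite3_exec(db, "PRAGMA secure_delete=ON;", NULL, NULL, NULL);

        /* Keep the database file out of device backups. */
        CFURLRef url = CFURLCreateFromFileSystemRepresentation(
            kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);
        if (url != NULL) {
            CFErrorRef err = NULL;
            CFURLSetResourcePropertyForKey(url, kCFURLIsExcludedFromBackupKey,
                                           kCFBooleanTrue, &err);
            if (err != NULL) CFRelease(err);
            CFRelease(url);
        }
        return db;
    }

Note the tradeoff called out above: excluding the file from backups means a device restore loses message history, which is presumably why it ships the way it does.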

Moving Semi-Auto Rifles into the National Firearms Act

I’m a long time responsible gun owner and enthusiast who, like many, would like to see more controls on semi-automatic rifles; particularly, what many refer to as “assault rifles”. Indeed, I’m well aware of all the Kool-Aid on both sides surrounding assault weapons, and I think both sides have some rather questionable notions about them. The extreme left seems to have developed an irrational fear and hatred of all guns and the extreme right believes the only solution to guns are more guns (the recent shooting in Las Vegas has shown that “more guns” couldn’t have saved anyone). I used to drink the gun rights Kool-Aid. I’d spent over 15 years shooting and smithing guns, gotten the NRA certifications to supervise ranges and to carry concealed weapons, and have been an avid long range shooter. Up until a few years ago, when I sold the rights to it, I produced the #1 ballistics computer in the App Store, and made a very comfortable revenue off of the gun community with just a few hours a month of coding. I’ve watched both the community and the ranges change dramatically over the years, and the current state of extremism in this country has become more than evident.

I’ve spent over 15 years in the gun community total, bumped elbows with the gun industry on several occasions, listened to all of the political arguments surrounding “assault weapons”, and have met some intelligent people. What’s bothered me is that I’ve met even more ignorant and zealous people, who don’t even know how their firearms work let alone have any sense of the world, diplomacy, or a sense of serving something greater than one’s self. In place of the values I’m used to seeing, I instead see fear mongering and propaganda from the gun industry – effective propaganda, even though it doesn’t hold any water.

If we really believe that guns are keeping the government in check, for example, then they’ve quite failed at that task. If guns supposedly make for a more peaceful society, then why is our country’s gun violence worse than most third world countries’, and why are virtually all other first world countries statistically safer by an order of magnitude? Perhaps if more gun owners traveled and saw how other societies lived – many without the distrust and predisposition to violence that we hold even towards our own neighbors – they might see how sorely mistaken this mindset has been all this time. Instead, the multi-billion dollar gun industry continues to sell a militia image that unfortunately many are too weak minded to see through. The fact remains that the world isn’t as vicious and violent as they’re told it is by the NRA, the top marketing agency for the gun industry.

Sandy Hook was really the point of no return for our country. As we watched our government and gun owners’ non-response to the murder of 20 school children, I knew then and there that we’d never do anything meaningful to control mass murder in this country. Sandy Hook was the day many demonstrated they had no soul.

Different From Any Other Rifle?

One of the key points the gun industry (and the NRA) have put forth is that an assault rifle is functionally no different than any other rifle – this is fundamentally flawed. They are quite different, as are other rifles that have been designed as weapons systems for combat. AR-15s, as is the case with many other types of infantry weapons, are designed as modular weapons systems in order to make parts easily field-replaceable, and can be easily modified into many different configurations, making them very difficult to define by features, but also very easy to modify into more lethal (and often outlawed) forms. They are chambered for high velocity, lightweight bullets (5.56 NATO, later domesticated to .223 Remington) originally designed to wound at medium range rather than kill (because a wounded soldier is more expensive than a dead one), but also to over-penetrate at close range (making them terrible for home defense). Most such rifles are designed to cycle quickly to allow for rapid fire, and like most infantry weapons, do so without a significant risk of overheating. This is unlike hunting rifles, which often overheat after a small number of rounds and lose accuracy quickly. Infantry rifles have barrels that are designed to dissipate heat faster to withstand rapid fire without degrading. Techniques such as chrome-lining help further lengthen barrel life and reduce the corrosion and wear caused by high rates of fire and subsequent overheating combined with the high pressure of the round. Hunting rifles often don’t incorporate such processes, because it diminishes accuracy slightly and increases weight and cost, but mainly because hunting rifles aren’t designed for prolonged rapid fire. Lastly, assault rifles accept a detachable magazine that allows for quick reloading in combat, but serves no practical purpose elsewhere. In spite of the propaganda to the contrary, military rifles have quite different characteristics from your average rifle, and if they weren’t more deadly than a hunting rifle, there wouldn’t be such an abundance of rednecks running out to panic-buy them to overthrow their government or society some day, or defend against imaginary societal problems that we haven’t had in this country for over 130 years.

My gun collection is a lot smaller than it used to be. I’m down to a bolt action rifle, a shotgun, a lever rifle, and a handgun. I used to own several of the AR-15s I’m referring to, as well as AR-10s (the .308 version of an AR-15). Some people buy them because they’re ex-military and comfortable with the platform, but many more buy them – mistakenly – for home defense, or for various domestic war scenarios (militia, civil war, invasion, disasters, and other paranoid delusions – you’re kidding yourself if you think you can go up against an entire SWAT team with an AR). I bought mine because they’re fun to shoot at the range, and I was fascinated by the platform. I have extensive experience with how they work, how they don’t work, how to rebuild one, how to be safe with one, and even how to disarm someone being reckless at the range with one. Unfortunately, most assault rifle owners aren’t this well versed when they purchase an infantry rifle, and probably don’t know very much about them, or how to be safe with them. Just like someone exercising their First Amendment right looks like an idiot if they haven’t learned how to exercise restraint and communicate well, someone exercising their Second Amendment “right” is very much an idiot if they haven’t learned how their firearms work or how to be responsible and demonstrate restraint with them. I still shock many AR-15 owners when I explain to them that their ARs can fire out of battery and kaboom if they don’t run wet.

Small arms in general (even fully automatic ones) aren’t any good for “defending” a country against a tyrannical government, in spite of the image the fear mongering gun industry has sold. Our government has taken down countries with armies of full automatics, ground-to-air missiles, and much heavier weaponry than civilians own in the US. Our military swats those armies like flies. American gun owners today have bought into the marketing image from the gun industry that somehow assault weapons will prevent another genocide… yet, as we’ve seen just these past few weeks with Puerto Rico, all the government has to do to wipe out a class of people in 2017 is to simply pull the plug on humanitarian aid – passive genocide. Whether it’s through advanced weaponry or simply establishing economic dependence, the government today has become far greater at deciding who lives and who dies, regardless of your guns.

Obviously I don’t hate guns, but I do hate that any deranged person can easily get access to them. I don’t believe that having sensible controls on access to firearms constitutes tyranny, nor will a lack of controls prevent it. I do believe that the nature of society will dictate how much control needs to be applied. Ben Franklin (whom gun owners love to misquote) once said, “Only a virtuous people are capable of freedom. As nations become more corrupt and vicious, they have more need of masters.” Much to the chagrin of gun owners, many of our founding fathers would not have supported the notion that access to weapons should be given to the vicious that roam society. Franklin would be rolling over in his grave if he saw the narrative of today’s NRA. We’ve become a vicious society, quite different in that regard from many other societies, who are more peaceful. We’re driven by the greed and self-centeredness of our capitalist system, and have bought into the notion of “me” as greater than “we”, and that’s trickled down into how little we regard human life or show concern for our brother. Meeting violence with even more violence has now created an arms race in our society that’s begotten more violent generations, and more ignorant people who think that guns are the solution when they’re really a big part of the cause.

Many argue that assault rifles are statistically insignificant. It’s quite true, the number of people killed with rifles is much lower than handguns. What the statistics don’t show, however, is the ratio of random homicides between handguns and rifles. Handguns are at the top of the list of homicides in general, but I suspect a lot of the reason for this is simply that it’s what drug dealers and gangs are using to kill each other, and the vast majority of other gun crimes that involve either suicide or murder of someone the subject was targeting. When it comes to random domestic mass murders, however, more and more of these seem lately to involve an assault rifle – and these random massacres are what we should be analyzing, not the majority of other homicides that have no bearing on the public at large.

The Assault Weapons Ban

The question, of course, is how you can control access to assault weapons and do it effectively, in a way that will matter in light of all of the panic-buying that occurs after every tragedy. The Federal Assault Weapons Ban of 1994 was a miserable failure, primarily because Democrats are, by nature, terrible at writing firearms legislation. Future attempts to renew the ban were just as embarrassing, watching our country’s representatives completely fail to even explain the scary features they were banning. As a result of the AWB’s poor construction, gun manufacturers ended up designing slight variants of the popular firearms covered in the ban – for example, an AR-15 with a muzzle brake instead of a flash hider, fixed stocks, and without a bayonet lug. While the legislation may have cut back on drive-by bayonetings, it did virtually nothing to remove any of the firearms it banned from circulation. Magazine bans were a similar embarrassment. Due to loopholes in the legislation, large caches of 30rd magazines (and 90rd drums) were easily imported and sold in virtually every gun shop during the ban. During the entire period of the AWB, gun owners sat comfortably with either pre-ban AR-15s or post-ban XM-15s that were identical in functionality, and with a safe full of legally owned 30rd magazines, laughing at the senators who wrote the legislation, who wouldn’t know an assault weapon if they sat on one. Should the same ban be reinstated today, things like magazine bans are even less likely to succeed with 3D printing, and the industry has become much wiser about how to skirt around “scary features” laws.

Therein lies the core issue: there’s no legal definition for the term “assault weapon” or “assault rifle”; the term is difficult to define, too, because of the modular platform of these weapons. Outside of the legal world, gun rights activists will tell you that the term applies exclusively to machine guns, but even this is simply not true. Consider the AR-15 again: while the AR-15 is the semi-automatic version of the popular full-auto M-16 rifle (the “AR” actually stood for ArmaLite, the original manufacturer), the military also got quite sick of soldiers wasting so much ammunition (without hitting anything), and began issuing rifles with either tri-burst mode (instead of “full” auto) or, in some cases, exclusively semi-auto, along with teaching better marksmanship. All three configurations have been used in combat, and all three are assault rifles by any reasonable definition. Other than minor variations between manufacturers, the only parts that are mechanically different are the fire control components: an auto-sear, an M-16 bolt, and the spur (“J” hook) on the trigger. These components determine whether you get one bullet per pull, tri-burst, or a spray. Gun owners often seek out the Colt 6920 because it’s closest to the milspec of the M-16, and even has the cutout in the lower receiver for an auto-sear, should it ever be converted to fully automatic. To call one of these an assault rifle and not the others because of its configuration is childish, and more Kool-Aid circulated among gun owners.

The Challenge

As I said, my main point is that those who are the most opposed to assault weapons are also the people who know the absolute least about them, and that has led to terrible legislation in the past – legislation that only ended up banning “scary features” that can easily be restored to a modular weapons system like the AR-15. Now I will propose a more effective means to control them; if you’ll allow an assault weapons owner to help you draft something that could actually be effective, I think perhaps we might reach some intelligent legislation.

The biggest issues we have in terms of access to assault weapons, or really firearms in general, are:

  • Very little identity collected about buyer and/or false identities
  • Very little background required to pass a NICS check
  • Nearly instantaneous turnaround
  • No record of private transfers
  • Ban legislation is not retroactive (won’t affect panic buying, 3D printing, or 80% receivers)
  • Don’t want to accidentally ban certain hunting/sporting rifles

The issue with assault rifles isn’t so much ownership; it’s a matter of who’s owning them. There are certainly a large number of responsible gun owners out there who are not committing mass murders. At the same time, there are many disturbed individuals – many of whom are probably already under investigation, have been in and out of mental institutions, have been indoctrinated by conspiring militia groups, or have other issues – whom most of society wouldn’t think should have access to an assault weapon. A handgun to protect themselves? A shotgun? A bolt action? Perhaps (on a case-by-case basis) – but an assault rifle? That’s in a class of its own… yet we don’t treat it like it is.

A Class of Its Own

And that’s the problem: firearms like the AR-15 really aren’t in a class of their own, from a legal perspective, the way machine guns are. The same person who can buy a shotgun for home protection can also buy an AR-15 – a combat rifle (with the exception of full auto) – capable of killing a lot of people much faster than a shotgun.

In the 1920s and ’30s, it was legal to simply buy a machine gun off the shelf, and you could order rifles out of the Sears catalog. Organized crime had adopted machine guns like the Thompson submachine gun, and they were used in a number of violent massacres, the most famous being the St. Valentine’s Day Massacre, which killed seven people. This was addressed with a piece of legislation called the National Firearms Act (NFA), which was augmented over time, including in 1968 (with an import ban) and again in 1986 (banning new registrations). Essentially, where things stand now is that you can legally purchase a machine gun manufactured prior to 1986, but you must go through a rather rigorous process to demonstrate that you are a law-abiding citizen, allow certain information to be collected about you, obey certain transportation rules, and essentially register the firearm with BATFE. Wikipedia does a good job explaining the process:

All NFA items must be registered with the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). Private owners wishing to purchase an NFA item must obtain approval from the ATF, obtain a signature from the Chief Law Enforcement Officer (CLEO) who is the county sheriff or city or town chief of police (not necessarily permission), pass an extensive background check to include submitting a photograph and fingerprints, fully register the firearm, receive ATF written permission before moving the firearm across state lines, and pay a tax.[22] The request to transfer ownership of an NFA item is made on an ATF Form 4.[23] Many times law enforcement officers will not sign the NFA documents. There have been several unfavorable lawsuits where plaintiffs have been denied NFA approval for a transfer. These lawsuit include: Lomont v. O’Neill,[24] Westfall v. Miller,[25] and Steele v. National Branch.[26] In response, fourteen states have enacted laws which require the CLEO to execute the NFA documents, including Utah, Kansas, Arizona, Alaska, North Dakota, Oklahoma, Louisiana, Arkansas, Kentucky, Tennessee, Ohio, West Virginia, North Carolina and Maine.[27][28][29]

In other words, there is already a system in place to perform stringent checks of individuals looking to own firearms that were at one point considered too dangerous to arbitrarily sell to just anyone at a gun shop. The NFA also applies to silencers, sawed-off shotguns, and other types of firearms.

The alleged bump-fire components used in Las Vegas were, at one point, controlled by the BATFE under a 2006 decision labeling one such device (the Akins Accelerator) a machine gun. Why the BATFE didn’t continue to aggressively pursue these newer devices as such is a mystery. When the Akins Accelerator was ruled a machine gun, BATFE went after the customer list, and owners were forced to surrender the springs that made them work. Because the AR-15 is such a modular platform, however, any new device to assist with fully automatic fire can easily be developed and attached to an otherwise semi-automatic rifle; 3D printing makes it even easier to create such a bump-fire device. The key to controlling fully automatic fire, then, is to control semi-automatic fire.

The NFA provides the groundwork for what could serve as a central point of regulation of semi-automatic firearms, and once in place for semi-autos, could be built upon to impose better background checks or other much needed inspections.

Banning vs. Reclassification

Instead of banning any firearms, we could classify any semi-automatic long gun under the blanket of the NFA, mandating the same NFA process in order for gun owners to own or possess them. If you wanted to be more specific, infantry rifles should be identified with these characteristics:

  • Long gun
  • Semi-Automatic fire control components
  • Accepts either [ a detachable magazine ] or [ a fixed magazine holding more than five rounds ]
  • Centerfire (optional)

You may also choose to include only centerfire rifles, rather than rimfire (such as little .22 plinker guns). On the other hand, a .22 is all too often treated like a toy when it is lethal as well. There are too many gun owners, in my opinion, who treat a .22 like a lollipop and let their kids shoot them in the backyard, when in reality these rifles can do a lot more than put an eye out.

Resolving the Shortcomings of Another AWB

Going the NFA route instead of a ban would resolve the shortcomings of assault weapons legislation. You wouldn’t be banning scary configurations of the same rifle, or trying to fit firearms into a specific taxonomy (which we know is a futile effort). Instead, we’d address the problems of ownership:

  • Very little identity collected about buyer and/or fake identification

Under NICS, you provide only very basic info, and it’s easy to get around a background check if you use a fake ID. Since the gun shop is responsible for taking that information down, fake identification goes a long way. But even without a fake identity, there’s very little information given: no fingerprints, no photo – the only thing that goes to NICS is a name and address (even a Social Security number is optional).

Under the NFA, you also submit a photo and fingerprints and undergo an extensive background check, either to purchase an assault rifle or to register an existing one during an initial “amnesty” window. In many cases, one would also need a signature from the chief of police or other chief law enforcement officer. To help ensure that the identity is not fake, there is also a transfer fee that requires payment, so a check, credit card, or other paper trail will exist tying the transaction to the individual’s identity.

For reference, see the ATF Form 4 and the FD-258 fingerprint card.

  • Very little background required to pass a NICS check

The NFA system is more extensive, but background checks can be made even more so by opening up mental health records and tying into a number of federal databases that NICS presently isn’t well connected to. NICS checks go only to the feds for long gun purchases, and to the state for handguns. This is why you can buy a long gun in any state, but can only buy a handgun in your home state. Information sharing isn’t so great between the two, from what I’ve heard, and there have been attempts to rectify that.

  • Nearly instantaneous turnaround

The NFA takes 6-12 months on average, so your new assault rifle will just have to wait at the gun shop until you receive your stamp. This is what already happens with machine guns purchased from class-3 dealers. Those planning terrorist attacks, such as in Orlando, would have to plan well in advance.

Since the NFA system doesn’t run at light speed the way NICS does, the time is there to interface with investigative agencies so that they are made aware when someone on a terrorist watch list is looking to purchase an assault weapon. This would also result in a mandatory waiting period for assault rifles, without affecting other firearms.

I won’t attempt to address the debate about whether those on terrorist watch lists should be allowed to buy an assault weapon; there are good arguments on both sides. One thing is for certain: reclassifying semi-autos as NFA items would provide the freedom to treat them as more highly controlled, and to deny purchases to those under investigation (perhaps with a judge’s approval) without preventing them from purchasing other firearms for protection. Even if you don’t use the terrorist watch list at all, the time it takes for the NFA process to conclude would give investigative agencies enough of a heads-up to act on whatever evidence they do have, or to step up their efforts to prevent another tragedy. Love it or hate it, this is what the FBI did with the Playpen/Tor investigation; they feared some subjects were imminently going to act on plans to murder or rape children, and decided to make arrests.

At the moment, NICS is insufficient for information sharing: it’s instantaneous, giving no preparation time, and in many cases the law forbids information from NICS from being forwarded to investigative agencies.

  • No record of private transfers
  • Ban legislation is not retroactive (doesn’t affect panic buying)

The Assault Weapons Ban made the mistake of grandfathering rifles. When the NFA first went into effect, however, machine gun owners were given a window during which they could register their machine guns. Once that window closed, any unregistered MGs were considered illegal, and there are stiff penalties for possessing an unregistered machine gun. You can’t even bring such a gun to the range, because you have to carry the ATF form and stamp around with it. Many ranges check these if they see a machine gun, and there are also a number of ATF stings and monitoring operations going on at many ranges. In other words, that gun has to move completely underground, and most gun owners hate that idea. Within half a generation, unregistered assault weapons will end up in the hands of the owners’ children, who will be in the same boat – subject to prison if they do not turn them in or have them destroyed. This also snuffs out private sales of unregistered assault weapons, as there would be no legal way to register them outside of the initial registration window. In other words, once that window closes, an unregistered rifle turns into a stolen car – it’s a hot item, and very few are likely to touch it. There is a significant financial incentive for gun owners to register existing firearms in this case, as all of their panic buying will lose considerable value otherwise.

An all-out ban does nothing to address the millions of firearms already owned; going the direction of the NFA, however, forces all of them to be accounted for, or the owner risks the chance of criminal prosecution if they’re ever caught with an unregistered “assault weapon”.

As for personal, private transfers, those of machine guns are also illegal unless they go through the NFA process for the new owner. In other words, there’s a paper trail any time an assault weapon is sold, and the government is aware of who is in possession of it, and has their prints, photo, and other information. Today, you aren’t required to give any account of where a gun came from, whether it was purchased legally, or whether a background check was done – a gun could literally show up on the street, and there is no accountability at all.

3D Printing and 80% Builds

As far as 3D printing and 80% builds go, pretty much anyone can legally build their own semi-automatic rifle today. The gun industry manufactures 80% receivers – lower receivers without final machining – just unfinished enough that they don’t legally constitute a firearm. They can be purchased over the counter (or by mail order) without pesky criminal background checks, then completed into final firearms with minor machining. All of the other components of the rifle can then be purchased and assembled into a working rifle, skirting NICS or any other safeguards in place, just like a private sale. These builds have the added benefit of not needing to be registered (or even engraved with a serial number) unless you sell them; only then do they require an ATF Form 1. Gun owners have stocked up on both complete lower receivers (which are traceable and engraved) and 80% receivers (which are usually neither traceable nor engraved) in the event of an all-out “ban”, to eventually build out into complete rifles.

The way NFA restrictions are structured prevents gun owners from arbitrarily building their own restricted firearms (for example, machine guns or short-barreled rifles) without approval from the ATF (and only manufacturers are ever allowed to build new machine guns). By classifying semi-auto rifles under the NFA, the same restrictions would apply to AR-15s (and other semi-automatic rifles), forcing registration of existing complete stripped lowers and banning home-brew builds without registration. All that poor-man’s panic buying will have served little purpose, as every lower receiver will ultimately need to be registered just as a complete rifle would; otherwise it becomes a worthless, unsellable, unregistered firearm that the owner can’t even take to the range.

  • Don’t want to accidentally ban certain hunting/sporting rifles

The BATFE has the ability to issue rulings and make exceptions for specific hunting/sporting rifles they don’t want to consider assault weapons. Their ruling process occurs in writing, and they’ve handed down a number of specific rulings as new products are introduced. For example, a buttstock with a spring was once productized to use recoil (“bump fire”) to “legally” make a rifle fully automatic without modifying its fire control components. The ATF eventually ruled these machine guns and forced owners to remove the springs.

Paying For It and Funding Mental Health

The NFA originally started the same way most controls in this country do: taxes. A $200 transfer tax is paid when a machine gun is transferred… every time. If even a $100 tax were assessed per transfer, and a registration window were opened, the revenue generated from retroactive taxation of registration would likely be more than enough to pay for the additional resources. Since the system is already in place, it’s really more an expansion of personnel than anything.

Future revenue from this could be used to help fund mental health, another big piece of the puzzle in America that’s been starving for decades.

Would it Have Mattered?

Paddock, the Las Vegas shooter, bought 33 guns in the 12 months leading up to the shooting; some reports suggest he made a bulk purchase in 2016. The NFA process is slow, often taking six months or more to obtain a single tax stamp; between the $200 tax stamp fee, fingerprints for every application, an extensive background check, and other delays, Paddock would likely have been able to transfer only one, maybe two, semi-automatic rifles within that time frame, assuming he passed all checks and wanted to go through the hassle. He obviously didn’t want the hassle: Paddock chose a low-budget means of obtaining full-auto fire – a means the ATF had flirted with outlawing, but didn’t follow through on. He didn’t purchase (to our knowledge) any machine guns through the NFA program. Had he made several purchases, it would have put him on BATFE’s radar. Instead, he stayed off the grid by “hacking” together a full auto. If semi-automatic rifles had fallen under the NFA program, there’s a good chance he would have had the same aversion to the NFA process that he obviously had with buying machine guns, and not purchased semi-auto rifles.

Even if Paddock had decided at some point to go through the NFA process, it would have been considerably expensive – possibly enough to turn him off to the idea – and in many other cases of mass shootings, the shooter wouldn’t have been able to afford a semi-automatic assault rifle. During and after machine gun registration under the NFA, the prices of those firearms were driven up substantially, to 10-20 times their original price. A full-auto machine gun today costs anywhere from $10,000 to $20,000 simply due to supply and demand. Semi-automatic rifles, had they fallen under the NFA, would have been subject to the same economics and become exceedingly expensive. This would likely have dissuaded Paddock from purchasing 33; he may even have balked at the price of one and moved on to other firearms that would not have been able to cause as many fatalities in such a short period of time.

Because Paddock’s behavior seems to have been completely unpredictable, there’s no way to guarantee whether or not the NFA would have stopped the mass shooting – but it would definitely have frustrated it, and possibly minimized the number of fatalities significantly. The NFA would unquestionably have prevented several other mass murders, including (in my opinion) Sandy Hook, based on the circumstances we know about. Until our country has enough brains to flat-out ban semi-automatic rifles, a random shooter who carefully plans for years will always be a risk to some degree; the NFA, however, can help both minimize the extent of the damage and provide BATFE with enough insight to know what threats are out there.

Summary

A good starting point for legislation would be to define an assault rifle as any long gun with semi-automatic fire control and either [ a detachable magazine ] or [ a fixed magazine holding more than 5 rounds ], perhaps centerfire only. This would exempt a majority of semi-auto hunting rifles and plinker guns, but classify just about every other semi-automatic rifle (as it should). Perhaps this can be fine-tuned. I say five rounds because many fish and game rules limit hunters to 3 or 5 rounds. Again, we’re only talking about semi-automatic long guns here – pump shotguns, bolt rifles, lever guns, and most other hunting firearms are largely unaffected.

The end result is that you are not denying gun owners what they perceive as their right to own these firearms, nor do you have to care about all of the panic buying that happens every time there is a massacre. It can be made fully retroactive, make no attempt to force confiscation (which would fail miserably), and give gun owners a window to decide whether they want their assault rifles enough to submit to reasonable accountability.

The far right in the gun community will argue that this is tyranny, and that the government doesn’t have the right to know who owns an assault weapon. I disagree. We are not living in Hitler’s Germany, and if you’re going to possess an infantry rifle, you ought to be subject to some accountability. The gun community has enjoyed a long history of freedom from accountability in this country; however, it has also evolved from responsible gun owners into an anarchist organization. More people have died from gun violence in this country than in all of our major wars combined; I find that unacceptable. I do believe it’s time to start holding people to at least as much accountability for purchasing a semi-automatic firearm as we do for buying over-the-counter Sudafed. If the gun community were effective at educating their own, and at creating a generation of non-violent gun owners, they would have done so by now. If the answer to guns were really “more guns”, we would be a safer country today… it is very clear that this has not been the outcome.

This is not tyranny under any definition, nor is it a potential infringement on rights the way an all-out ban would be. As a gun owner, I personally support this legislation, and I believe many other gun owners would. Nor will this lead to a Hitler-esque gun grab, as there are still plenty of other firearms out there that are not so heavily controlled. Just as we control access to prescription medication versus over-the-counter, there are some firearms that require tighter controls.

Additional Thoughts

I’ll add to this my personal belief that any NICS or NFA check should also require that the buyer has completed a gun safety course in the last ten years; this could probably help us reduce the number of ignorant gun owners that both sides hate to see out there, and would help address another core problem: accidental deaths. Rifle safety classes are offered by the NRA, Sig Arms, and many other non-government organizations. Similar classes are required to obtain a concealed carry permit in some states, so the infrastructure is already there, along with a list of approved classes. You can obtain training from any number of private organizations you choose, and need only provide a certificate. If you’re going to own a semi-automatic rifle, you ought to be educated in how to be safe with it, unlike the vast majority of owners I’ve seen at the range, who are beyond ignorant. If the “well regulated” in the 2nd Amendment simply means “proficient”, as many gun rights activists claim, then there’s even a constitutional basis to expect that gun owners will be educated.

This article covered assault weapons specifically, but I still believe wholeheartedly that the much broader problem of gun violence requires us to fund mental healthcare in this country with deep pockets. A majority of all gun deaths are still suicides, or murder-suicides. While we must do something to prevent the string of massacres we’ve seen with assault rifles over the past few years, we’d be sorely mistaken not to address the grave mental health problem in our country. This is a separate issue from domestic terrorism, but is definitely where this discussion needs to go.

General Motors 2015-2016 Safety Issue w/Cruise Control [Ignored by Chevrolet]

I’ve filed the following safety issue with the NHTSA, after spending considerable time attempting to explain it to Chevrolet, only to get incoherent answers from people who don’t appear competent enough to understand the problem. If you’ve been in an accident caused by GM’s speed control, it’s possible that this defect came into play. I’ve been able to reproduce this glitch in 2015-2016 Silverado models; however, it’s likely to affect any vehicle with the same speed control. It most likely affects the GMC Sierra, as well as other trucks and vehicles using the same speed control system (possibly the Yukon, Suburban, Escalade, and Tahoe).

In the case below, speed control acts directly contrary to the way it is described in the user manual, and to how the driver expects it to behave. Chevrolet either doesn’t appear to understand the safety implications below, or has dismissed them. If you’ve been affected by this, I recommend you contact your attorney.

The final response I received from Chevrolet is to hold the “set” button in rather than press it multiple times – in spite of the fact that their own owner’s manual specifically states that pressing it briefly multiple times will lower the speed:

“To slow down in small increments, briefly press the SET– button. For each press, the vehicle goes about 1.6 km/h (1 mph) slower”

So Chevrolet’s “solution” is, rather than fix cruise control so that it behaves the way it’s documented in the manual, to have me change my driving habits and use cruise control in a way that is counter-intuitive and not standard in other vehicles, including other Chevrolet models. It is sad that software bugs like this are among the easiest to fix and issue a recall for, yet often appear to be the most likely types of problems to be dismissed or rationalized by Chevrolet. In the event this costs someone their life, I wanted this documented publicly, since Chevrolet has expressed no interest in correcting the problem or issuing a recall.

A CONDITION EXISTS WHERE, AFTER THE DRIVER HAS USED THE GAS PEDAL TO ACCELERATE, THEN HAS REMOVED THEIR FOOT FROM THE PEDAL, THEN PRESSES THE CRUISE “SET” BUTTON IMMEDIATELY OR A BRIEF MOMENT LATER, AND THEN IMMEDIATELY ATTEMPTS TO DECELERATE BY REPEATEDLY PRESSING MINUS “-” ON THE CRUISE CONTROL, THAT THE SPEED CONTROL BECOMES CONFUSED AND DISPLAYS MULTIPLE DIFFERENT SPEEDS, WHILE MAINTAINING THE ORIGINAL SPEED, EVEN THOUGH THE DRIVER BELIEVES THEY ARE DECELERATING. THIS CAN BE REPRODUCED ON ANY 2015-2016 SILVERADO MODEL BY FOLLOWING THESE STEPS: THROTTLE UP AND ACCELERATE (TO PASS, FOR EXAMPLE), REMOVE FOOT FROM ACCELERATOR, THEN IMMEDIATELY PRESS THE “SET” BUTTON, FOLLOWED BY 5-10 PRESSES ON THE DECELERATE “-” BUTTON; THE SPEED WILL SET AT 65, FOR EXAMPLE, THEN FLIP BETWEEN 64, 65, 63, 65, 62, 65, 61, 65, 60, 65, AND SO ON, MAINTAINING SPEED AT 65 EVEN THOUGH THE DRIVER IS INSTRUCTING THE VEHICLE TO DECELERATE AND THE REDUCED SPEED IS TEMPORARILY DISPLAYED. IT MAY TAKE 5-10 SECONDS FOR THE SPEED CONTROL TO CLEAR ALLOWING THE DRIVER TO MAKE CHANGES, HOWEVER THEY WILL STILL BE CRUISING AT 65. DURING THIS PERIOD, THE DRIVER DOES NOT REALIZE THAT THEY WERE NOT DECELERATING AT WHICH POINT THEY MAY TAP THE BRAKES TO DISENGAGE CRUISE, BUT HAVE LOST 5-10 SECONDS OF REFLEX TIME. THIS HAS PRESENTED A DANGEROUS CONDITION WHERE THE DRIVER BELIEVES THEY’RE DECELERATING WHEN TOO QUICKLY APPROACHING ANOTHER VEHICLE, RISKING COLLISION.
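
For readers curious how a defect like this might arise, here is a minimal sketch – in Python, purely for illustration, and in no way GM’s actual firmware – of one state-machine race that would produce exactly this symptom: a pending SET that commits asynchronously after a settling window, discarding any decrements received in the meantime even though the display briefly reflects them. Every name here, and the settling window itself, is an assumption.

    # Hypothetical illustration only: one design that would reproduce the
    # reported symptom (display flips 64, 65, 63, 65, ... while the truck
    # keeps cruising at 65). This is not GM's actual code.

    class CruiseControl:
        SETTLE_TICKS = 8  # assumed settling window (~5-10 s) after SET

        def __init__(self):
            self.commanded = None    # speed the throttle actually holds
            self.displayed = None    # speed shown on the dash
            self.pending_set = None  # SET speed waiting to commit
            self.settle = 0

        def press_set(self, vehicle_speed):
            # SET captures the current speed but commits asynchronously.
            self.pending_set = vehicle_speed
            self.displayed = vehicle_speed
            self.settle = self.SETTLE_TICKS

        def press_minus(self):
            if self.settle > 0:
                # BUG: while settling, "-" updates only the display;
                # the pending SET speed is untouched.
                self.displayed -= 1
            elif self.commanded is not None:
                self.commanded -= 1
                self.displayed = self.commanded

        def tick(self):
            # Periodic controller update.
            if self.settle > 0:
                self.settle -= 1
                self.displayed = self.pending_set  # display snaps back
                if self.settle == 0:
                    # The pending SET commits, silently discarding
                    # every decrement pressed during the window.
                    self.commanded = self.pending_set
                    self.pending_set = None

    cc = CruiseControl()
    cc.press_set(65)               # driver sets cruise at 65 mph
    for _ in range(5):
        cc.press_minus()           # display shows 64, 63, ...
        cc.tick()                  # ...then snaps back to 65
    for _ in range(CruiseControl.SETTLE_TICKS):
        cc.tick()
    print(cc.commanded)            # 65 - the decrements were lost

If the real defect resembles this at all, the fix is a few lines – apply decrements to the pending target rather than to the display – which is exactly the kind of change a software recall could deliver.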

Backdoor: A Technical Definition

Abstract

A clear technical definition of the term backdoor has never reached wide consensus in the computing community. In this paper, I present a three-prong test to determine if a mechanism is a backdoor: “intent”, “consent”, and “access”; all three tests must be satisfied in order for a mechanism to meet the definition of a backdoor. This three-prong test may be applied to software, firmware, and even hardware mechanisms in any computing environment that establish a security boundary, either explicitly or implicitly. These tests, as I will explain, take more complex issues such as disclosure and authorization into account.

The technical definition I present is rigid enough to identify the taxonomy that backdoors share in common, but is also flexible enough to allow for valid arguments and discussion.

1.0    Introduction

Since the early 1980s[1], backdoors and vulnerabilities in computer systems have intrigued many in the computing world and the government, and have both influenced and been influenced by popular culture. Shortly after the movie WarGames was released, Ronald Reagan discussed its plot, which revolved around a backdoor in a defense computer system, with members of Congress and the Joint Chiefs of Staff[2], which led to research into the government’s own risk assessments. Before the Internet was largely in place globally, computers at Lawrence Berkeley National Laboratory were compromised through an unknown vulnerability in the popular Emacs editor[3], the story of which led to a New York Times bestseller. Since the 1980s, and with the now global scale of the Internet, remote access trojans (RATs), root kits, and numerous other types of backdoors have been discovered in some of the world’s largest data breaches[4], and have made it into popular culture such as the CSI: Cyber television series, movies, and books. All of these events have involved what has been referred to as a backdoor, but without any clear definition to test the validity of that label.

While backdoors have become a significant concern in today’s computing infrastructure, the lack of a clear definition has led the media (and some members of the computing community) to misuse or abuse the word. System vulnerabilities that are clearly not backdoors are often reported as such in media news articles[5][6], helping to spread confusion among the general public. This has the capacity not only to cause a disconnect with non-technical readers, but also to engender distrust, misplaced attribution, and even panic.

By misappropriating the term backdoor, the media and/or the entertainment industry can incite the panic that all computer systems are as vulnerable and open to attack as the fictional NORAD defense center in WarGames, and that physical safety is always subject to imminent danger due to such widespread vulnerability. Modern-day paranoia has led to many conspiracy articles about power grids, dam computers, and other SCADA systems, painting a bleak picture of numerous doomsday scenarios[10]. While such systems are susceptible to real-world attacks, with the help of the media and a little fiction, the public’s fears can escalate beyond a healthy concern for security into paranoid delusions leading to stockpiling weapons and food, and even building underground bunkers. While backdoors are not necessarily the root cause of all of this paranoia, the attribution and conspiracy undertones of the word can help fuel it.

1.0.1 Need for a Definition

In addition to public panic due to cyber threat “fan fiction”, the lack of a clear technical definition of a backdoor stands to affect technical analysis, and possibly attribution, of newly discovered implants in computing devices, which have become increasingly common. By defining a backdoor, the technical community gains a framework by which it can identify, analyze, and attribute new weaknesses as they are discovered.

On a legal front, privacy legislation is anticipated in Congress, and pending legal cases already exist within the court system in which a technical definition of backdoor would be beneficial. Without a clear definition, these proceedings pose the risk of misinformation in criminal prosecution, search warrants, warrants of assistance, secret court proceedings, and proposed legislation – all by preventing a duly appointed legal body from adequately understanding the concept.

In February 2016, the Federal Bureau of Investigation sought an order under the All Writs Act to force Apple Inc. to assist in bypassing the security mechanisms of Apple’s own firmware in a terrorist investigation. Throughout the proceedings and the Congressional hearings that followed, the term backdoor was used heavily by both sides to describe the FBI’s order, as well as different scenarios describing future orders or proposed legislation. It is crucial, then, that there be an accepted definition of the term, as the very definition of backdoor stands to influence justice and legislation on a national, and possibly worldwide, stage. Any future attempt at a legal definition of a backdoor must clearly begin with a technical definition, one of which is presented here.

1.0.2 Prior Attempts at Definitions

Some attempts have been made to define backdoor; however, all fall short of being both specific enough to cover the intentional and subversive nature of backdoors, and general enough to avoid covering mechanisms that the computing community does not consider backdoors.

The Oxford Dictionary defines “back door” as “a feature or defect of a computer system that allows surreptitious unauthorized access to data”[13]. This definition makes several incorrect technical assumptions. First, by labeling the backdoor as either a feature or a defect, the definition incorrectly assumes that the mechanism is either designed to improve functionality (a feature), or that any unintentional vulnerability in a system could be considered a backdoor. Either falls short of what I propose as a definition, which rests somewhere in between: the mechanism is not a feature, but an intentionally placed component that is not disclosed to the user, and not the result of a programming error; otherwise any computer vulnerability could be considered a backdoor. The definition also fails to sufficiently cover the purpose of the backdoor, implying that its purpose is only access to data. There are many backdoors into systems that do not provide access to data whatsoever, but rather surrender control of the system to an actor, and many backdoors placed in security boundary mechanisms that do not protect data at all. The definition fails to acknowledge such backdoors.

The Linux Information Project (LINFO) defines “backdoor” as “any hidden method for obtaining remote access to a computer or other system”[14]. This definition fails to identify a backdoor as a specific mechanism within the software, instead defining it as a method, suggesting that any technique used to obtain remote access should be considered a backdoor. This is too general, and could classify anything from hacking methods to social engineering as a backdoor. It would also make worms, viruses, exploits, and other means of gaining unauthorized access backdoors, even though many in the computing community would not share that opinion. The second part of the definition makes remote access a mandatory requirement for a backdoor; however, this paper will demonstrate that backdoors can exist for purposes including local (non-remote) access, or even access by a different user on the same machine. In this paper, I argue that it is the actor, not the origin of access, that matters. Lastly, this definition is so broad that it would suggest that a software update mechanism (such as the kinds distributed with Linux distributions themselves) or other similar mechanisms could constitute backdoors.

Aside from generalized definitions, it is surprising that no academic papers could be found that specifically attempt to define the term. There are countless papers in which backdoors are documented and their taxonomy analyzed, yet no clear definition of backdoor was found that could be applied to a general component of a computer system. It would seem that for decades, the computing community at large has taken a “know it when I see it” approach to the term, without ever accepting a clear definition or test.

1.0.3 General Taxonomy

While backdoors have become increasingly complex and vary in design over time, all backdoors share the same basic taxonomy. They affect security mechanisms (more specifically, boundary mechanisms) in the following ways:

  • They operate without consent of the computer system owner
  • They perform tasks that subvert disclosed purposes
  • They are under the control of undisclosed actors

1.1    Purpose

The purpose of the three-prong test in this paper is to provide a basis for technical argument: to be able to effectively argue that a component within a security boundary mechanism constitutes a backdoor, or does not constitute a backdoor, and support that argument with consistent facts.

Essentially, this framework is designed to enable one to point at something, in a technical context, and argue, “this is a backdoor, and here are my facts to support that argument”, while also enabling someone else to argue, “no it isn’t, and here are my supporting arguments”, with the expectation that after thorough analysis, consensus may be achieved.

1.2    Definitions

Throughout this paper, the term mechanism is used to describe a boundary enforcement mechanism (security mechanism) that is being evaluated. A mechanism (or security mechanism) can be any piece of software, firmware, or hardware that establishes, either explicitly or implicitly, a security boundary. For example, an authentication mechanism explicitly establishes a security boundary by controlling user access. A software update mechanism implicitly establishes a security boundary by means of code control; that is, controlling the code introduced into a computer based on the user contract, which I will explain in depth. An encrypted channel establishes an implied security boundary by controlling who, or what, can communicate over a privileged channel of communication. All of these are referred to as security mechanisms throughout this paper.

When evaluating a mechanism, components of that mechanism may be explored; for example, an authentication mechanism with a component that allows “golden key” access. In this context, the mechanism is said to be “backdoored” if the component itself satisfies the requirements of a backdoor. Here, the malicious component, a mechanism in and of itself, is the overall subject of the evaluation within the context of the computer; however, it must be explored in the context of the larger security mechanism.

Throughout this paper, the term owner or computer’s owner is referenced. Because ownership is complex, this term is intended to mean one who has entitlements and authorization to control access on a computer. This is sufficient to address complex ownership models such as employer owned equipment.

2.0    Three-Prong Test

This section identifies three specific requirements a security mechanism must satisfy in order to meet the definition of a backdoor, and proposes three crucial questions that must be satisfied to meet these requirements.

2.1    Intent

The intent requirement determines whether or not the actions performed by the security mechanism, as intended by its manufacturer, were adequately disclosed to the owner of the computer. Typically, a backdoor exhibits malicious behavior by subverting a security boundary that is expected by the user; the requirement allows for this to satisfy the manufacturer’s intent, while remaining broad enough to accommodate mechanisms that violate a less straightforward security boundary, such as the software controls in a software update service.

“Does the mechanism behave in a way that subverts purposes disclosed to the computer owner?”

The concept of intent ties closely to user trust and perception. In other words, does the mechanism perform only the tasks that the user expects it to perform (or support tasks that the user expects the larger components to perform), or does it, by design, subvert these purposes? If the mechanism can exhibit undisclosed behavior that is contrary to the intents disclosed to the user by the manufacturer, then this satisfies the intent requirement.

Consider the code controls of a typical software update mechanism. The intent, as disclosed by the manufacturer, is to prevent unauthorized updates. Disclosure by the manufacturer about what types of updates are authorized establishes a user contract.

A user contract, as I refer to it herein, is an abstract construct whereby the manufacturer and the user have developed a mutual understanding and expectation of the intention and proper function of a mechanism. This construct allows the user to manage consent, which will be discussed in the next section. For example, the manufacturer will state that the purpose of software updates is to fix bugs and introduce new code into the system that the user would not find objectionable, according to the manufacturer’s privacy policies and end-user agreement.

Consider, as an example of a user contract, an authentication mechanism inside a router that provides maintenance access with a “golden key” password. Here, the manufacturer’s intent for the authentication module was disclosed to the user as a function that allows only authorized users into the system (understood to mean having a password that matches a known password created by an administrator). By allowing a maintenance password to bypass the administrator’s user list, the mechanism has subverted its original purpose in providing a security boundary, and violated the user contract.

Such discussions about intent can quickly become complicated. A manufacturer’s intent can change over time, and with that the user contract must be re-evaluated. For example, consider a software update that updates itself and adds a licensing component that the user may find objectionable. This new purpose must be expressed to the user prior to an update; otherwise the mechanism can be seen as having broken its user contract.

When new functionality is added to an existing user contract, additional disclosure is required in order to modify the user contract; this is frequently seen in practice. For example, release notes are displayed or published by software manufacturers prior to a software update. If intent has changed, the disclosed changes can continue to revise the user contract by again implying consent through disclosure, so long as the user continues to be able to make an informed decision about the mechanism running on their computer. Software that initially obtained the user’s consent, but then revised its intent without disclosure to the user, has violated the user contract and therefore invalidated consent.

2.1.1 Subverted versus exploited

Lastly, consider the terms intent and subvert, as incorporated into the definition. The term subvert takes into account a certain level of intentionality by design. This framework does not attempt to address the matter of negligence, but rather leaves that to existing legal remedies to explore. It does, however, leave open the argument that the manufacturer, for the purpose of exploitation, can leave a mechanism intentionally vulnerable; this is difficult to demonstrate in practice.

Whether or not the manufacturer’s intent for the mechanism is clear can determine whether or not stated purposes were subverted. There is enough freedom here to use other arguments to suggest intent through negligence, but also enough depth to ensure that a mechanism is not considered a backdoor simply because it is vulnerable. Some such arguments may overall be semantic ones, rather than technical ones.

2.2    Consent

The second requirement to determine if a mechanism meets the definition of a backdoor is a test of consent. This determines whether or not the owner of the computer has authorized the mechanism in question, based on the user contract established by means of disclosing their intent.

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

This requirement provides enough room to be sufficiently satisfied in cases where consent is compelled (therefore, not truly consent), as well as cases where consent cannot be revoked (such as a service that cannot be turned off, which is also not consent).

Consider the controls of our automatic software update service from the prior section. Software update services typically behave in such a way that their capabilities rest on owner consent; consent to the underlying controls themselves, however, is more or less implied. By activating software updates, the user is granting consent to the underlying security mechanisms on the premise that they will behave according to their stated intent; in other words, that they will only permit authorized code to be introduced into the system.

By enabling software updates, the owner implicitly grants consent to the underlying security mechanisms to place controls on the kind of software that is installed, but only to the extent of their disclosed intent. As long as these security controls are performing their disclosed tasks (in accordance with the user contract), such a mechanism would not satisfy this requirement to meet the definition of a backdoor, because it has the user’s consent to control the introduction of code accordingly.

In contrast, backdoors are mechanisms that are active without consent (that is, “unauthorized”), or that cannot be disabled by means made available to the owner. For example, consider a subcomponent of the software update mechanism that permits unauthorized software to be introduced into the system. The owner did not authorize this subcomponent (since it was not part of the user contract), and therefore did not grant consent for it to be active on the system – this satisfies the consent requirement to meet the definition of a backdoor. On the other hand, if the security mechanism was compromised (“hacked”), then it does not meet the consent requirement, because it was still running with the user’s consent. In this case, it is a compromised mechanism, but not a backdoor. This concept of compromised mechanisms will be explored in more detail throughout the paper. The intention of the manufacturer, which ultimately affects the user contract, plays a key role in determining the difference between the two.

Consider the following examples that would satisfy the consent requirement to meet the definition of a backdoor:

  • A software daemon that is installed when the computer owner runs a new application for the first time, and is not capable of being disabled through the user interface. Here, the owner is not given the opportunity to grant or revoke consent from its underlying mechanisms. (Note: legitimate software may also satisfy the consent requirement, but will not satisfy the intent requirement, or the access requirement, discussed next).
  • An authentication mechanism for router firmware that includes an undocumented subcomponent granting “golden key” access; that is, grants access if the given password matches a built-in maintenance password. Without knowledge of or the ability to disable this mechanism on the router, the user can be said to have not given consent.
  • An undocumented diagnostics service allowing the manufacturer to bypass user-level encryption to make repairs easier. Here, the mechanism is undocumented, and therefore cannot have the user’s consent.

As demonstrated by these examples, consent is inherently tied to the manufacturer’s intent, and ultimately the concept of a user contract established between the manufacturer and the user.

2.3    Access

The access requirement determines two factors:

  • Whether or not the mechanism can be controlled (or accessed) at all
  • Whether or not the mechanism is subject to control (or accessible) by an undisclosed actor

“Is the mechanism under the control of an undisclosed actor?”

The access requirement establishes both whether the mechanism is under control (that is, can be controlled or accessed at all by anyone other than users explicitly authorized by the computer owner), and whether or not it can be controlled or accessed by any undisclosed actors, such as unknown third parties. Here, the term undisclosed actors means anyone other than the computer owner, any users he or she has authorized to access the mechanism, and any disclosed external actors, such as the software manufacturer (delivering updates).

This is probably the most crucial requirement of the three tests, because it contrasts the difference between a backdoor and other types of malicious code, such as malware, trojans, viruses, and adware. All of these can be backdoors if they include a command-and-control (C2) component; however, not every instance of these is in fact a backdoor. A destructive piece of malware, for example, that is not controllable by the malware’s creator is not a backdoor, because it does not satisfy this requirement. A botnet payload that is controlled by a bot-master, however, does satisfy this requirement.

A piece of software that queues up information for future access is considered to be accessible by an actor. For example, a piece of malware that caches personal data, to be sent in batch, would satisfy this requirement towards meeting the definition of a backdoor.

This requirement also covers mechanisms involving access by a third party by means of proxy, for example a piece of ransomware in which the actor controls the software through decryption keys entered into the software by the computer owner. Such a mechanism satisfies this requirement towards meeting the definition of a backdoor.

This test is where the rubber meets the road when discussing government backdoors, such as the concept of pushing malicious software through an update mechanism without the computer owner’s knowledge. A software update mechanism, when behaving as intended, may not appear to satisfy the requirements of a backdoor; however, if the mechanism can be controlled by a third party (either directly, or indirectly via court order) to subvert its stated intent, then it has violated the user contract, invalidated consent, and satisfies the definition of a backdoor.
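
To make the structure of the test concrete, here is a minimal sketch of the three prongs expressed as boolean predicates. This is my own illustration of the framework, not part of its formal definition, and every name in it is hypothetical.

    # Minimal sketch of the three-prong test: a mechanism is a backdoor
    # only if all three prongs are satisfied; failing any one makes it
    # something else (a vulnerability, a compromised service, etc.).

    from dataclasses import dataclass

    @dataclass
    class Mechanism:
        subverts_disclosed_purpose: bool       # intent prong (2.1)
        active_without_owner_consent: bool     # consent prong (2.2)
        controlled_by_undisclosed_actor: bool  # access prong (2.3)

    def is_backdoor(m: Mechanism) -> bool:
        return (m.subverts_disclosed_purpose
                and m.active_without_owner_consent
                and m.controlled_by_undisclosed_actor)

    # The router with a "golden key" maintenance password: all prongs hold.
    assert is_backdoor(Mechanism(True, True, True))

    # An honest update service that gets hacked: intent and consent are
    # intact, so it is a compromised mechanism, not a backdoor.
    assert not is_backdoor(Mechanism(False, False, True))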

2.4    Liability

Does a legitimate software update service that is attacked and used to push malware to the computer system constitute a backdoor? On a technical level, no, because the intentions of the software have not changed (unless malicious intent by the manufacturer can be demonstrated). There is an argument to be made, however, that the service has effectively been backdoored – i.e. “a hacker turned the service into a backdoor”, or “a hacker backdoored the service” – but this is not a technical argument, only a semantic one. I make no attempt in this paper to define the proper use of backdoor as a verb.

3.0    Three-Prong Test Thought Examples

This section will examine a number of thought exercises, applying the three-prong test to various scenarios and expanding on the different ways it may be applied. These examples are intended for guidance only, and not to demand a specific technical conclusion about the examples used. The reader has the freedom to make counter-arguments and to test their interpretation of the three-prong test to arrive at their own conclusion.

3.1 Three-Prong Test Applied to Legislation

Considering recent “backdoor” legislation as it pertains to a legislated backdoor into end-user computer systems: legislation allowing the government to compel a manufacturer to install malicious software through a software update mechanism would not, by itself, necessarily constitute a backdoor, unless this information were withheld from customers. If a manufacturer were to fully disclose that specific government agencies had control over a software update mechanism, and that the mechanism could install software whose intent was to introduce code deemed objectionable by the user, then the mechanism would no longer satisfy the intent or the access prongs of the test.

In other words, in order for the government to legislate a mechanism that would not meet the definition of a backdoor, they must disclose to the owner that the government can install functionality through auto-update (the access prong), or disclose the functionality that can introduce code deemed objectionable by the owner (the intent prong). If the user still chooses to update their software, then this is not a backdoor, because it has been disclosed, and either its intent or its origins have been fully stated. It is, in fact, much worse than a backdoor at this point; it is a surveillance tool and should be treated as such in law.

Some may wish to use a term such as “government backdoor”, implying a disclosed form of a backdoor; however, this is a semantic argument and not a technical one, and it does little justice to the civil rights issues that are raised by compelling such a surveillance tool.

Consider the following more realistic scenario, however. If the government were to misrepresent or hide the intent and the origins of their capabilities to subvert the auto-update software controls, ordering this functionality in secret, then this has not been disclosed to the user, and pushing malware or spyware (under the direction of an actor undisclosed to the user) would meet the definition of a backdoor, invalidating consent given by means of a user contract. This will be explored in the next section of this paper.

The construction of the three-prong test provides enough flexibility for technical arguments to be made of what constitutes consent and disclosure on a national stage. Because this framework has led us to such arguments (which are out of scope), the framework itself has done its job in providing a construct in which these arguments can be explored.

3.2    Three-Prong Test Applied to Secret Court Orders

In today’s legal landscape, secret court orders are a possibility. In such scenarios, we are no longer discussing disclosed actors or intent, but rather secret orders, such as those going through a FISA court (Section 702 orders) or secret orders under the All Writs Act. In these cases, our hypothetical software update service could unwittingly become a backdoor if the government chose to quietly control it without any disclosure to the user.

In the same way, for the manufacturer to be ordered to keep such capabilities secret would be to turn the manufacturer into an arm of government for the express purpose of creating a backdoor, and the manufacturer could be considered partially liable for the consequences of doing so. Those who control the mechanism dictate its intent, and so if the government is partially in control of the mechanism, then its intentions must become part of the overall test. In such a case, the functionality of the software would likely subvert the intent disclosed to the user. Consent would similarly become invalidated, resulting in a software update mechanism that qualifies as a backdoor by definition.

3.3    Three-Prong Test Applied to Apple File Relay

Shortly after the release of research in 2014[7], Apple’s file relay service became the subject of debate in the information security community over whether or not it constituted a backdoor.

File relay was an Apple service that ran without the user’s knowledge on millions of iOS devices. The service was a mechanism that bypassed backup password encryption on iOS 7 and below, and was accessible either by using an existing pair record from a desktop machine, or by creating one by pairing an unlocked device on-the-fly. It was not a backdoor into the device’s operating system itself; rather, I argued that it was an encryption backdoor: it provided a way to bypass a critical security boundary against data theft, and subverted the backup password’s own stated purpose of allowing paired devices to be better secured against data theft. Applying the three-prong test to this service, my arguments are as follows.

3.3.1 Intent

“Does the mechanism behave in a way that subverts purposes as disclosed to the computer owner?”

The actual purpose of the file relay service remains unclear; however, none of its functionality was ever disclosed to the user, nor was it disclosed that allowing a device to pair with a desktop machine enabled the capability to bypass backup encryption. After the research was published, Apple argued that its intent was diagnostics, and that the user (by simply using iOS) had consented to diagnostics; however, this rationale was only offered after the fact.

My counter-argument was that a diagnostic service constantly running on the device was arguably not within the scope of the user contract. In fact, all other known diagnostic services in iOS were on-demand (with user consent), whereas file relay was “always on”. Its functions included defeating a security mechanism that was explicitly provided to the user. Once the research made the existence of this service known to the average user, Apple promptly disabled it in all subsequent firmware updates due to public outcry, demonstrating that users did not consider it part of Apple’s stated intent.

3.3.2 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

The existence of the file relay service was not disclosed to any user, and it was active on millions of iOS devices without the user’s consent. The user had no way to disable this mechanism, even after research made its existence known.

3.3.3 Access

“Is the mechanism under the control of an undisclosed actor?”

The mechanism was subject to access by anyone with a pair record; it required no authentication by the manufacturer to be used, nor did Apple provide any other means of controlling access to this feature.

Because the capability remained undisclosed, I argue that the user’s ignorance of it distorted their perception of pairing relationships, since the user believed that backup encryption prevented a paired machine from dumping the device’s content. This may have caused some users to be more lax about which devices they chose to pair with (such as community devices, work devices, or the devices of family and friends).

The service was available over any USB or WiFi connection capable of using or generating a pair record. I argued that by defeating the security mechanism explicitly given to the user (backup encryption), file relay extended access beyond what the user’s more restrictive pairing practices would otherwise allow, and extended physical access as well, since a pair record could be created on-the-fly.

3.3.4 Discussion

So was file relay a backdoor? It clearly satisfied the consent prong and, in my opinion, the access prong. The open question is whether the mechanism satisfied the intent prong, and this is where many such technical conversations will end up. Apple may very well not have intended to create a mechanism that subverts its own security; this could have been a programming oversight. On the other hand, the mechanism was very clearly designed without regard to backup encryption. The intent behind crossing the security boundary is what matters most here, and the boundary being crossed was backup encryption. The remaining question is whether defeating backup encryption was intentional or a design error.

Law enforcement most definitely exploited file relay: it was used widely in a number of commercial forensic tools to bypass user encryption in criminal prosecutions – a capability the user was not expecting from Apple. Under this definition, however, it is the manufacturer’s intent that matters in satisfying this requirement.

Depending on how one interprets the intent prong, the broader intention to subvert encryption could make file relay a backdoor, while the narrower intention of a poorly designed diagnostics tool could lead one to conclude it was merely poor engineering. One might also argue that law enforcement forensics tools backdoored the file relay service, which would make it a backdoor belonging to those product manufacturers, and not Apple. That, however, is more of a semantic argument than a technical one.

3.4    Three-Prong Test Applied to Clipper chip

The Clipper chip was a cipher chipset developed and endorsed by the United States National Security Agency[8]. The chip was designed to provide a key escrow enabling law enforcement, as a third party, to decrypt any messages sent through the chip. It is considered by most to be the quintessential example of a “backdoor”.

The three-prong test proposed here analyzes a mechanism in the context of the computer it is installed in, so there are different contexts in which to analyze the Clipper chip. One could consider the standalone chip, which incorporates a subcomponent providing a “backdoor” key escrow through its Law Enforcement Access Field (LEAF); however, since the chip by itself arguably does not constitute a computer, the test is more correctly applied to the chip in the context of a computer it has been installed in. We will therefore analyze the chip from the perspective of an installation inside the AT&T TSD-3600.
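For concreteness, Blaze[8] documents the LEAF as a 128-bit field transmitted alongside each communication: a 32-bit unit identifier, the 80-bit session key encrypted under the device’s unit key, and a 16-bit checksum, all wrapped under a global family key. A minimal sketch of that layout follows; the field names are my own, for illustration only, and this is not a working implementation of the escrow scheme.

    from dataclasses import dataclass

    @dataclass
    class LEAF:
        """Law Enforcement Access Field layout, per Blaze[8] (128 bits total).

        Illustrative only; not a working implementation of SKIPJACK or the
        escrow protocol.
        """
        unit_id: int        # 32-bit identifier of the transmitting Clipper unit
        wrapped_key: bytes  # 80-bit session key, encrypted under the unit key
        checksum: int       # 16-bit checksum a peer uses to validate the LEAF

    # Blaze's observed protocol failure: with only 16 bits of checksum, roughly
    # 2**16 trial LEAFs suffice to forge one that validates without carrying a
    # usable escrowed key.

The small checksum is what Blaze’s paper exploited; the layout itself is what makes the chip a surveillance subcomponent when embedded undisclosed in a product.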

3.4.1 Background

The Clipper chip was designed as a drop-in replacement for the DES [NBS77] cryptographic chipset[8]. The only product it ever made it into was the AT&T TSD-3600 Telephone Security Device. While the capabilities of the chip had been made known to the public, the AT&T TSD-3600 itself was never sold to the public. It was, however, used internally within various government agencies.

The user manual made only vague disclaimers about cryptanalytic attacks and nowhere stated that one intent of the device was surveillance capability[9]. Nor did it make any mention of the government, or any agencies, as actors with control of or access to the device[9].

In the analysis to follow, consider the three-prong test as applying from the perspective of a government employee who has been given a TSD-3600 to use, or a hypothetical end consumer (such as a CEO) who has purchased the device, had it been available for public consumption.

It is important to note that any peripheral disclosure beyond that of the manufacturer is irrelevant for our purposes. That the Clipper chip was known, by way of the media, to allow for government surveillance does not satisfy the demands of disclosure; this is a technical position to be argued outside the scope of the backdoor test. For our purposes here, we will treat external factors such as media coverage as irrelevant to disclosure.

3.4.2 Intent

“Does the mechanism behave in a way that subverts purposes as disclosed to the computer owner?”

The purposes disclosed in the TSD-3600 manual[9] did not include a surveillance mechanism for law enforcement that bypassed the user privacy boundary. The manual made no attempt to notify the user of this capability, and there is nothing on record to demonstrate that AT&T made this capability known to the end-user in any other way. The intent requirement is satisfied.

3.4.3 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

In the case of the TSD-3600, the user did not give consent for the Clipper’s LEAF mechanism to be active, and had no ability to disable it. Therefore, the consent requirement is satisfied.

3.4.4 Access

“Is the mechanism under the control of an undisclosed actor?”

The mechanism was under the control of (or accessible by) an undisclosed actor, namely any government agency capable of sending the correct signal to the device. The access requirement is satisfied.

3.4.5 Analysis

The TSD-3600 incorporated a chip designed to give surveillance capabilities to the government. While the chip itself, out of context, is merely a surveillance-backdoor “kit” of sorts, its undisclosed use in a product demonstrates its implementation as a backdoor. The “mechanism” in question here is the LEAF, which subverts the user privacy contract.

One might argue that court proceedings and media attention on the national stage could constitute disclosure. In that scenario, a government employee who actively used the TSD-3600 while fully aware of its surveillance capabilities was not necessarily using what they considered a backdoored product; to them it may better fit the definition of an internal surveillance device.

Had the implementation been different, the Clipper could conceivably have been used in ways that would not constitute a backdoor, but rather a disclosed surveillance tool, for much the same reasons that modern Mobile Device Management is not considered a backdoor. For example, if AT&T had published in the manual that the Clipper’s surveillance capabilities existed in the device, or made known that government actors had control of or access to the cipher routines inside the TSD-3600, then the user would have been informed of the intent of the unit and could have made an informed decision to use a different solution.

On the other hand, had legislation passed mandating Clipper’s use in all secure telephony products, the user arguably would have had no way to revoke consent. Disclosure would then determine whether the Clipper was a secret backdoor or, had its purpose been plainly disclosed to all end-users, a mass surveillance tool operating without user consent. I will reiterate: the civil rights atrocity of a government compelling the use of a mass surveillance tool on its citizens goes far beyond the malicious intent of a backdoor, and should be looked upon with more serious consequences than a backdoor.

3.5    Three-Prong Test Applied to Computer Worms

In this section, we examine two of the most destructive worms in history: My Doom and Code Red II. The My Doom worm affected one million machines, with reported damages of $38 billion; Code Red II also affected one million machines, with reported damages of $2.75 billion[11]. One of these two worms’ mechanisms will satisfy the definition of a backdoor, while the other will not.

3.5.1 Intent

“Does the mechanism behave in a way that subverts purposes as disclosed to the computer owner?”

The My Doom payload arrived as an email attachment that subverted the user’s expectation of an attachment containing a copy of an undelivered message; it had not established a user contract of any kind, and delivered its payload by means of deception. It clearly satisfies this requirement of the definition of a backdoor.

The Code Red worm was never disclosed to the user at all, and therefore could not have formed a user contract of any kind. It clearly satisfies this requirement of the definition of a backdoor.

3.5.2 Consent

“Is the mechanism, or are subcomponents of the mechanism, active on a computer without the consent of the computer’s owner?”

The My Doom worm was transmitted via email, masquerading as a mail delivery error message. When the user clicked on the included attachment, the worm executed and resent itself to all email addresses found in the user’s address book and other files. Here, different arguments about consent can be made. One may argue that the user granted consent by executing the file. On the other hand, one may argue that the user never intended to execute a file, but to open a mail attachment. The latter argument suggests that consent was contingent upon the user’s understanding of the intent of the attachment, which turned out to be misrepresented, and therefore consent was never given.

Most would defer to the latter argument as the most valid; however, the requirement is flexible enough that more complex cases involving consent may be effectively argued on both sides. For the purposes of this paper, we shall conclude that My Doom satisfies this requirement of the definition of a backdoor.

The Code Red worm affected machines running Microsoft’s IIS web server. The worm spread by exploiting a buffer-overflow vulnerability, sending a long buffer of padding characters followed by exploit code. Code Red did not obtain user consent and had no interaction with the user; it invited itself into the computer system and executed without any access being granted to it. It clearly satisfies this requirement of the definition of a backdoor.
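As an aside, this propagation method left a distinctive fingerprint: infected hosts issued GET requests for /default.ida padded with long runs of a repeated character (“N” for the original Code Red, “X” for Code Red II) followed by %u-encoded shellcode[12]. A minimal log-scanning sketch of that signature follows; the log file name is hypothetical and the pattern is simplified for illustration.

    import re

    # Simplified signature: /default.ida? followed by a long run of the same
    # padding character (N or X), per the CERT advisory[12].
    CODE_RED_PATTERN = re.compile(r"GET\s+/default\.ida\?([NX])\1{50,}")

    def scan_log(path: str) -> list[str]:
        """Return log lines whose request matches the Code Red signature."""
        hits = []
        with open(path, "r", errors="replace") as log:
            for line in log:
                if CODE_RED_PATTERN.search(line):
                    hits.append(line.rstrip())
        return hits

    if __name__ == "__main__":
        # "access.log" is a placeholder for an IIS or proxy log file.
        for hit in scan_log("access.log"):
            print(hit)

The lack of any user interaction in this sequence is precisely why the consent prong is so clearly satisfied: the request itself is both the delivery vehicle and the execution trigger.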

3.5.3 Access

“Is the mechanism under the control of an undisclosed actor?”

The purpose of the My Doom worm is believed to have been to launch a distributed denial-of-service attack against SCO Group by flooding the host www.sco.com with traffic. This functionality was baked into the worm, and the variants analyzed for the purposes of this paper did not subject the computer to any kind of remote access by a hacker or other actor. Because the worm was completely self-contained and could not be controlled by any outside actor, My Doom does not satisfy the access requirement, and therefore does not satisfy the definition of a backdoor.

The first strains of Code Red were self-contained; however, the Code Red II variant installed a mechanism allowing a remote attacker to control the infected computer[12]. This remote access mechanism satisfies the access requirement, and therefore, with all three prongs satisfied, Code Red II includes mechanisms that fall under the definition of a backdoor.

This requirement is also satisfied by download mechanisms in other worms. For example, the ILOVEYOU worm downloaded an executable from the Internet, which then ran on infected computers. Because the attacker could change this executable, one could effectively argue that the attacker had access to infected computers by means of the remote code.

4.0    Technical Definition of a Backdoor

If you take these three prongs and combine them into a single statement, the result is a reasonable technical definition of a backdoor:

A backdoor is a component of a security boundary mechanism that is active on a computer system without the consent of the computer’s owner, performs functions that subvert purposes disclosed to the computer’s owner, and is under the control of an undisclosed actor.

With this definition, a mechanism can be said to be backdoored if it contains a component that is a backdoor.
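Read formally, the definition is a conjunction of the three prongs: a component is a backdoor only if all three hold. A minimal sketch of that structure follows; the type and field names are my own, chosen for illustration.

    from dataclasses import dataclass

    @dataclass
    class Component:
        """Hypothetical model of a security boundary component under analysis."""
        subverts_disclosed_purpose: bool       # intent prong
        active_without_owner_consent: bool     # consent prong
        controlled_by_undisclosed_actor: bool  # access prong

    def is_backdoor(c: Component) -> bool:
        # All three prongs must be satisfied; failing any one disqualifies it.
        return (c.subverts_disclosed_purpose
                and c.active_without_owner_consent
                and c.controlled_by_undisclosed_actor)

    # Per the analysis above: My Doom satisfies intent and consent but not
    # access, so it is not a backdoor; Code Red II satisfies all three.
    assert not is_backdoor(Component(True, True, False))  # My Doom
    assert is_backdoor(Component(True, True, True))       # Code Red II

The conjunction is what keeps the definition narrow: each prong alone describes plenty of legitimate software, and it is only their combination that isolates the backdoor.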

It is the opinion of the author that this definition is sufficiently specific not to be misused. It is not so broad that it could describe every component of a computer system as a backdoor: components without a hidden purpose fall under the intent disclosed to the user and under informed or implied consent, and operate as part of the software’s normal function. Such services are also not under the control of any outside party, but run as autonomous components on the system (such as a task scheduling service).

The wording is also specific enough to apply to purposeful code that intentionally performs tasks in violation of its user contract while under the control of an actor. This definition provides a solid construct, but also enough room for interpretation and counter-arguments.

5.0    Conclusions

Defining a mechanism that has been presented in many forms is no easy task, and there may never be an entirely perfect, black-and-white definition. This paper has described the taxonomy of backdoors so as to address their commonalities in a way that provides an adequate technical structure for analyzing virtually any security boundary mechanism of a computer. A good definition of a backdoor must be able to distinguish a backdoor from other classes of malware and from legitimate security mechanisms, and we have done so here. Only time and exposure to a number of technical challenges will determine the efficacy of this three-prong test; however, applying it to numerous examples has thus far demonstrated it to be robust enough for consideration within the community.

6.0    Acknowledgments

Special thanks to peer reviewers Dino Dai Zovi and Wesley McGrew, whose insight helped add definition to these concepts.

Citations

[1] Wargames (June 3, 1983), Motion Picture. MGM Pictures.

[2] Brown, Scott (July 21, 2008). “WarGames: A Look Back at the Film That Turned Geeks and Phreaks Into Stars”. Wired.

[3] Stoll, Cliff (September 13, 2005). “The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage”.

[4] Zetter, Kim (December 23, 2015). “This Year’s 11 Biggest Hacks”. Wired.

[5] Haselton, Todd (February 7, 2014). “Target Hackers Used HVAC Credentials for Backdoor Access”. Techno Buffalo.

[6] Ryge, Leif (February 27, 2016). “Most software already has a “golden key” backdoor: the system update”. ArsTechnica.

[7] Zdziarski, Jonathan (January 26, 2014). “Identifying Backdoors, Attack Points, and Surveillance Mechanisms in iOS Devices”. International Journal of Digital Forensics and Incident Response.

[8] Blaze, Matt (August 20, 1994). “Protocol Failure in the Escrowed Encryption Standard”.

[9] AT&T (September 30, 1992). “AT&T TSD User’s Manual Telephone Security Device Model 3600”. Archived at http://www.beatriceco.com/bti/porticus/bell/pdf/bsp/106919921_att_tsd_3600_users_manual_09_30_92.pdf

[10] National Geographic. “Doomsday Preppers”. Television series.

[11] Pudwell, Sam (October 24, 2015). “The most destructive computer viruses of all time”.

[12] CERT (August 6, 2001). “Code Red II: Another Worm Exploiting Buffer Overflow in IIS Indexing Service DLL”.

[13] Oxford Dictionary. Definition 1.1.

[14] Linux Information Project (LINFO). Definition. Archived from http://www.linfo.org/backdoor.html.

WSJ Describes Reckless Behavior by FBI in Terrorism Case

The Wall Street Journal published an article today citing a source saying that the FBI plans to tell the White House it “knows so little about the hacking tool that was used to open terrorist’s iPhone that it doesn’t make sense to launch an internal government review”. If true, this should be taken as an act of recklessness by the FBI in the Syed Farook case: the FBI apparently allowed an undocumented tool to run on a piece of high-profile, terrorism-related evidence without adequate knowledge of the tool’s specific function or its forensic soundness.

Best practices in forensic science dictate that any forensic instrument must be tested and validated, and accepted as forensically sound, before it is used on live evidence. Such a tool must yield predictable, repeatable results, and an examiner must be able to explain its process in a court of law. Our court system expects this, and allows tools (and examiners) to face numerous challenges based on the credibility of the tool, which can only be established by rigorous analysis. The FBI’s admission that it has so little knowledge of how the tool works is an admission that the science behind the tool was never evaluated; its core functionality was never examined in any meaningful way. Knowing how the tool managed to get into the device is the bare minimum I would expect anyone to know before shelling out over a million dollars for a solution, especially one that was going to be used on high-profile evidence.
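To make “repeatable results” concrete: one basic element of a validation harness is to acquire the same test device twice with the tool under evaluation and compare cryptographic hashes of the resulting images. A minimal sketch follows; the image file names are hypothetical, and in practice live acquisitions rarely hash identically, which is precisely why documenting expected variance matters.

    import hashlib

    def sha256_of(path: str) -> str:
        """Stream a file through SHA-256 so large images don't exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical images from two acquisition runs against the same device.
    run1 = sha256_of("acquisition_run1.dd")
    run2 = sha256_of("acquisition_run2.dd")

    if run1 == run2:
        print("Acquisitions are bit-identical:", run1)
    else:
        print("Tool is not repeatable; hashes differ:", run1, run2)

This kind of check is only the floor, not the ceiling, of validation; it says nothing about what the tool changed on the device to produce the image in the first place.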

A tool should not make changes to a device, and any changes should be documented and repeatable. There are several other variables to consider in such an effort, especially when imaging an iOS device. Apart from changes made directly by the tool (such as overwriting unallocated space, or portions of the file system journal), simply unlocking the device can cause the operating system to make a number of changes, start background tasks which could lead to destruction of data, or cause other changes unintentionally. Without knowing how the tool works, or what portions of the operating system it affects, what vulnerabilities are exploited, what the payload looks like, where the payload is written, what parts of the operating system are disabled by the tool, or a host of other important things – there is no way to effectively measure whether or not the tool is forensically sound. Simply running it against a dozen other devices to “see if it works” is not sufficient to evaluate a forensics tool – especially one that originated from a grey hat hacking group, potentially with very little actual in-house forensics expertise.

It is highly unlikely that any agency could effectively evaluate the forensic soundness of any tool without having an understanding of how it works. The FBI’s arguments to the White House with regards to this appear to many as an attempt to simply skirt the vulnerabilities equities process.

There are only two possible conclusions to draw from all of this: either the FBI is lying to the White House (misleading the President) and actually does possess enough knowledge about the tool to warrant a review, or the FBI never evaluated and validated the safety of this tool, never learned how it worked, and recklessly used it on a piece of terrorism-related evidence that was apparently so high profile as to warrant an egregious abuse of the constitution in ordering Apple to assist, yet so inconsequential that the FBI didn’t have a problem running an ordinary jailbreak tool on it. The latter would not fall short of misleading the court.

The Syed Farook case has been fraught with recklessness. Numerous mistakes were made early on, as I’ve written about, such as changing the iCloud password and possibly even powering down the device (or letting it die), locking the encryption. When the FBI demonstrated that a mere 30 days was all it took to get into the iPhone, many interpreted this as proof that adequate due diligence had not been done before filing for an All Writs Act order against Apple. Beyond this case, the FBI pulled out of its NY iPhone case after the passcode was given to them – further suggesting the FBI’s unwillingness or inability to do its job, to the degree of abusing the All Writs Act as an alternative to good police work.

Ironically, the NY case highlighted the DOJ’s reluctance to use an undocumented hacking tool named IP-BOX, essentially a “black box” for brute-forcing PINs on iOS 7/8 devices, and listed that reluctance as one major reason Apple’s help was required. The FBI now claims to have done the very thing it argued against in the NY case: use an undocumented, opaque hacking tool that it was unable to fully understand. It would seem that situational ethics are in play here.

This sets a dangerous practice in motion: the FBI has offered this tool to any other law enforcement agency that needs it. The FBI is thus endorsing, for every kind of case that could pass through our court system, the use of an untested tool whose workings it does not understand. A tool that was tested (if at all) for one very specific case is now being used on a broad range of data and evidence, which it could easily damage or alter, or, more likely, see thrown out of cases as soon as it is challenged. If the FBI truly does not know how this tool works, it has created an extremely dangerous situation for all of us.
