Microsoft has no problem taking the $10B JEDI cloud contract if it wins

The Pentagon’s $10 billion JEDI cloud contract bidding process has drawn a lot of attention. Earlier this month, Google withdrew, claiming ethical considerations. Amazon’s Jeff Bezos responded in an interview at Wired25 that he thinks that it’s a mistake for big tech companies to turn their back on the U.S. military. Microsoft president Brad Smith agrees.

In a blog post today, he made clear that Microsoft intends to be a bidder in government/military contracts, even if some Microsoft employees have a problem with it. While acknowledging the ethical considerations of today’s most advanced technologies like artificial intelligence, and the ways they could be abused, he explicitly stated that Microsoft will continue to work with the government and the military.

“First, we believe in the strong defense of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft,” Smith wrote in the blog post.

To that end, the company wants to win that JEDI cloud contract, something it has acknowledged from the start, even while criticizing the winner-take-all nature of the deal. In the blog post, Smith cited the JEDI contract as an example of the company’s desire to work closely with the U.S. government.

“Recently Microsoft bid on an important defense project. It’s the DOD’s Joint Enterprise Defense Infrastructure cloud project – or “JEDI” – which will re-engineer the Defense Department’s end-to-end IT infrastructure, from the Pentagon to field-level support of the country’s servicemen and women. The contract has not been awarded but it’s an example of the kind of work we are committed to doing,” he wrote.

He went on, much like Bezos, to frame his company’s stance in patriotic rhetoric rather than as a matter of winning lucrative contracts. “We want the people of this country and especially the people who serve this country to know that we at Microsoft have their backs. They will have access to the best technology that we create,” Smith wrote.

Microsoft president Brad Smith. Photo: Riccardo Savi/Getty Images

Throughout the piece, Smith walked a fine line, asserting a patriotic duty to support the U.S. military while carefully conceding that opinions will differ across a large and diverse workforce (some of whom aren’t U.S. citizens). Ultimately, he believes it’s critical that tech companies be included in the conversation when the government uses advanced technologies.

“But we can’t expect these new developments to be addressed wisely if the people in the tech sector who know the most about technology withdraw from the conversation,” Smith wrote.

Like Bezos, he made it clear that the company leadership is going to continue to pursue contracts like JEDI, whether it’s out of a sense of duty or economic practicality or a little of both — whether employees agree or not.

Keeping artificial intelligence accountable to humans

As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: That we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.

I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later—as an engineer researching artificial intelligence, privacy and machine-learning algorithms—I’m seeing that the promise that AI can free us from subjectivity or bias is so far proving disappointing. We are creating intelligence in our own image. And that’s not a compliment.

Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Job-search algorithms were more likely to suggest higher-paying jobs to male searchers than to female ones. Algorithms used in criminal justice also displayed bias.

Five years after those findings, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why an algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.

Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.

Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.
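Findings like these come from disaggregated evaluation: measuring error rates for each demographic group rather than reporting a single aggregate number. The sketch below is purely illustrative (the grouping labels, counts and function are invented, not drawn from the studies cited above); it shows how one overall error rate can mask a large gap between groups.

```python
# Minimal, illustrative sketch of a disaggregated evaluation; the groups,
# labels and counts below are invented, not data from the cited studies.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# An overall error rate of 18% (36 errors in 200 records) hides the gap.
sample = (
    [("light-skinned male", "m", "m")] * 99
    + [("light-skinned male", "m", "f")] * 1
    + [("dark-skinned female", "f", "f")] * 65
    + [("dark-skinned female", "f", "m")] * 35
)
print(error_rate_by_group(sample))
# {'light-skinned male': 0.01, 'dark-skinned female': 0.35}
```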

AI systems are often only as intelligent—and as fair—as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data—profiles of excellent nurses who have mostly been female—it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
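To make the nurse-hiring example concrete, here is a minimal, hypothetical sketch (the data, column names and weights are invented for illustration, not taken from any real hiring system) of how a model trained on historical decisions that favored one gender ends up scoring equally skilled candidates differently.

```python
# Hypothetical sketch: a toy hiring model trained on historical decisions.
# All features, weights and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Historical hiring favored female candidates independent of skill, so the
# "was hired" label encodes that preference alongside genuine skill.
is_female = rng.binomial(1, 0.5, n)
skill = rng.normal(0.0, 1.0, n)
was_hired = (skill + 1.0 * is_female + rng.normal(0.0, 0.5, n) > 0.5).astype(int)

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, was_hired)

# The model learns a positive weight on the gender feature, so equally
# skilled male and female candidates receive different scores.
print(dict(zip(["skill", "is_female"], model.coef_[0].round(2))))
print("female, average skill:", model.predict_proba([[0.0, 1]])[0, 1].round(2))
print("male, average skill:  ", model.predict_proba([[0.0, 0]])[0, 1].round(2))
```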

Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that defeated human champions on the television game show “Jeopardy!” in 2011. Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.

We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.

Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan, or the length of a prison sentence, AI will have to be made more transparent—and more accountable and respectful of society’s values and norms.

Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.

The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions—by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.
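As one hypothetical illustration of such a process (the class names, fields and threshold below are assumptions for the sketch, not anything prescribed by the GDPR), an appealed or low-confidence automated decision could be routed to a human reviewer rather than applied automatically, with its automated origin recorded so it can be disclosed to the person affected.

```python
# Hypothetical sketch of routing automated decisions to human review; the
# names, fields and threshold are assumptions made for illustration.
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    score: float            # output of some automated model for a sensitive decision
    automated: bool = True   # recorded so the subject can be told a machine decided
    appealed: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def route(self, decision: Decision, margin: float = 0.4) -> Decision:
        # Appealed or low-confidence decisions go to a human reviewer
        # instead of taking effect automatically.
        if decision.appealed or abs(decision.score - 0.5) < margin:
            decision.automated = False
            self.pending.append(decision)
        return decision

queue = ReviewQueue()
result = queue.route(Decision(subject_id="applicant-42", score=0.55, appealed=True))
print(result.automated, len(queue.pending))  # False 1 -> a human will review it
```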

Today people who have been misidentified—whether in an airport or an employment database—have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.

Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.

To realize the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing bias, will require a range of experts and regulators to think more deeply not only about what AI can do, but what it should do—and then teach it how.