Applied gets $2M to make hiring fairer — using algorithms, not AI

London-based startup Applied has bagged £1.5M (~$2M) in seed funding for a fresh, diversity-sensitive approach to recruitment that deconstructs and reworks the traditional CV-bound process, drawing on behavioural science to level the playing field and help employers fill vacancies with skilled candidates they might otherwise have overlooked.

Fairer hiring is the pitch. “If you’re hiring for a product lead, for example, it’s true that loads and loads of product leads are straight, white men with beards. How do we get people to see well what is it actually that this job entails?” founder and CEO Kate Glazebrook tells us. “It might actually be the case that if I don’t know any of the demographic background I discover somebody who I would have otherwise overlooked.”

Applied launched its software-as-a-service recruitment platform in 2016, and Glazebrook says it has so far been used by more than 55 employers to recruit candidates for more than 2,000 jobs, with more than 50,000 candidates applying via Applied to date.

The employers themselves are also a diverse bunch, not just the usual suspects from the charitable sector, with both public and private sector organizations, small and large, and from a range of industries, from book publishing to construction, signed up to Applied’s approach. “We’ve been pleased to see it’s not just the sort of thing that the kind of employers you would expect to care about care about,” says Glazebrook.

Applied’s own investor Blackbird Ventures, which is leading the seed round, is another customer — and ended up turning one investment associate vacancy, advertised via the platform, into two roles — hiring both an ethnic minority woman and a man with a startup background as a result of “not focusing on did they have the traditional profile we were expecting”, says Glazebrook.

“They discovered these people were fantastic and had the skills — just a really different set of background characteristics than they were expecting,” she adds.

Other investors in the seed include Skip Capital, Angel Academe, Giant Leap and Impact Generation Partners, plus some unnamed angels. Prior investors include the entity Applied was originally spun out of (Behavioural Insights Team, a “social purpose company” jointly owned by the UK government, innovation charity Nesta, and its own employees), as well as gender advocate and businesswoman Carol Schwartz, and Wharton Professor Adam Grant.

Applied’s approach to recruitment employs plenty of algorithms — for scoring candidates (its process involves chunking up applications and getting candidates to answer questions that reflect “what a day in the job actually looks like”), for anonymizing applications to further strip away bias risks, and for presenting the numbered candidates in a random order.
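To make that concrete, here is a minimal, purely illustrative sketch in Python (not Applied's actual code; the data structures and field names are assumptions) of what anonymizing applications, chunking them by question and presenting them in random order can look like:

```python
# Illustrative sketch only: anonymize applications, then present one question's
# answers from all candidates, in random order, for blind scoring.
import random
from dataclasses import dataclass
from typing import List


@dataclass
class Application:
    candidate_id: int     # numbered ID stands in for the candidate's name
    answers: List[str]    # answers to work-sample ("day in the job") questions


def anonymize(raw_applications):
    """Strip identifying fields, keeping only numbered IDs and answers."""
    return [
        Application(candidate_id=i, answers=raw["answers"])
        for i, raw in enumerate(raw_applications, start=1)
    ]


def review_batch(applications, question_index):
    """One question's answers from every candidate, shuffled, so reviewers
    score answers side by side rather than reading whole CVs."""
    batch = [(app.candidate_id, app.answers[question_index]) for app in applications]
    random.shuffle(batch)
    return batch


# Hypothetical applicants; reviewers would score question 0 blind.
raw = [
    {"name": "Alice", "answers": ["Answer A1", "Answer A2"]},
    {"name": "Bob",   "answers": ["Answer B1", "Answer B2"]},
    {"name": "Chen",  "answers": ["Answer C1", "Answer C2"]},
]
for candidate_id, answer in review_batch(anonymize(raw), question_index=0):
    print(f"Candidate #{candidate_id}: {answer}")
```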

But it does not involve any AI-based matching. If you want to make hiring fairer, AI doesn’t look like a great fit. Last week, for example, Reuters reported how ecommerce giant Amazon built a machine learning-based recruitment tool in 2014 and later scrapped it, after it failed to rate candidates in a gender-neutral way — apparently reflecting wider industry biases.

“We’re really clear that we don’t do AI,” says Glazebrook. “We don’t fall into the traps that [companies like] Amazon did. Because it’s not that we’re parsing existing data-sets and saying ‘this is what you hired for last time so we’ll match candidates to that’. That’s exactly where you get this problem of replication of bias. So what we’ve done instead is say ‘actually what we should do is change what you see and how you see it so that you’re only focusing on the things that really matter’.

“So that levels the playing field for all candidates. All candidates are assessed on the basis of their skill, not whether or not they fit the historic profile of people you’ve previously hired. We avoid a lot of those pitfalls because we’re not doing AI-based or algorithmic hiring — we’re doing algorithms that reshape the information you see, not the prediction that you have to arrive at.”

In practice this means Applied takes over the entire recruitment process: writing the job spec itself — to remove things like gendered language which could introduce bias — and slicing and dicing the application process so that candidates can be scored and compared, with role-specific skills tests filling in any missing data.

Its approach can be thought of as entirely deconstructing the CV — to not just remove extraneous details and bits of information which can bias the process (such as names, education institutions attended, hobbies etc) but also to actively harvest data on the skills being sought, with employers using the platform to set tests to measure capacities and capabilities they’re after.

“We manage the hiring process right from the design of an inclusive job description, right through to the point of making a hiring decision and all of the selection that happens beneath that,” says Glazebrook. “So we use over 30 behavioural science nudges throughout the process to try and improve conversion and inclusivity — so that includes everything from removal of gendered language in job descriptions to anonymization of applications to testing candidates on job preview-based assessments, rather than based on their CVs.”

“We also help people to run more evidence-based structured interviews and then make the hiring decision,” she adds. “From a behavioral science standpoint I guess our USP is we’ve redesigned the shortlisting process.”

The platform also gives jobseekers greater visibility into the assessment process by providing them with feedback — “so candidates get to see where their strengths and weaknesses were” — so it’s not simply creating a new black-box recruitment process that keeps people in the dark about the assessments being made about them. That’s important from an algorithmic accountability point of view, even without any AI involved, because vanilla algorithms can still add up to dumb decisions.

From the outside looking in, Applied’s approach might sound highly manual and high maintenance, given how involved the platform necessarily is in each and every hire. But Glazebrook says it has in fact “all been baked into the tech”: the platform takes the strain of the restructuring by automating the hand-holding involved in debiasing job ads and judgements, letting employers self-serve their way through a reconstructed recruitment process.

“From the job description design, for example, there are eight different characteristics that are automatically picked out, so it’s all self-serve stuff,” explains Glazebrook, noting that the platform will do things like automatically flag words to watch out for in job descriptions or the length of the job ad itself.
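For illustration only, here is a tiny Python sketch of that kind of automated check (the word lists and length threshold below are made-up assumptions, not Applied's actual rules or its eight characteristics):

```python
# Illustrative sketch: flag potentially gendered wording and overly long job ads.
# The lexicons and threshold are small, hypothetical samples, not a validated list.
import re

MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}
MAX_WORDS = 600   # hypothetical length threshold for a job ad


def review_job_ad(text):
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "masculine_coded": sorted(set(words) & MASCULINE_CODED),
        "feminine_coded": sorted(set(words) & FEMININE_CODED),
        "word_count": len(words),
        "too_long": len(words) > MAX_WORDS,
    }


print(review_job_ad("We want a competitive, dominant rockstar to join our collaborative team."))
# -> flags 'competitive', 'dominant', 'rockstar' as masculine-coded; 11 words, not too long
```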

“All with that totally automated. And client self-serve as well, so they use a library of questions — saying I’m looking for this particular skill-set and we can say well if you look through the library we’ll find you some questions which have worked well for testing that skill set before.”

“They do all of the assessment themselves, through the platform, so it’s basically like saying rather than having your recruiting team sifting through paper forms of CVs, we have them online scoring candidates through this redesigned process,” she adds.
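As a rough illustration of that kind of blind, chunked scoring (a hypothetical sketch, not the platform's actual scoring logic), independent reviewer scores given per question might be combined into a ranking like this:

```python
# Hypothetical sketch: combine independent reviewer scores, given per question,
# into an overall ranking per (anonymous, numbered) candidate.
from collections import defaultdict
from statistics import mean

# (candidate_id, question, reviewer, score) tuples from a blind review round.
scores = [
    (1, "q1", "r1", 4), (1, "q1", "r2", 5), (1, "q2", "r1", 3),
    (2, "q1", "r1", 2), (2, "q1", "r2", 3), (2, "q2", "r1", 4),
]


def rank_candidates(scores):
    # Average reviewers within each question, then average questions per candidate.
    per_question = defaultdict(list)
    for candidate, question, _reviewer, score in scores:
        per_question[(candidate, question)].append(score)

    per_candidate = defaultdict(list)
    for (candidate, _question), vals in per_question.items():
        per_candidate[candidate].append(mean(vals))

    return sorted(((mean(v), c) for c, v in per_candidate.items()), reverse=True)


for overall, candidate in rank_candidates(scores):
    print(f"Candidate #{candidate}: {overall:.2f}")
```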

Employers themselves need to commit to a new way of doing things, of course. Though Applied’s claim is that ultimately a fairer approach also saves time, as well as delivering great hires.

“In many ways, one of the things that we’ve discovered through many customers is that it’s actually saved them loads of time because the shortlisting process is devised in a way that it previously hasn’t been and more importantly they have data and reporting that they’ve never previously had,” she says. “So they now know, through the platform, which of the seven places that they placed the job actually found them the highest quality candidates and also found people who were from more diverse backgrounds because we could automatically pull the data.”

Applied ran its own comparative study of its reshaped process vs a traditional sifting of CVs and Glazebrook says it discovered “statistically significant differences” in the resulting candidate choices — claiming that over half of the pool of 700+ candidates “wouldn’t have got the job if we’d been looking at their CVs”.

Looking at the differences between the two sets of choices, the study also found statistically significant differences “particularly in educational and economic background” — “so we were diversifying the people we were hiring by those metrics”.

“We also saw directional evidence around improvements in diversity on disability status and ethnicity,” she adds. “And some interesting stuff around gender as well.”

Applied wants to go further on the proof front, and Glazebrook says it is now automatically collecting performance data while candidates are on the job — “so that we can do an even better job of proving here is a person that you hired and you did a really good job of identifying the skill-sets that they are proving they have when they’re on the job”.

She says it will be feeding this intel back into the platform — “to build a better feedback loop the next time you’re looking to hire that particular role”.

“At the moment, what is astonishing, is that most HR departments 1) have terrible data anyway to answer these important questions, and 2) to the extent they have them they don’t pair those data sets in a way that allows them to prove — so they don’t know ‘did we hire them because of X or Y’ and ‘did that help us to actually replicate what was working well and jettison what wasn’t’,” she adds.

The seed funding will go on further developing these sorts of data science predictions, and also on updates to Applied’s gendered language tool and inclusive job description tool — as well as on sales and marketing to generally grow the business.

Commenting on the funding in a statement, Nick Crocker, general partner at Blackbird Ventures said: “Our mission is to find the most ambitious founders, and support them through every stage of their company journey. Kate and the team blew us away with the depth of their insight, the thoughtfulness of their product, and a mission that we’re obsessed with.”

In another supporting statement, Owain Service, CEO of BI Ventures, added: “Applied uses the latest behavioural science research to help companies find the best talent. We ourselves have recruited over 130 people through the platform. This investment represents an exciting next step to supporting more organisations to remove bias from their recruitment processes, in exactly the same way that we do.”

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes not just a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker, and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.
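For a rough sense of what runtime bias detection can involve (this is an illustrative sketch, not IBM's product or its API), one common approach is to track outcomes per group as decisions stream in and alert when a fairness metric such as the disparate impact ratio falls below a chosen threshold:

```python
# Illustrative sketch of runtime bias monitoring, not IBM's actual service:
# track positive-outcome rates per group and alert when the disparate impact
# ratio (min rate / max rate) drops below a chosen threshold.
from collections import defaultdict


class RuntimeBiasMonitor:
    def __init__(self, threshold=0.8):   # 0.8 mirrors the common "four-fifths" rule
        self.threshold = threshold
        self.stats = defaultdict(lambda: {"positive": 0, "total": 0})

    def record(self, group, positive_decision):
        s = self.stats[group]
        s["total"] += 1
        s["positive"] += int(positive_decision)

    def disparate_impact(self):
        rates = {g: s["positive"] / s["total"] for g, s in self.stats.items() if s["total"]}
        if len(rates) < 2 or max(rates.values()) == 0:
            return None
        return min(rates.values()) / max(rates.values())

    def check(self):
        ratio = self.disparate_impact()
        if ratio is not None and ratio < self.threshold:
            print(f"Potentially unfair outcomes detected: disparate impact ratio {ratio:.2f}")
        return ratio


# Hypothetical decisions streaming in for two groups.
monitor = RuntimeBiasMonitor()
for group, approved in [("A", True), ("A", True), ("A", False),
                        ("B", False), ("B", False), ("B", True)]:
    monitor.record(group, approved)
monitor.check()   # ratio is about 0.50, below the 0.8 threshold, so it alerts
```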

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, IBM also intends for its own professional services staff to work with businesses using the new software service. So it will be selling AI, a ‘fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

How (and how not) to fix AI

While artificial intelligence was once heralded as the key to unlocking a new era of economic prosperity, policymakers today face a wave of calls to ensure AI is fair, ethical and safe. New York City Mayor de Blasio recently announced the formation of the nation’s first task force to monitor and assess the use of algorithms. Days later, the European Union enacted sweeping new data protection rules that require companies be able to explain to consumers any automated decisions. And high-profile critics, like Elon Musk, have called on policymakers to do more to regulate AI.

Unfortunately, the two most popular ideas — requiring companies to disclose the source code to their algorithms and explain how they make decisions — would cause more harm than good by regulating the business models and the inner workings of the algorithms of companies using AI, rather than holding these companies accountable for outcomes.

The first idea — “algorithmic transparency” — would require companies to disclose the source code and data used in their AI systems. Beyond its simplicity, this idea lacks any real merits as a wide-scale solution. Many AI systems are too complex to fully understand by looking at source code alone. Some AI systems rely on millions of data points and thousands of lines of code, and decision models can change over time as they encounter new data. It is unrealistic to expect even the most motivated, resource-flush regulators or concerned citizens to be able to spot all potential malfeasance when that system’s developers may be unable to do so either.

Additionally, not all companies have an open-source business model. Requiring them to disclose their source code reduces their incentive to invest in developing new algorithms, because it invites competitors to copy them. Bad actors in China, which is fiercely competing with the United States for AI dominance but routinely flouts intellectual property rights, would likely use transparency requirements to steal source code.

The other idea — “algorithmic explainability” — would require companies to explain to consumers how their algorithms make decisions. The problem with this proposal is that there is often an inescapable trade-off between explainability and accuracy in AI systems. An algorithm’s accuracy typically scales with its complexity, so the more complex an algorithm is, the more difficult it is to explain. While this could change in the future as research into explainable AI matures — DARPA devoted $75 million in 2017 to this problem — for now, requirements for explainability would come at the cost of accuracy. This is enormously dangerous. With autonomous vehicles, for example, is it more important to be able to explain an accident or avoid one? The cases where explanations are more important than accuracy are rare.

Rather than demanding companies reveal their source code or limiting the types of algorithms they can use, policymakers should instead insist on algorithmic accountability — the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e. the party responsible for deploying the algorithm) can verify it acts as intended, and identify and rectify harmful outcomes should they occur.

A policy framework built around algorithmic accountability would have several important benefits. First, it would make operators responsible for any harms their algorithms might cause, not developers. Not only do operators have the most influence over how algorithms impact society, but they already have to comply with a variety of laws designed to make sure their decisions don’t cause harm. For example, employers must comply with anti-discrimination laws in hiring, regardless of whether they use algorithms to make those decisions.

Second, holding operators accountable for outcomes rather than the inner workings of algorithms would free them to focus on the best methods to ensure their algorithms do not cause harm, such as confidence measures, impact assessments or procedural regularity, where appropriate. For example, a university could conduct an impact assessment before deploying an AI system designed to predict which students are likely to drop out to ensure it is effective and equitable. Unlike transparency or explainability requirements, this would enable the university to effectively identify any potential flaws without prohibiting the use of complex, proprietary algorithms.

This is not to say that transparency and explanations do not have their place. Transparency requirements, for example, make sense for risk-assessment algorithms in the criminal justice system. After all, there is a long-standing public interest in requiring the judicial system be exposed to the highest degree of scrutiny possible, even if this transparency may not shed much light on how advanced machine-learning systems work.

Similarly, laws like the Equal Credit Opportunity Act require companies to provide consumers an adequate explanation for denying them credit. Consumers will still have a right to these explanations regardless of whether a company uses AI to make its decisions.

The debate about how to make AI safe has ignored the need for a nuanced, targeted approach to regulation, treating algorithmic transparency and explainability like silver bullets without considering their many downsides. There is nothing wrong with wanting to mitigate the potential harms AI poses, but the oversimplified, overbroad solutions put forth so far would be largely ineffective and likely do more harm than good. Algorithmic accountability offers a better path toward ensuring organizations use AI responsibly so that it can truly be a boon to society.