ServiceNow to acquire FriendlyData for its natural language search technology

Enterprise cloud service management company ServiceNow announced today that it will acquire FriendlyData and integrate the startup’s natural language search technology into apps on its Now platform. Founded in 2016, FriendlyData’s natural language query (NLQ) technology enables enterprise customers to build search tools that allow users to ask technical questions even if they don’t know the right jargon.

FriendlyData’s NLQ tech figures out what users are trying to ask, then answers with text responses or easy-to-understand data visualizations. ServiceNow said it will integrate FriendlyData’s tech into the Now Platform, which includes apps for IT, human resources, security operations, and customer service management. It will also be available in products for developers and ServiceNow’s partners.
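To illustrate the core idea behind NLQ — mapping a plain-English question onto a structured query even when the user doesn’t know the schema’s jargon — here is a deliberately tiny sketch. The synonym table, column names, and table name are all invented for illustration; FriendlyData’s real system is far more sophisticated.

```python
# Toy NLQ: match everyday words in a question against known synonyms
# for schema columns, then assemble a structured query. All names
# here are hypothetical -- this only shows the shape of the idea.
SYNONYMS = {
    "staff": "employees", "people": "employees",
    "earnings": "revenue", "sales": "revenue",
}

def to_sql(question, table="company_metrics"):
    """Translate a plain-English question into a simple SELECT."""
    words = question.lower().rstrip("?").split()
    columns = sorted({SYNONYMS[w] for w in words if w in SYNONYMS})
    return f"SELECT {', '.join(columns)} FROM {table}"

print(to_sql("How many staff and what earnings?"))
```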

In a statement, Pat Casey, senior vice president of development and operations at ServiceNow, said “ServiceNow is bringing NLQ capabilities to the Now Platform, enabling companies to ask technical questions in plain English and receive direct answers. With this technical enhancement, our goal is to allow anyone to easily make data-driven decisions, increasing productivity and driving businesses forward faster.”

The acquisition of FriendlyData is the latest in ServiceNow’s initiative to reduce the friction of support requests within organizations with AI-based tools. For example, in May it launched a chatbot-building tool called Virtual Agent, which enables companies to create custom chatbots for services like Slack or Microsoft Teams to automatically handle routine inquiries such as equipment requests. It also announced the acquisition of Parlo, a chatbot startup, around the same time.

Google will not bid for the Pentagon’s $10B cloud computing contract, citing its “AI Principles”

Google has dropped out of the running for JEDI, the massive Defense Department cloud computing contract potentially worth $10 billion. In a statement to Bloomberg, Google said that it decided not to participate in the bidding process, which ends this week, because the contract may not align with the company’s principles for how artificial intelligence should be used.

In a statement to Bloomberg, a Google spokesperson said “We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles. And second, we determined that there were portions of the contract that were out of scope with our current government certifications,” adding that Google is still “working to support the U.S. government with our cloud in many ways.”

Officially called Joint Enterprise Defense Infrastructure, bidding for the initiative’s contract began two months ago and closes this week. JEDI’s lead contender is widely considered to be Amazon, because it set up the CIA’s private cloud, but Oracle, Microsoft, and IBM are also expected to be in the running.

The winner of the contract, which could last for up to 10 years, is expected to be announced by the end of the year. The project is meant to accelerate the Defense Department’s adoption of cloud computing and services. Only one provider will be chosen, a controversial decision that the Pentagon defended by telling Congress that the pace of handling task orders in a multiple-award contract “could prevent DOD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable.”

Google also addressed the controversy over a single provider, telling Bloomberg that “had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload.”

Google’s decision not to bid for JEDI comes four months after it reportedly decided not to renew its contract with the Pentagon for Project Maven, which involved working with the military to analyze drone footage, including images taken in conflict zones. Thousands of Google employees signed a petition against its work on Project Maven because they said it meant the company was directly involved in warfare. Afterward, Google came up with its “AI Principles,” a set of guidelines for how it will use its AI technology.

It is worth noting, however, that Google is still under employee fire because it is reportedly building a search engine for China that will comply with the government’s censorship laws, eight years after exiting the country for reasons including its limits on free speech.

Facebook adds A.I. to Marketplace for categorization, price suggestions and soon, visual search

Facebook is celebrating the two-year anniversary of its Craigslist competitor, Facebook Marketplace, with the launch of new features powered by A.I. Specifically, the social network says it’s adding price range suggestions and auto-categorization features to make selling easier, and it says it’s testing camera features that would use A.I. to make product recommendations.

Automating price suggestions and categorization, however, is not unique to Facebook – eBay earlier this year introduced a feature in its mobile app that will fill out your listings for you, using technologies like structured data and predictive analytics. Letgo can also make generalized price suggestions.

In Facebook’s case, the company says it will be able to categorize items based on the photo and description, then suggest a price range (e.g. $50-$75) for sellers to choose from. According to the company, sellers are less likely to abandon their listings when this autosuggest feature is enabled. (9% of sellers abandoned listings before the feature was enabled, it noted.)

Facebook also highlighted some of the other ways it uses A.I. – to automatically enhance the lighting of images uploaded by sellers, for instance, and for detecting and removing inappropriate content.

And while not A.I.-based, the company additionally noted its new buyer and seller ratings, where people can rate their experience and leave feedback.

Further down the road, things may get more interesting. Facebook lightly teases its plans to turn Marketplace into more of a discovery tool for finding things you want to buy using your smartphone camera. For example, the company writes in a blog post, you could point your camera at something you like – such as your friend’s cool headphones – snap a photo, and then Marketplace would search across its listings for similar items.

This sort of visual search tech is also common among competitors, including eBay again, plus Pinterest and even Google. Facebook, then, is playing a bit of catch-up for the time being.

Further down the road, Facebook’s plans for Marketplace put it more directly up against Pinterest. It says it envisions using A.I. in the future to help people with home design – like, by uploading a photo of their living room, then getting suggestions about furniture to buy. Home design and inspiration, of course, is the bread-and-butter of sites like Pinterest, Houzz and others, including newcomer Hutch.

That said, even if it’s lacking in some features today, Facebook Marketplace is not one to be counted out. Thanks to Facebook’s size and scale (and the annoying way it continuously red-badged the Marketplace icon, forcing users to keep tapping it), the company says its buy-and-sell platform has grown to be used by more than one out of every three people in the U.S. on a monthly basis.

Hopper raises $100M more for its AI-based travel app, now valued at $780M

Hopper — a mobile-only travel booking app cofounded by a former Expedia executive in Montreal, Canada that uses artificial intelligence to help you search for and book hotels and flights — has gained a little elevation of its own today. The startup has raised another $100 million in funding, money that it plans to use to build out its AI algorithms and expand deeper into international markets. Hopper has now passed 30 million installs and 75 million trips planned, and says it’s on track to make nearly $1 billion in sales this year.

Sources very close to the company say Hopper’s valuation with this round is also flying: it’s now close to 1 billion Canadian dollars ($780 million in US dollars). As a point of comparison, Hopper was valued at US$300 million in its last round, in late 2016, and it has raised C$184 million (US$235 million) to date. Throughout that time, it’s been a consistent presence in the top-10 travel apps in the US, according to stats from App Annie.

Frederic Lalonde, CEO and co-founder of Hopper, said in an interview that the company is not profitable at the moment because it reinvests all its returns in fuelling its growth.

This latest growth round, a Series D, was led by previous investor Omers, along with other repeat backers Caisse de dépôt et placement du Québec (CDPQ), Accomplice, Brightspark Ventures, Investissement Québec, and BDC Capital’s IT Venture Fund. It also included a notable new investor, Citi Ventures.

There is a sea of travel apps in the market today that help people search for and book trips, from old standbys like Expedia/Travelocity and Booking.com, through to newer upstarts like Airbnb and smaller startups that have been snapped up by bigger players (such as Hipmunk, now owned by SAP/Concur, and Kayak, acquired by Booking.com/Priceline for $1.8 billion).

Hopper has carved out a distinct place for itself by building an AI framework that not only helps people find good deals, but also discover trips they may not have known they wanted to take.

AI is used to build profiles of users and their interests, which Hopper starts assembling the first time someone downloads and opens the app. From there, Hopper asks to send push notifications, and when users respond to those, their responses help shape their profiles further.

“We’re able to capture our users’ intent in an unprecedented way in the industry because users start watching their trips four to five months in advance of departure,” said Lalonde in an emailed interview (and pictured here with his cofounder Joost Ouwerkerk). “During that period, we build a relationship through an ongoing conversation about their trip, which primarily takes place via push notifications. User intent is key to our ability to implement further algorithms based on AI.”

Added to this are some classic AI methods: Hopper, Lalonde said, learns more about its users by building lookalike profiles of anonymised data of people who have similar preferences to you. “It’s similar to how Netflix will recommend a show to you based on what other viewers like you are watching,” he said. “What once was done by a human travel agent is now done through a machine that gets smarter each time an action is (or is not) taken.”
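The lookalike approach Lalonde describes can be sketched with a simple similarity measure over users’ past interests — here Jaccard overlap between sets of watched destinations, with suggestions drawn from the closest other user. This is purely illustrative; Hopper has not published its actual algorithms, and all the data here is invented.

```python
# "Lookalike" recommendation sketch: find the user whose watched
# destinations most overlap yours, then suggest their destinations
# that you haven't seen. (Illustrative only -- not Hopper's method.)

def jaccard(a, b):
    """Overlap between two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def recommend(target, profiles):
    """Suggest destinations from the most similar other user."""
    others = {u: trips for u, trips in profiles.items() if u != target}
    lookalike = max(others, key=lambda u: jaccard(profiles[target], others[u]))
    return sorted(others[lookalike] - profiles[target])

profiles = {
    "you":   {"tokyo", "lisbon", "mexico city"},
    "user1": {"tokyo", "lisbon", "seoul"},
    "user2": {"paris", "berlin"},
}
print(recommend("you", profiles))  # suggestions from the closest lookalike
```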

AI, as you probably know, is a term that is thrown around a lot today, but it has a very direct relationship to how Hopper has grown its business. Lalonde said that 25 percent of Hopper’s bookings are the result of AI — in other words, users are booking trips they didn’t explicitly search for but the app knew to suggest. “Conversion rates on AI-based recommendation notifications are 2.6 times higher than ones for which the users explicitly searched,” Lalonde added.

Hopper is designated an OTA — not a metasearch provider or aggregator — so the booking takes place right in the app, rather than passing you on to another site. This means that the company makes money via commissions on those bookings. Lalonde said that 52 percent of its airline bookings are for international, long-haul flights — which translates to more being spent per booking than for domestic flights, and typically not last-minute bookings. “We’re a very complementary channel for airline and hotel partners given our users are shopping far in advance on mobile so we aren’t competing with their websites,” he said.

Going forward, Hopper will likely integrate more forms of travel that fit the profile of its user base. It has already started to do that with airlines, adding 47 low-cost carriers in Europe in the last year, which the company said has boosted sales in the region by 154 percent compared to a year ago.

Still, Lalonde would not comment specifically on whether the company might ever try to add Airbnb or any other private-home platform to give people that option.

“Nearly 70% of Hopper bookers are Millennials so alternate accommodations is something we may be interested in exploring,” he said. “However, we’re currently entirely focused on scaling our hotel markets and supply since accommodations is still a very new category for us.” I think that this is something to watch, though: the more a company like Hopper intersects with a company like Airbnb in terms of user base and the kinds of services it provides, the more likely we are to see the two work together, or potentially see one gobble up the other in an ongoing consolidation effort. (I’ll also point out that Airbnb — which is valued now at over $31 billion and is on track for an IPO — is looking for more ways to connect to users beyond simply when they are looking for a place to stay.)

Nor, it seems, does Hopper have plans to ever expand to the old-school web.

“Our core strengths are due to the fact that we’re mobile-only so we have no plans to offer a web product,” Lalonde said. Indeed, as laptop usage has declined, smartphones have only grown in their ubiquity. “As the world continues to shift from the web to mobile, and in-app in particular — estimates place online mobile minutes anywhere between 70-90 percent worldwide; 92 percent of all mobile time is spent in-app — we believe Hopper is in a unique position to become the go-to way to book travel,” he added.

Despite all that growth, we’re still in a relatively early and small stage of the market. Travel is currently a $1.3 trillion industry; online accounts for $662 billion of that, and mobile is a $264 billion part of it. Hopper’s investors are betting that the third of these will eventually be the dominant platform for the wider business, and that Hopper, with the early groundwork it’s laid, has a shot at being a very big player within that.

“Mobile travel is growing 20 percent year over year. By continuing to innovate on mobile and ultimately change the way consumers plan and book travel, we believe Hopper has a tremendous opportunity globally,” said Damien Steel, Managing Partner at OMERS Ventures, in a statement. “We’re proud to continue supporting Hopper as the company further establishes itself as the leader in mobile travel booking.”

5 takeaways on the state of AI from Disrupt SF

The promise of artificial intelligence is immense, but the roadmap to achieving those goals still remains unclear. Onstage at TechCrunch Disrupt SF, some of AI’s leading minds shared their thoughts on current competition in the market, how to ensure algorithms don’t perpetuate racism and the future of human-machine interaction.

Here are five takeaways on the state of AI from Disrupt SF 2018:

1. U.S. companies will face many obstacles if they look to China for AI expansion

Sinnovation CEO Kai-Fu Lee (Photo: TechCrunch/Devin Coldewey)

The meteoric rise in China’s focus on AI has been well-documented and has become impossible to ignore these days. With mega companies like Alibaba and Tencent pouring hundreds of millions of dollars into home-grown businesses, American companies are finding less and less room to navigate and expand in China. AI investor and Sinnovation CEO Kai-Fu Lee described China as living in a “parallel universe” to the U.S. when it comes to AI development.

“We should think of it as electricity,” explained Lee, who led Google’s entrance into China. “Thomas Edison and the AI deep learning inventors – who were American – they invented this stuff and then they generously shared it. Now, China, as the largest marketplace with the largest amount of data, is really using AI to find every way to add value to traditional businesses, to internet, to all kinds of spaces.”

“The Chinese entrepreneurial ecosystem is huge so today the most valuable AI companies in computer vision, speech recognition, drones are all Chinese companies.”

2. Bias in AI is a new face on an old problem

SAN FRANCISCO, CA – SEPTEMBER 07: (L-R) UC Berkeley Professor Ken Goldberg, Google AI Research Scientist Timnit Gebru, UCOT Founder and CEO Chris Ategeka, and moderator Devin Coldewey speak onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018 in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch)

AI promises to increase human productivity and efficiency by taking the grunt work out of many processes. But the data used to train many AI systems often falls victim to the same biases of humans and, if unchecked, can further marginalize communities caught up in systemic issues like income disparity and racism.

“People in lower socio-economic statuses are under more surveillance and go through algorithms more,” said Google AI’s Timnit Gebru. “So if they apply for a job that’s lower status they are likely to go through automated tools. We’re right now in a stage where these algorithms are being used in different places and we’re not even checking if they’re breaking existing laws like the Equal Opportunity Act.”

A potential solution to prevent the spread of toxic algorithms was outlined by UC Berkeley’s Ken Goldberg who cited the concept of ensemble theory, which involves multiple algorithms with various classifiers working together to produce a single result.
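The ensemble idea Goldberg cites — several classifiers voting, with the combined answer taken from the majority — can be sketched in a few lines. The toy classifiers and labels below are invented for illustration; the point is only that a single skewed model can be outvoted by the rest.

```python
# Majority-vote ensemble sketch: each classifier votes on a label,
# and the ensemble's answer is the most common vote. Note how the
# deliberately biased classifier is outvoted on the first input.
from collections import Counter

def majority_vote(classifiers, x):
    """Return the label predicted by most classifiers for input x."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers labeling a number "high" or "low";
# clf_biased always answers "high" regardless of the input.
clf_a = lambda x: "high" if x > 10 else "low"
clf_b = lambda x: "high" if x > 12 else "low"
clf_biased = lambda x: "high"

ensemble = [clf_a, clf_b, clf_biased]
print(majority_vote(ensemble, 5))   # biased vote is outvoted -> "low"
print(majority_vote(ensemble, 20))  # all agree -> "high"
```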

But how do we know if the solution to inadequate tech is more tech? Goldberg says this is where having individuals from multiple backgrounds, both in and outside the world of AI, is vital to developing just algorithms. “It’s very relevant to think about both machine intelligence and human intelligence,” explained Goldberg. “Having people with different viewpoints is extremely valuable and I think that’s starting to be recognized by people in business… it’s not because of PR, it’s actually because it will give you better decisions if you get people with different cognitive, diverse viewpoints.”

3. The future of autonomous travel will rely on humans and machines working together

Uber CEO Dara Khosrowshahi (Photo: TechCrunch/Devin Coldewey)

Transportation companies often paint a flowery picture of the near future where mobility will become so automated that human intervention will be detrimental to the process.

That’s not the case, according to Uber CEO Dara Khosrowshahi. In an era that’s racing to put humans on the sidelines, Khosrowshahi says humans and machines working hand in hand are the real path forward.

“People and computers actually work better than each of them work on a stand-alone basis and we are having the capability of bringing in autonomous technology, third-party technology, Lime, our own product all together to create a hybrid,” said Khosrowshahi.

Khosrowshahi ultimately envisions the future of Uber being made up of engineers monitoring routes that present the least amount of danger for riders and selecting optimal autonomous routes for passengers. The combination of these two systems will be vital in the maturation of autonomous travel, while also keeping passengers safe in the process.

4. There’s no agreed definition of what makes an algorithm “fair”

Human Rights Data Analysis Group Lead Statistician Kristian Lum onstage during Day 3 of TechCrunch Disrupt SF 2018 (Photo by Kimberly White/Getty Images for TechCrunch)

Last July ProPublica released a report highlighting how machine learning can develop biases of its own. The investigation examined an AI system used in Fort Lauderdale, Fla., that falsely flagged black defendants as future criminals at twice the rate of white defendants. These landmark findings set off a wave of conversation about the ingredients needed to build a fair algorithm.

One year later AI experts still don’t have the recipe fully developed, but many agree the best path forward is a contextual approach that combines mathematics with an understanding of the humans an algorithm affects.

“Unfortunately there is not a universally agreed upon definition of what fairness looks like,” said Kristian Lum, lead statistician at the Human Rights Data Analysis Group. “How you slice and dice the data can determine whether you ultimately decide the algorithm is unfair.”

Lum goes on to explain that research in the past few years has revolved around exploring the mathematical definition of fairness, but this approach is often incompatible with the moral outlook on AI.

“What makes an algorithm fair is highly contextually dependent, and it’s going to depend so much on the training data that’s going into it,” said Lum. “You’re going to have to understand a lot about the problem, you’re going to have to understand a lot about the data, and even when that happens there will still be disagreements on the mathematical definitions of fairness.”

5. AI and Zero Trust are a “marriage made in heaven” and will be key in the evolution of cybersecurity

(L-R) Duo VP of Security Mike Hanley, Okta Executive Director of Cybersecurity Marc Rogers, and moderator Mike Butcher onstage during Day 2 of TechCrunch Disrupt SF 2018 (Photo by Kimberly White/Getty Images for TechCrunch)

If previous elections have taught us anything, it’s that security systems are in dire need of improvement to protect personal data, financial assets and the foundation of democracy itself. Facebook’s ex-chief security officer Alex Stamos shared a grim outlook on the current state of politics and cybersecurity at Disrupt SF, stating that the security infrastructure for the upcoming midterm elections isn’t much better than it was in 2016.

So how effective will AI be in improving these systems? Marc Rogers of Okta and Mike Hanley of Duo Security believe the combination of AI and a security model called Zero Trust, which cuts off all users from accessing a system until they can prove themselves, is the key to developing security systems that actively fight off breaches without the assistance of humans.

“AI and Zero Trust are a marriage made in heaven because the whole idea behind Zero Trust is you design policies that sit inside your network,” said Rogers. “AI is great at doing human decisions much faster than a human ever can and I have great hope that as Zero Trust evolves, we’re going to see AI baked into the new Zero Trust platforms.”
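The Zero Trust model described above — default-deny, with every request evaluated against explicit policies before any access is granted — reduces to a small policy-engine loop. The request fields and the policy below are made up for illustration; real Zero Trust platforms evaluate far richer signals.

```python
# Zero Trust in miniature: access is denied unless some explicit
# policy matches the request -- there is no implicit trust based on
# being "inside the network". Fields here are hypothetical examples.

def allow(request, policies):
    """Default-deny: grant access only if a policy approves it."""
    return any(policy(request) for policy in policies)

policies = [
    # Example policy: verified users on trusted devices may read the wiki.
    lambda r: bool(r.get("user_verified") and r.get("device_trusted")
                   and r.get("resource") == "wiki"),
]

print(allow({"user_verified": True, "device_trusted": True,
             "resource": "wiki"}, policies))   # granted
print(allow({"user_verified": True, "device_trusted": False,
             "resource": "wiki"}, policies))   # denied: untrusted device
```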

By handing much of the heavy lifting to machines, cybersecurity professionals will also have the opportunity to solve another pressing issue: being able to staff qualified security experts to manage these systems.

“There’s also a substantial labor shortage of qualified security professionals that can actually do the work needed to be done,” said Hanley. “That creates a tremendous opportunity for security vendors to figure out what are those jobs that need to be done, and there are many unsolved challenges in that space. Policy engines are one of the more interesting ones.”

Microsoft’s machine learning tools for developers get smarter

It’s a big day for Microsoft, which announced a slew of updates across virtually all of its product lines at its Ignite conference today. Unsurprisingly, one theme this year is artificial intelligence and machine learning. Microsoft is launching new tools to bring its Cortana assistant to the enterprise, but there are also a number of other developer-centric updates and products launching today.

One of those is an update to the Azure Machine Learning services, the company’s platform for letting anyone build and train machine learning models with a focus on prediction. With today’s update, this platform is getting a new tool that automates much of the time-consuming selection, testing and tweaking necessary to build a solid model. Like many of Microsoft’s AI initiatives, the idea here is to allow any developer to build and use these models without having to delve into the depths of TensorFlow, PyTorch or other AI frameworks.
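At its core, the automation described above is a search loop: try candidate model configurations, score each on held-out data, and keep the best. The toy version below searches over thresholds of a trivial classifier; Azure’s actual search is far more sophisticated, and this sketch only shows the shape of the process.

```python
# Automated model selection sketch: evaluate each candidate
# configuration on a validation set and return the best scorer.
# (A toy stand-in for what services like Azure ML automate.)

def accuracy(threshold, data):
    """Score a threshold classifier: predict True when x >= threshold."""
    return sum((x >= threshold) == label for x, label in data) / len(data)

def auto_select(candidates, validation):
    """Return the candidate threshold with the best validation accuracy."""
    return max(candidates, key=lambda t: accuracy(t, validation))

# Held-out (feature, label) pairs; labels flip at x = 3.
validation = [(1, False), (2, False), (3, True), (4, True), (5, True)]
best = auto_select([0, 1, 2, 3, 4, 5], validation)
print(best)  # the threshold that separates the labels perfectly
```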

In addition to this automation service, Microsoft is also making more hardware-accelerated models for FPGAs available on Azure Machine Learning, as well as a Python SDK that will make the service more accessible from a number of popular IDEs and notebooks.

Azure Cognitive Services, which plays home to most of Microsoft’s pre-built and customizable machine learning APIs, is also getting a few updates. The Cognitive Services speech service for speech recognition and translation is now generally available, for example, and Microsoft argues that the voices its deep learning-based speech synthesis system generates are now nearly indistinguishable from recordings of real people. The same, of course, is true of Google’s and AWS’s speech synthesis engines, so we’ll have to hear them ourselves to judge how true to life these voices are.

Microsoft’s Bot Framework SDK is also getting an update that promises to make human and computer interactions more natural. Version 4 of this framework is now generally available, and Microsoft specifically highlights that building a first bot is now easier. Since the hype around bots has died down significantly, though, we’ll have to see if developers still care now that most consumers tend to shy away from interacting with these systems.

Given that the term ‘AI’ doesn’t always have the most positive connotations (and not just because developers prefer a more precise terminology), it’s maybe also no surprise that Microsoft is launching a new program today that aims to “harness the power of artificial intelligence for disaster recovery, helping children, protecting refugees and displaced people, and promoting respect for human rights.” This new AI for Humanitarian Action project is a $40 million, five-year program that is part of the company’s AI for Good Initiative.

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker, and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
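Runtime bias detection of the kind described typically means computing fairness metrics over the decisions a model has made so far. One widely used metric is disparate impact — the ratio of favorable-outcome rates between groups; IBM’s announcement doesn’t specify which metrics its service uses, so the sketch below is illustrative, with invented toy data.

```python
# Disparate impact sketch: compare favorable-outcome rates between an
# unprivileged and a privileged group. A ratio below ~0.8 is a
# conventional red flag (the "four-fifths rule").

def favorable_rate(decisions, group, group_label):
    """Share of favorable (True) decisions for one group."""
    outcomes = [d for d, g in zip(decisions, group) if g == group_label]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, group, unprivileged, privileged):
    """Ratio of the unprivileged group's favorable rate to the privileged one's."""
    return (favorable_rate(decisions, group, unprivileged) /
            favorable_rate(decisions, group, privileged))

# Toy loan decisions: True = approved, with each applicant's group.
decisions = [True, False, False, True, True, True, False, True]
group     = ["a",  "a",   "a",   "a",  "b",  "b",  "b",   "b"]
print(round(disparate_impact(decisions, group, "a", "b"), 2))
```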

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, IBM also intends for its own professional services staff to work with businesses to use the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

Call for smart home devices to bake in privacy safeguards for kids

A new research report has raised concerns about how in-home smart devices such as AI virtual voice assistants, smart appliances, and security and monitoring technologies could be gathering and sharing children’s data.

It calls for new privacy measures to safeguard kids and make sure age appropriate design code is included with home automation technologies.

The report, entitled Home Life Data and Children’s Privacy, is the work of Dr Veronica Barassi of Goldsmiths, University of London, who leads a research project at the university investigating the impact of big data and AI on family life.

Barassi wants the UK’s data protection agency to launch a review of what she terms “home life data” — meaning the information harvested by smart in-home devices that can end up messily mixing adult data with kids’ information — to consider its impact on children’s privacy, and “put this concept at the heart of future debates about children’s data protection”.

“Debates about the privacy implications of AI home assistants and Internet of Things focus a lot on the collection and use of personal data. Yet these debates lack a nuanced understanding of the different data flows that emerge from everyday digital practices and interactions in the home and that include the data of children,” she writes in the report.

“When we think about home automation therefore, we need to recognise that much of the data that is being collected by home automation technologies is not only personal (individual) data but home life data… and we need to critically consider the multiple ways in which children’s data traces become intertwined with adult profiles.”

The report gives examples of multi-user functions and aggregated profiles (such as Amazon’s Household Profiles feature) as constituting a potential risk to children’s privacy.

Another example cited is biometric data — a type of information frequently gathered by in-home ‘smart’ technologies (such as via voice or facial recognition tech). Yet the report asserts that generic privacy policies often do not differentiate between adults’ and children’s biometric data, so that’s another grey area being critically flagged by Barassi.

She’s submitted the report to the ICO in response to its call for evidence and views on an Age Appropriate Design Code it will be drafting. This code is a component of the UK’s new data protection legislation intended to support and supplement rules on the handling of children’s data contained within pan-EU privacy regulation — by providing additional guidance on design standards for online information services that process personal data and are “likely to be accessed by children”.

And devices like smart speakers, intended to be installed in homes where families live, are clearly very likely to be accessed by children.

The report concludes:

There is no acknowledgement so far of the complexity of home life data, and much of the privacy debates seem to be evolving around personal (individual) data. It seems that companies are not recognizing the privacy implications involved in children’s daily interactions with home automation technologies that are not designed for or targeted at them. Yet they make sure to include children in the advertising of their home technologies. Much of the responsibility of protecting children is in the hands of parents, who struggle to navigate Terms and Conditions even after changes such as GDPR [the European Union’s new privacy framework]. It is for this reason that we need to find new measures and solutions to safeguard children and to make sure that age appropriate design code is included within home automation technologies.

“We’ve seen privacy concerns raised about smart toys and AI virtual assistants aimed at children, but so far there has been very little debate about home hubs and smart technologies aimed at adults that children encounter and that collect their personal data,” adds Barassi commenting in a statement.

“The very newness of the home automation environment means we do not know what algorithms are doing with this ‘messy’ data that includes children’s data. Firms currently fail to recognise the privacy implications of children’s daily interactions with home automation technologies that are not designed or targeted at them.

“Despite GDPR, it’s left up to parents to protect their children’s privacy and navigate a confusing array of terms and conditions.”

The report also includes a critical case study of Amazon’s Household Profiles — a feature that allows Amazon services to be shared by members of a family — with Barassi saying she was unable to locate any information on Amazon’s US or UK privacy policies on how the company uses children’s “home life data” (e.g. information that might have been passively recorded about kids via products such as Amazon’s Alexa AI virtual assistant).

“It is clear that the company recognizes that children interact with the virtual assistants or can create their own profiles connected to the adults. Yet I can’t find an exhaustive description or explanation of the ways in which their data is used,” she writes in the report. “I can’t tell at all how this company archives and sells my home life data, and the data of my children.”

Amazon does publish a disclosure on children’s privacy — though it does not specifically state what it does in instances where children’s data might have been passively recorded (i.e. as a result of one of its smart devices operating inside a family home).

Barassi also points out there’s no link to its children’s data privacy policy on the ‘Create your Amazon Household Profile’ page — where the company informs users they can add up to four children to a profile, noting there is only a tiny generic link to its privacy policy at the very bottom of the page.

We asked Amazon to clarify its handling of children’s data but at the time of writing the company had not responded to multiple requests for comment.

The EU’s new GDPR framework does require data processors to take special care in handling children’s data.

In its guidance on this aspect of the regulation the ICO writes: “You should write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.”

The ICO also warns: “The GDPR also states explicitly that specific protection is required where children’s personal data is used for marketing purposes or creating personality or user profiles. So you need to take particular care in these circumstances.”

Microsoft launches new AI applications for customer service and sales

Like virtually every other major tech company, Microsoft is currently on a mission to bring machine learning to all of its applications. It’s no surprise then that it’s also bringing ‘AI’ to its highly profitable Dynamics 365 CRM products. A year ago, the company introduced its first Dynamics 365 AI solutions and today it’s expanding this portfolio with the launch of three new products: Dynamics 365 AI for Sales, Customer Service and Market Insights.

“Many people, when they talk about CRM, or ERP of old, they referred to them as systems of oppression, they captured data,” said Alysa Taylor, Microsoft corporate VP for business applications and industry. “But they didn’t provide any value back to the end user — and what that end user really needs is a system of empowerment, not oppression.”

It’s no secret that few people love their CRM systems (except for maybe a handful of Dreamforce attendees), but ‘system of oppression’ is far from the ideal choice of words here. Yet Taylor is right that early systems often kept data siloed. Unsurprisingly, Microsoft argues that Dynamics 365 does not do that, allowing it to now use all of this data to build machine learning-driven experiences for specific tasks.

Dynamics 365 AI for Sales, unsurprisingly, is meant to help sales teams get deeper insights into their prospects using sentiment analysis. That’s obviously among the most basic of machine learning applications these days, but AI for Sales also helps these salespeople understand what actions they should take next and which prospects to prioritize. It’ll also help managers coach their individual sellers on the actions they should take.
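As a sense of just how basic sentiment analysis can be, here is a toy lexicon-based scorer; production systems such as Microsoft's use trained models rather than word lists, so this is purely illustrative:

```python
# Toy lexicon-based sentiment scorer: the most basic form of the
# sentiment analysis mentioned above. The word lists are illustrative.
POSITIVE = {"great", "interested", "love", "happy", "excited"}
NEGATIVE = {"frustrated", "cancel", "unhappy", "angry", "disappointed"}

def sentiment(text):
    # Score is positive-word count minus negative-word count.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Trained models replace the fixed word lists with learned weights over phrases and context, which is what lets them cope with negation, sarcasm, and domain jargon.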

Similarly, the Customer Service app focuses on using natural language understanding to understand and predict customer service problems and leverage virtual agents to lower costs. Taylor used this part of the announcement to throw some shade at Microsoft’s competitor Salesforce. “Many, many vendors offer this, but they offer it in a way that is very cumbersome for organizations to adopt,” she said. “Again, it requires a large services engagement, Salesforce partners with IBM Watson to be able to deliver on this. We are now out of the box.”

Finally, Dynamics 365 AI for Market Insights does just what the name implies: it provides teams with data about social sentiment, but this, too, goes a bit deeper. “This allows organizations to harness the vast amounts of social sentiment, be able to analyze it, and then take action on how to use these insights to increase brand loyalty, as well as understand what newsworthy events will help provide different brand affinities across an organization,” Taylor said. So the next time you see a company try to gin up some news, maybe it did so based on recommendations from Dynamics 365 AI for Market Insights.

Ultimate.ai nabs $1.3M for a customer service AI focused on non-English markets

For customer service, Ultimate.ai‘s thesis is it’s not humans or AI but humans and AI. The Helsinki- and Berlin-based startup has built an AI-powered suggestion engine that, once trained on clients’ data-sets, is able to provide real-time help to (human) staff dealing with customer queries via chat, email and social channels. So the AI layer is intended to make the humans behind the screens smarter and faster at responding to customer needs — as well as freeing them up from handling basic queries to focus on more complex issues.

AI-fuelled chatbots have fast become a very crowded market, with hundreds of so-called ‘conversational AI’ startups all vying to serve the customer service cause.

Ultimate.ai stands out by merit of having focused on non-English language markets, says co-founder and CEO Reetu Kainulainen. This is a consequence of the business being founded in Finland, whose language belongs to a cluster of Eastern and Northern Eurasian languages far removed from English in sound and grammatical character.

“[We] started with one of the toughest languages in the world,” he tells TechCrunch. “With no available NLP [natural language processing] able to tackle Finnish, we had to build everything in house. To solve the problem, we leveraged state-of-the-art deep neural network technologies.

“Today, our proprietary deep learning algorithms enable us to learn the structure of any language by training on our clients’ customer service data. Core within this is our use of transfer learning, which we use to transfer knowledge between languages and customers, to provide a high-accuracy NLU engine. We grow more accurate the more clients we have and the more agents use our platform.”

Ultimate.ai was founded in November 2016 and launched its first product in summer 2017. It now has more than 25 enterprise clients, including the likes of Zalando, Telia and Finnair. It also touts partnerships with tech giants including SAP, Microsoft, Salesforce and Genesys — integrating with their Contact Center solutions.

“We partner with these players both technically (on client deployments) and commercially (via co-selling). We also list our solution on their Marketplaces,” he notes.

Before taking in this first seed round, the startup had raised a €230k angel round in March 2017, and otherwise relied on revenue generated by the product from the moment it launched.

The $1.3M seed round is co-led by Holtzbrinck Ventures and Maki.vc.

Kainulainen says one of the “key strengths” of Ultimate.ai’s approach to AI for text-based customer service touch-points is rapid set-up when it comes to ingesting a client’s historical customer logs to train the suggestion system.

“Our proprietary clustering algorithms automatically cluster our customer’s historical data (chat, email, knowledge base) to train our neural network. We can go from millions of lines of unstructured data into a trained deep neural network within a day,” he says.
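The clustering algorithms themselves are proprietary, so the following toy example is only a stand-in for the general idea: grouping similar historical queries so they can label training data in bulk. It uses bag-of-words vectors and a greedy cosine-similarity pass, with an illustrative threshold and sample data:

```python
# Toy sketch: greedy clustering of historical support queries by
# bag-of-words cosine similarity. Ultimate.ai's actual algorithms are
# proprietary and operate on learned representations, not raw word counts.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(queries, threshold=0.25):
    clusters = []  # list of (centroid word-bag, member queries)
    for q in queries:
        vec = vectorize(q)
        best = max(clusters, key=lambda c: cosine(vec, c[0]), default=None)
        if best and cosine(vec, best[0]) >= threshold:
            best[0].update(vec)  # fold the query into the centroid's bag of words
            best[1].append(q)
        else:
            clusters.append((vec, [q]))
    return [members for _, members in clusters]

groups = cluster([
    "how do I reset my password",
    "password reset not working",
    "where is my refund",
    "refund still missing",
])
```

A single greedy pass like this is order-dependent and crude; the point is only that unstructured logs can be bucketed into recurring intents, which is what makes rapid training-set construction possible.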

“Alongside this, our state-of-the-art transfer learning algorithms can seed the AI with very limited data — we have deployed Contact Center automation for enterprise clients with as little as 500 lines of historical conversation.”

Ultimate.ai’s proprietary NLP achieves “state-of-the-art accuracy at 98.6%”, he claims.

It can also make use of what he dubs “semi-supervised learning” to further boost accuracy over time as agents use the tool.

“Finally, we leverage transfer learning to apply a single algorithmic model across all clients, scaling our learnings from client-to-client and constantly improving our solution,” he adds.

On the competitive front, it’s going up against the likes of IBM’s Watson AI. However, Kainulainen argues that IBM’s manual tools — which he says “require large onboarding projects and are limited in languages with no self-learning capabilities” — make that sort of manual approach to chatbot building “unsustainable in the long-term”.

He also contends that many rivals are saddled with “lengthy set-up and heavy maintenance requirements” which makes them “extortionately expensive”.

A closer competitor (in terms of approach) that he namechecks is TC Disrupt Battlefield alum Digital Genius. But that rival too has English-language origins — so he flags Ultimate.ai’s proprietary NLP (which he claims can handle any language) as a differentiating factor.

“It is very difficult to scale out of English to other languages,” he argues. “It is also uneconomical to rebuild your architecture to serve multi-language scenarios. Out of necessity, we have been language-agnostic since day one.”

“Our technology and team is tailored to the customer service problem; generic conversational AI tools cannot compete,” he adds. “Within this, we are a full package for enterprises. We provide a complete AI platform, from automation to augmentation, as well as omnichannel capabilities across Chat, Email and Social. Languages are also a key technical strength, enabling our clients to serve their customers wherever they may be.”

The multi-language architecture is not the only claimed differentiator, either.

Kainulainen points to the team’s mission as another key factor on that front, saying: “We want to transform how people work in customer service. It’s not about building a simple FAQ bot, it’s about deeply understanding how the division and the people work and building tools to empower them. For us, it’s not Superagent vs. Botman, it’s Superagent + Botman.”

So it’s not trying to suggest that AI should replace your entire customer service team, but rather enhance your in-house humans.

Asked what the AI can’t do well, he says this boils down to interactions that are transactional vs relational — with the former meshing well with automation, while the latter (interactions that require emotional engagement and/or complex thought) is definitely not something to attempt to automate away.

“Transactional cases are mechanical and AI is good at mechanical. The customer knows what they want (a specific query or action) and so can frame their request clearly. It’s a simple, in-and-out case. Full automation can be powerful here,” he says. “Relational cases are more frequent, more human and more complex. They can require empathy, persuasion and complex thought. Sometimes a customer doesn’t know what the problem is — “it’s just not working”.

“Other times are sales opportunities, which businesses definitely don’t want to automate away (AI isn’t great at persuasion). And some specific industries, e.g. emergency services, see the human response as so vital that they refuse automation entirely. In all of these situations, AI which augments people, rather than replaces, is most effective.

“We see work in customer service being transformed over the next decade. As automation of simple requests becomes the status-quo, businesses will increasingly differentiate through the quality of their human-touch. Customer service will become less labour intensive, higher skilled work. We try and imagine what tools will power this workforce of tomorrow and build them, today.”

On the ethics front, he says customers are always told when they are transferred to a human agent — though that agent will still be receiving AI support (i.e. in the form of suggested replies to help “bolster their speed and quality”) behind the scenes.

Ultimate.ai’s customers define cases they’d prefer an agent to handle — for instance where there may be a sales opportunity.

“In these cases, the AI may gather some pre-qualifying customer information to speed up the agent handle time. Human agents are also brought in for complex cases where the AI has had difficulty understanding the customer query, based on a set confidence threshold,” he adds.
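The exact mechanism isn't disclosed, but confidence-threshold routing of this kind is straightforward to sketch. In this hypothetical example the threshold value, field names and intent labels are all illustrative rather than Ultimate.ai's actual API:

```python
# Minimal sketch of confidence-threshold routing between automated
# replies and human agents. All names and values are illustrative.
CONFIDENCE_THRESHOLD = 0.75

def route(predictions):
    """predictions: list of (intent, confidence) pairs from the NLU engine."""
    intent, confidence = max(predictions, key=lambda p: p[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: the bot handles the case end-to-end.
        return {"handler": "bot", "intent": intent}
    # Low confidence: hand over to a human agent, keeping the AI's
    # best guess as a suggested reply to speed up handle time.
    return {"handler": "human", "suggested_intent": intent}

decision = route([("track_order", 0.41), ("tech_support", 0.52)])
```

Keeping the low-confidence guess attached to the handover is what lets the AI keep augmenting the agent even after automation has bowed out.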

Kainulainen says the seed funding will be used to enhance the scalability of the product, with investments going into its AI clustering system.

The team will also be targeting underserved language markets to chase scale — “focusing heavily on the Nordics and DACH [Germany, Austria, Switzerland]”.

“We are building out our teams across Berlin and Helsinki. We will be working closely with our partners — SAP, Microsoft, Salesforce and Genesys — to further this vision,” he adds.

Commenting on the funding in a statement, Jasper Masemann, investment manager at Holtzbrinck Ventures, added: “The customer service industry is a huge market and one of the world’s largest employers. Ultimate.ai addresses the main industry challenges of inefficiency, quality control and high people turnover with latest advancements in deep learning and human machine hybrid models. The results and customer feedback are the best I have seen, which makes me very confident the team can become a forerunner in this space.”