A new Congress means a new opportunity for consumer privacy protections

Debra Berlyn is the president of Consumer Policy Solutions and the executive director of Project GOAL, a project to raise awareness of both the benefits and challenges of innovative new technologies for the aging community.

The 2018 mid-term elections, for the first time in U.S. history, resulted in a Congress that has the look and feel of America…our very diverse America. There are now 102 women serving in Congress and a record number of Members representing all Americans. Our Members now represent the African American, Hispanic, LGBTQ, and interfaith communities.

Thirteen new members are under the age of 35. This evolution of the legislative branch provides an opportunity to represent the best interests of all consumers. In our digital world, what is it that consumers, from each and every community represented by this new diverse Congress, have asked for? Online privacy protections.

As consumers enjoy the benefits of the great range of services that ride on the internet, they have increasingly lost confidence in once-trusted companies that, we now know, have offered false promises of protection for their private online information. In 2018, consumers experienced one of the greatest losses of their personal information when Facebook revealed that Cambridge Analytica gathered the personal data of millions of Facebook users without their consent.

In another significant incident, Marriott had its database hacked and information about over 500 million individuals was accessed from its guest reservation system. Uniquely personal information, including phone numbers, passport numbers and dates of birth, could all be accessed from the Marriott database. These are just two examples among many other incidents over the past several years in which companies large and small have lost consumers’ personal data.

These data breaches all come at a significant cost to consumers and companies. According to an IBM study last year, the average cost of a data breach in 2018 was $148 per compromised record, and a breach affecting 50 million compromised records costs a total of over $350 million – and these dollar amounts increase every year.

While the monetary costs of a data breach are significant to business, the real, and perhaps even greater costs, are borne by consumers. The loss of privacy, the potential for identity theft, and the years it takes to repair the damages that result from identity theft are seemingly immeasurable.

Consumers are now very aware that the country lacks a reliable solution to online privacy threats and concerns. It’s time for Congress to pass legislation that will implement a set of national privacy rules, offering consumers strong privacy and data security protections, and data breach notifications. These privacy rules should be uniformly applied to all companies in the online ecosystem.

Consumers cannot distinguish between the companies they engage with in the online world, so neither should the rules distinguish between them. The best arbiter to manage and enforce these national rules is the Federal Trade Commission. The FTC has the expertise in consumer protection in privacy and security matters and should continue to build on this role with new and enhanced privacy protections.

While state legislative initiatives are noble efforts to offer privacy protections, this approach is not ideal for consumers, or for the digital economy.

They don’t offer uniform rules and, at best, they protect only a fraction of consumers. As Representative Susan DelBene recently stated in reference to states moving forward on privacy legislation, “If we are not careful, we risk creating digital borders… within the (United) States causing massive disruptions in digital supply chains and digital trade…” A patchwork of state laws, as opposed to a national law, could have further implications for our digital economy as well. Congress must recognize the urgency of this privacy crisis and act; limited state protections cannot fill this void.

The best, and most long-lasting, resolution for consumers is for Congress to approve bipartisan privacy protections, providing national rules of the road for all companies to adhere to in this digital ecosystem.

With the unprecedented diversity represented by this Congress, we can feel confident that all points of view are being heard. It’s time to renew consumer confidence in our online services and devices. Let’s get it done, Congress. Our online privacy is an important protection that just can’t wait.

Why you need to use a password manager

If you thought passwords would soon be dead, think again. They’re here to stay — for now. Passwords are cumbersome and hard to remember — and just when you’ve finally memorized one, you’re told to change it again. And sometimes passwords can be guessed and are easily hackable.

Nobody likes passwords but they’re a fact of life. And while some have tried to kill them off by replacing them with fingerprints and face-scanning technology, neither is perfect, and many of us still fall back on the trusty (but frustrating) password.

How do you make them better? You need a password manager.

What is a password manager?

Think of a password manager like a book of your passwords, locked by a master key that only you know.

Some of you might think that sounds bad. What if someone gets my master password? That’s a reasonable and rational fear. But a strong and unique, yet memorable, master password that you’ve not used anywhere else is a near-perfect way to protect the rest of your passwords from improper access.

Password managers don’t just store your passwords — they help you generate and save strong, unique passwords when you sign up to new websites. That means whenever you go to a website or app, you can pull up your password manager, copy your password, paste it into the login box, and you’re in. Often, password managers come with browser extensions that automatically fill in your password for you.

And because many of the password managers out there have encrypted sync across devices, you can take your passwords anywhere with you — even on your phone.

Why do you need to use one?

Password managers take the hassle out of creating and remembering strong passwords. It’s that simple. But there are three good reasons why you should care.

Passwords are stolen all the time. Sites and services are at risk of breaches as much as you are at risk of phishing attacks that try to trick you into turning over your password. Although companies are meant to scramble your password whenever you enter it — a process known as hashing — not all use strong or modern algorithms, making it easy for hackers to crack those hashes and recover your password in plain text. Some companies don’t bother to hash at all! That puts your accounts at risk of fraud, or your data at risk of being used against you for identity theft.
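To make the hashing point concrete, here is a minimal Python sketch (illustrative only, not any particular company’s code) contrasting a fast, unsalted hash with the kind of salted, deliberately slow key derivation a modern service should use:

```python
import hashlib
import os

def weak_hash(password: str) -> str:
    # Fast, unsalted hash: trivially cheap for attackers to brute-force,
    # and identical passwords always produce identical hashes.
    return hashlib.sha1(password.encode()).hexdigest()

def strong_hash(password: str, salt: bytes) -> bytes:
    # Salted, deliberately slow key derivation (PBKDF2-HMAC-SHA256): the
    # per-user salt defeats precomputed tables, and the high iteration
    # count makes large-scale guessing expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Usage: store the salt alongside the derived key, never the password itself.
salt = os.urandom(16)
stored = (salt, strong_hash("correct horse battery staple", salt))
```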

But the longer and more complex your password is — a mix of uppercase and lowercase characters, numbers, symbols and punctuation — the longer it takes for hackers to unscramble your password.

The other problem is the sheer number of passwords we have to remember. Banks, social media accounts, our email and utilities — it’s easy to just use one password across the board. But that makes “credential stuffing” easier. That’s when hackers take your password from one breached site and try to log in to your account on other sites. Using a password manager makes it so much easier to generate and store stronger passwords that are unique to each site, preventing credential stuffing attacks.
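Generating such unique, high-entropy passwords is exactly what a manager automates. A hypothetical helper built on Python’s standard `secrets` module might look like this:

```python
import secrets
import string

# Pool of characters to draw from: upper, lower, digits and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # Cryptographically secure random selection; generating a different
    # password for every site is what defeats credential stuffing.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Because every site gets its own random string, a breach at one service reveals nothing about your logins anywhere else.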

And, for the times you’re in a crowded or busy place — like a coffee shop or an airplane — think of who is around you. Typing in passwords can be seen, copied and later used by nearby eavesdroppers. Using a password manager in many cases removes the need to type any passwords in at all.

Which password manager should you use?

The simple answer is that it’s up to you. All password managers perform largely the same duties — but different apps will have features that are more or less relevant to you.

Anyone running iOS 11 or later — which is most iPhone and iPad users — will have a password manager by default — so there’s no excuse. You can sync your passwords across devices using iCloud Keychain.

For anyone else — most password managers are free, with the option to upgrade to get better features.

If you want your passwords to sync across devices, for example, LastPass is a good option. 1Password is widely used and integrates with Troy Hunt’s Pwned Passwords database, so you can tell if a password has previously been leaked or exposed in a data breach (and avoid it!).

Many password managers are cross-platform, like Dashlane, which also work on mobile devices, allowing you to take your passwords wherever you go.

And, some are open source, like KeePass, allowing anyone to read the source code. KeePass doesn’t use the cloud so it never leaves your computer unless you move it. That’s much better for the super paranoid, but also for those who might face a wider range of threats — such as those who work in government.

What you might find useful is this evaluation of five password managers, which offers a breakdown by features.

Like all software, vulnerabilities and weaknesses in any password manager can put your data at risk. But as long as you keep your password manager up to date — most browser extensions are updated automatically — your risk is significantly reduced.

Simply put: using a password manager is far better for your overall security than not using one.

Check out our full Cybersecurity 101 guides here.

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long term impacts of profiling minors when these children become adults is simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13, their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After that, this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how the data might be used in the short and long term — both for the benefit of children and considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, they can go no lower than 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

This would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale — given that few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. And essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model for managing access to a company’s apps and data without a VPN that it uses internally. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that enable developers to adopt this way of thinking about identity and access for their own apps and that make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
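For a rough sense of what a server-side SDK does with an ID token, here is a generic Python sketch of reading a JWT-style token’s claims. This is illustrative only, not Google’s SDK, and it omits the signature verification a real library performs against the provider’s published keys before trusting any claim:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    # A JWT is three base64url segments: header.payload.signature.
    # This decodes only the claims; a real SDK verifies the signature first.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```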

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.
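In spirit, a context-aware rule combines who the user is with where and what they are connecting from. A toy Python sketch of such a policy check (the field names and rules here are hypothetical, not the actual Cloud Identity-Aware Proxy configuration schema):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    groups: set
    country: str            # derived from the device's location
    device_encrypted: bool  # security status reported by device management

def allow_access(req: AccessRequest) -> bool:
    # Example policy: engineers, connecting from the UK, on a device that
    # reports disk encryption. Identity plus context, with no VPN involved.
    return (
        "engineering" in req.groups
        and req.country == "GB"
        and req.device_encrypted
    )
```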

 

AdGuard resets all user passwords after account hacks

Popular ad-blocker AdGuard has forcibly reset all of its users’ passwords after it detected hackers trying to break into accounts.

The company said it “detected continuous attempts to login to AdGuard accounts from suspicious IP addresses which belong to various servers across the globe,” in what appeared to be a credential stuffing attack. That’s when hackers take lists of stolen usernames and passwords and try them on other sites.

AdGuard said that the hacking attempts were slowed thanks to rate limiting — preventing the attackers from trying too many passwords in one go. But rate limiting was “not enough” when the attackers already knew the passwords, a blog post said.
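Rate limiting of this kind is conceptually simple. A minimal sliding-window limiter in Python (a sketch under assumed parameters, not AdGuard’s implementation) could look like:

```python
import time
from collections import defaultdict

class LoginRateLimiter:
    """Allow at most `limit` login attempts per IP in any `window` seconds."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)  # ip -> recent attempt timestamps

    def allow(self, ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only attempts inside the window, then check the count.
        recent = [t for t in self.attempts[ip] if now - t < self.window]
        self.attempts[ip] = recent
        if len(recent) >= self.limit:
            return False  # too many recent attempts from this address
        recent.append(now)
        return True
```

As the article notes, this only slows an attacker who is guessing; it cannot stop one who already holds valid credentials.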

“As a precautionary measure, we have reset passwords to all AdGuard accounts,” said Andrey Meshkov, AdGuard’s co-founder and chief technology officer.

AdGuard has more than five million users worldwide, and is one of the most prominent ad-blockers available.

Although the company said that some accounts were improperly accessed, there wasn’t a direct breach of its systems. It’s not known how many accounts were affected. An email to Meshkov went unreturned at the time of writing.

It’s not clear why attackers targeted AdGuard users, but the company’s response was swift and effective.

The company said it now has set stricter password requirements, and connects to Have I Been Pwned, a breach notification database set up by security expert Troy Hunt, to warn users away from previously breached passwords. Hunt’s database is trusted by both the UK and Australian governments, and integrates with several other password managers and identity solutions.
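Hunt’s Pwned Passwords service can be queried without revealing the password, via a k-anonymity scheme. A small Python sketch of the client-side half (only the hashing; the HTTP call is indicated in a comment):

```python
import hashlib

def pwned_range_query(password: str):
    # Have I Been Pwned's k-anonymity model: SHA-1 the password, send only
    # the first five hex characters to the API, and compare the returned
    # suffixes locally. The full password and full hash never leave the
    # client.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A client would then GET https://api.pwnedpasswords.com/range/<prefix>
    # and search the response body for `suffix`.
    return prefix, suffix
```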

AdGuard also said that it will implement two-factor authentication — a far stronger protection against credential stuffing attacks — but that it’s a “next step” as it “physically can’t implement it in one day.”

Myki raises $4M Series A to decentralize identity management for enterprises

Myki, a startup based between Beirut and New York which offers both a consumer and enterprise identity management solution to store sensitive information offline, today announced at TechCrunch Disrupt in San Francisco that it’s raised a $4 million Series A to scale its operations.

The round was led by Dubai-based VC BECO Capital with participation from Beirut-based LEAP Ventures and B&Y Venture Partners, all of which are returning investors. Myki plans to expand its U.S. operations with its “decentralised Identity Management” solution for enterprise.

Priscilla Elora Sharuk, who co-founded the startup with Antoine Vincent Jabberer in 2015, said: “Online security and data privacy is not a privilege, it is a right, and that is why at Myki we empower our users with the tools to securely manage their digital identity.”

Myki launched on the TechCrunch Disrupt Battlefield stage in September 2016, and has since gone on to win several plaudits from tech industry outlets for its free and powerful password management, amassing more than 250,000 users worldwide.

Back in May, on the TechCrunch Disrupt Berlin stage, Myki announced a partnership with self-sovereign identity application Blockpass to combine self-sovereign identity and offline password security.

Myki is going after the consumer password space, with biometric authentication such as Touch ID and Face ID; the enterprise, with “Myki for Teams”; and a solution for managed service providers.