Australian Government Wants to Give Satire The Boot

The National Symbols Officer of Australia recently wrote to Juice Media, producers of Rap News and Honest Government Adverts, suggesting that its “use” of Australia’s coat of arms violated various Australian laws. This threat came despite the fact that Juice Media’s videos are clearly satire and no reasonable viewer could mistake them for official publications. Indeed, the coat of arms that appeared in the Honest Government Adverts series does not even spell “Australian” correctly.

It is unfortunate that the Australian government cannot distinguish between impersonation and satire. But it is especially worrying because the government has proposed legislation that would impose jail terms for impersonation of a government agency. Some laws against impersonating government officials can be appropriate (Australia, like the U.S., is seeing telephone scams from fraudsters claiming to be tax officials). But the proposed legislation in Australia lacks sufficient safeguards. Moreover, the recent letter to Juice Media shows that the government may lack the judgment needed to apply the law fairly.

In a submission to Parliament, Australian Lawyers for Human Rights explains that the proposed legislation is too broad. For example, the provision that imposes a two-year sentence for impersonating a government agency does not require any intent to deceive. Similarly, it does not require that the impersonation caused any actual harm. Thus, the law could sweep in conduct outside the kind of fraud that motivates the bill.

The proposed legislation does include an exemption for “conduct engaged in solely for genuine satirical, academic or artistic purposes.” But, as critics have noted, this gives the government leeway to attack satire that it does not consider “genuine.” Similarly, the limitation that conduct be “solely” for the purpose of satire could chill speech. Is a video produced for satirical purposes unprotected because it was also created for the purpose of supporting advertising revenue?

Government lawyers failing to understand satire is hardly unique to Australia. In 2005, a lawyer representing President Bush wrote to The Onion claiming that the satirical site was violating the law with its use of the presidential seal. The Onion responded that it was “inconceivable” that anyone would understand its use of the seal to be anything but parody. The White House wisely elected not to pursue the matter further. If it had, it likely would have lost on First Amendment grounds. Australia, however, does not have a First Amendment (or even a written bill of rights), so civil libertarians there are rightly concerned that the proposed law against impersonation could be used to attack political commentary. We hope the Australian government either kills the bill or amends it to include both a requirement of intent to deceive and a more robust exemption for satire.

In its own style, Juice Media has responded to the proposed legislation with an “honest” government advert.

[Embedded video: Juice Media’s “honest” government advert, served from youtube-nocookie.com]

‘Australien Government’ coat of arms: Juice Media, CC BY-NC-SA 3.0 AU

Calming the Complexity: Bringing Order to Your Network 

In thinking about today’s network environments – with multiple vendors and platforms in play, a growing number of devices connecting to the network, and the need to manage it all – it’s easy to see why organizations can feel overwhelmed, unsure of the first step to take towards network management and security. But what if there was a way to distill that network complexity into an easily managed, secure, and continuously compliant environment?

Exponential Growth

Enterprise networks are constantly growing. Between physical networks, cloud networks, hybrid networks, and the fluctuations that mobile devices introduce, the number of connection points that need to be recognized and protected is daunting. On top of that, to keep your organization running at optimal efficiency – and to keep it secure from potential intrusions – you must operate at the pace that the business dictates. New applications need to be deployed and connectivity must be ensured, while old, now overly permissive rules need to be removed and servers decommissioned. It’s a lot, but teams can trudge through it.

But getting through it isn’t all you have to worry about – the potential for human error in a simple network misconfiguration needs to be factored in as well. As any IT manager knows, even slight changes to the network environment – intended or not – can have a knock-on effect across the entire network.

What’s in Your Network?

Add up all the moving parts that make up the network, the risk of introducing error through manual processes, and the consequences of such errors, and your network sits in a persistent state of jeopardy. This can take the form of lost visibility, slower network changes, disrupted business continuity, or an increased attack surface that cybercriminals could find and exploit.

Considering how large enterprise networks are and the number of changes required to keep the business growing, an organization’s security team can face hundreds of change requests each and every week. These changes are too numerous, redundant, and difficult to manage manually; in fact, a single manual rule-change error could inadvertently open new access points into network zones, exposing them to nefarious individuals. In a large organization, small problems can quickly escalate.

The network has also fundamentally changed. Long gone are the days of sole reliance on the physical data center, as organizations incorporate the public cloud and hybrid networks into their IT infrastructure. Understanding your network topology is substantially more difficult when it’s no longer on premises. Hybrid networks are not always visible to the IT and security teams, which complicates their ability to maintain application connectivity and ensure security.

Network Segmentation & Complexity: A Balancing Act

Network segmentation limits the exposure that an attacker would have in the event that the network is breached. By segmenting the network into zones, any attacker that enters a specific zone would be able to access only that zone – nothing else. By dividing their enterprise networks into different zones, IT managers minimize access privileges, ensuring that only those who are permitted have access to the data, information, and applications they need.

However, by segmenting the network you’re inherently adding more complexity to be managed. The more segments you have, the more opportunity there is for changes to be made in the rules that govern access among these zones.
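To put a rough number on that complexity: with n zones there are up to n × (n − 1) directed zone-to-zone relationships whose access rules may need to be defined and maintained, so the rule surface grows roughly quadratically with segmentation. A quick, purely illustrative back-of-the-envelope calculation in Python:

    # Rough illustration: directed zone pairs grow quadratically with segmentation.
    for n in (3, 5, 10, 20):
        pairs = n * (n - 1)  # ordered (source zone, destination zone) pairs
        print(f"{n} zones -> {pairs} zone-to-zone relationships to govern")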

How can an IT manager turn an intricate, hybrid network into something manageable, secure, and compliant?

The Answer: Automation and Orchestration

As we have seen, the enterprise network changes all the time – so it’s imperative to ensure that you’re making the correct decisions and that changes do not put the company at risk. The easiest way to do this is to set a network security policy and use that policy as the guide for all changes made in the network. With a policy-based approach, every change within the network infrastructure is checked for security and compliance. With a centralized policy in place, you have control.

The next step to managing complexity is removing the risk of manual errors. This is where automation and orchestration built on a policy-based approach come in.

Now you’re able to analyze the network, design network security rules, and develop and automate the rule approval process. This approach streamlines the change process and catches unintended errors before they reach production.
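To make that concrete, here is a minimal sketch of a policy-based rule check – the zone names, services, and policy schema are hypothetical assumptions, not any vendor’s actual implementation – in which every proposed firewall change is validated against a central zone-to-zone policy before it is approved:

    # Minimal sketch of automated, policy-based change validation.
    # Zones, services, and the policy matrix below are illustrative assumptions.

    # Central security policy: which source zone may reach which destination
    # zone, and over which services.
    POLICY = {
        ("web", "app"): {"https"},
        ("app", "db"): {"postgres"},
        ("mgmt", "web"): {"ssh"},
        ("mgmt", "app"): {"ssh"},
    }

    def is_compliant(src_zone: str, dst_zone: str, service: str) -> bool:
        """Return True if a proposed rule is permitted by the central policy."""
        return service in POLICY.get((src_zone, dst_zone), set())

    # A batch of proposed firewall rule changes, e.g. from weekly change requests.
    proposed_changes = [
        ("web", "app", "https"),    # compliant
        ("web", "db", "postgres"),  # violation: web must not reach db directly
        ("mgmt", "db", "ssh"),      # violation: no mgmt-to-db access in policy
    ]

    for src, dst, svc in proposed_changes:
        verdict = "approve" if is_compliant(src, dst, svc) else "flag for review"
        print(f"{src} -> {dst} ({svc}): {verdict}")

Run before any rule is pushed to a device, a check like this makes the security policy the single source of truth: compliant changes sail through automatically, while violations are flagged for human review instead of silently widening the attack surface.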

Using the right automation and orchestration tools can add order and visibility to the network, manage policy violations and exceptions, and streamline operations with continuous compliance and risk management.

Together, automation and orchestration of network security policies ensure that you have a process in place that will enable you to make secure, compliant changes across the entire network – without compromising agility, risking network downtime, or investing valuable time in tedious, manual tasks.

Complexity is the reality of today’s enterprise networks. Rather than risk letting one small event cause a big ripple across your entire organization, take an automated and orchestrated approach to network security management: your network becomes better controlled, helping you improve visibility, compliance, and security.

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.


KRACK Vulnerability: What You Need To Know

This week security researchers announced a newly discovered vulnerability dubbed KRACK, which affects several common security protocols for Wi-Fi, including WPA (Wireless Protected Access) and WPA2. This is a bad vulnerability in that it likely affects billions of devices, many of which are hard to patch and will remain vulnerable for a long time. Yet in light of the sometimes overblown media coverage, it’s important to keep the impact of KRACK in perspective: KRACK does not affect HTTPS traffic, and KRACK’s discovery does not mean all Wi-Fi networks are under attack. For most people, the sanest thing to do is simply continue using wireless Internet access.

The limited privacy goals of WPA

It’s worth taking a step back and remembering why a cryptographic protocol like WPA was developed to begin with. Before the advent of Wi-Fi, computers typically connected to their local Internet access point (e.g. a modem) using a physical wire. Traditional protocols like Ethernet for carrying data on this wire (called the physical layer) were not encrypted, meaning an attacker could physically attach an eavesdropping device to the wire (or just another computer using the same wire) to intercept communications. Most people weren’t too worried about this problem; physically attaching a device is somewhat difficult, and important traffic should be encrypted anyway at a higher layer (most commonly a protocol like TLS at the transport layer). So Ethernet was unencrypted, and remains so today.

With wireless protocols it became much easier to eavesdrop on the physical layer. Instead of attaching a device to a specific wire, you just need an antenna somewhere within range. So while an unencrypted wireless network is theoretically no less secure than an unencrypted wired network, in practice it’s much easier to set up an eavesdropping device. For some it became a hobby to drive or bike around with an antenna looking for wireless networks to eavesdrop on (called wardriving). In response, the IEEE (a computer and electronics engineers’ professional organization) standardized an encryption protocol called WEP (Wired Equivalent Privacy). The name is telling here: the goal was just to get back to the relative privacy of a wired connection, by using encryption so that an eavesdropping device couldn’t read any of the traffic. WEP was badly broken cryptographically and has been supplanted by WPA and WPA2, but they have the same basic privacy goal.

Note that WPA’s privacy goals were always very limited. It was never intended to provide complete confidentiality of your data all the way to its final destination. Instead, protocols like TLS (and HTTPS) exist which protect your data end-to-end. In fact, WPA provides no protection against a number of adversaries:

  • At any point between the access point and the server you’re communicating with, an eavesdropper can read your data whether the first hop was WPA, Ethernet, or anything else. This means your Internet provider or any Internet router on the network path between you and the destination server can intercept your traffic.
  • Your access point operator (e.g. the owner of your local coffee shop) can read your traffic.
  • Anybody who compromises your access point can read your traffic, and there is a long history of exploits against wireless routers.
  • Anybody who knows the access point’s password can perform a machine-in-the-middle attack and read your traffic. This includes anybody who cracks that password.

A secondary goal: access control

In addition to providing privacy against local eavesdroppers, WPA is commonly used to provide access control to the network by requiring the use of a “pre-shared key” to create sessions. This is the Wi-Fi access password or token which is familiar to most users when trying to connect to a new network. The goal here is simple: the owner of the wireless access point may want to prevent access by unauthorized devices, require new devices to jump through some hoops like watching an advertisement or agreeing to a terms of use agreement, or otherwise alter traffic for unauthorized guests. For years EFF has supported increased availability of open wireless access points, but certainly access point owners should have the ability to limit access if they want to.

How KRACK changes the picture

KRACK makes it possible for an adversary to completely undermine the privacy properties of WPA and WPA2 in many cases. The attack is somewhat complex in that it requires active broadcasting of packets and tricking a device into resetting its key, but it’s the kind of thing that will likely soon be automated in software. This means that, for now, data on many wireless access points may be vulnerable to interception or modification. Keep in mind two big caveats:

  • The attacker must be local and proactive. Carrying out this attack requires having an active antenna in range of the targeted wireless network and requires broadcasting many packets and intercepting or delaying others. This is all doable, but does not easily scale.
  • Important traffic should already be protected with HTTPS. As discussed above, there are already many potential attackers that WPA provides no security against. At worst, KRACK adds an additional one to the list, but with no more power than your ISP or any router on the Internet backbone already has (and those are much more scalable places to conduct surveillance or other mischief). We already have protocols to defend against these attackers, and thanks to the success of projects like EFF’s Encrypt The Web initiative, more than half of all Internet traffic is already protected by HTTPS.
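For the cryptographically curious, the core of the problem is that tricking a device into reinstalling an already-in-use key resets its packet nonce, so the same keystream ends up encrypting two different packets. The toy Python sketch below (a simple XOR model standing in for WPA2’s real AES-CTR-based encryption, not the actual protocol) shows why keystream reuse is fatal:

    # Toy illustration (not an exploit): why nonce/keystream reuse breaks a
    # stream cipher. The fixed byte string below stands in for the keystream
    # a real cipher would generate under a reused nonce.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = bytes(range(16))      # reused keystream (the flaw KRACK induces)
    msg1 = b"attack at dawn.."        # two different plaintext packets...
    msg2 = b"retreat at dusk!"
    ct1 = xor_bytes(msg1, keystream)  # ...encrypted under the same keystream
    ct2 = xor_bytes(msg2, keystream)

    # An eavesdropper XORs the two ciphertexts: the keystream cancels out,
    # leaking the XOR of the plaintexts without ever recovering the key.
    assert xor_bytes(ct1, ct2) == xor_bytes(msg1, msg2)

The attacker never learns the key itself, but the XOR of two plaintexts is often enough to recover both, especially when parts of one message are predictable.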

On the access control front, it’s unclear how much KRACK matters. It does not provide a new way to crack the pre-shared key or password of a wireless network. Some variants of KRACK enable recovering enough key material to hijack an existing connection and use it to gain unauthorized access, but this is probably not the easiest way to gain unauthorized access.

How did we get here?

Matt Green provides a great overview of the flawed process that allowed KRACK to remain undiscovered for over a decade. The biggest single problem is that the protocol definitions were not easily available to security researchers, so none bothered to seriously look. This is another clear example of why important protocols like WPA and WPA2 should be open and free to the public: so that security researchers can investigate and catch these sorts of vulnerabilities early in the life of a protocol, before it’s embedded in billions of devices.

What you can do to protect your local network

Fortunately, while the KRACK vulnerability is baked into the WPA specification and deployed on billions of devices, it is relatively easy to patch in a backwards-compatible way. It requires patching both devices that connect to the Internet and access points. If you operate a wireless network, patching your router is a great step. Your Internet devices (your computer, phone or tablet) will also need to be patched. Many patches are already available and many devices will automatically be patched.

With that said, it’s a foregone conclusion that there will still be billions of unpatched devices for years (maybe even decades) to come. That’s because, as we’ve said before:

patching large, legacy systems is hard. For many kinds of systems, the existence of patches for a vulnerability is no guarantee that they will make their way to the affected devices in a timely manner. For example, many Internet of Things devices are unpatchable, a fact that was exploited by the Mirai Botnet. Additionally, the majority of Android devices are no longer supported by Google or the device manufacturers, leaving them open to exploitation by a “toxic hellstew” of known vulnerabilities.

So while we don’t think people should necessarily freak out about KRACK, it does demonstrate once again how important it is for industry to solve the patching problem.

Expanding E-Verify is a Privacy Disaster in the Making

E-Verify is a massive federal data system used to verify the eligibility of job applicants to work in the United States. The U.S. Department of Homeland Security (DHS), U.S. Citizenship and Immigration Services (USCIS), and the U.S. Social Security Administration (SSA) administer E-Verify. Until now, the federal government has not required private employers to use E-Verify, and only a few states have required it. However, a proposed bill in Congress, the Legal Workforce Act (HR 3711), aims to make E-Verify use mandatory nationwide despite all the very real privacy and accuracy issues associated with the data system.

EFF recently joined human rights and workers’ rights organizations from across the United States in sending a letter to Congress pointing out the flaws of E-Verify.

Instead of learning from the recent Equifax data breach that access to sensitive information creates an attractive target for data thieves, our elected representatives want to compel a massive increase in the use of yet another data system that can be breached. To use E-Verify, employers need to collect and transmit sensitive information, such as our Social Security and passport numbers.

And a data breach isn’t the only concern with such a data system: there’s also the likelihood of data errors that can prevent many Americans from obtaining jobs. Even worse, E-Verify is likely to have an unfair disparate impact on women, as they are more likely to change their names due to marriage or divorce. Additionally, a Government Accountability Office (GAO) report [.pdf page 19] found that E-Verify produces more denials for work-eligible people not born in America, and can “create the appearance of discrimination.” The GAO report also stated that these errors would increase dramatically if E-Verify is made mandatory.

Instead of recognizing the problematic nature of E-Verify, the White House is pushing to make it mandatory in its negotiations with Congress concerning legislative protection for Deferred Action for Childhood Arrivals (DACA) recipients. If successful, this would jeopardize Americans’ collective security and privacy. Not to mention that this expanded database may find uses beyond employment verification, and end up as another tool in an already impressive law enforcement surveillance arsenal.

As we have in the past, EFF will continue to do everything in our power to fight against the mandatory usage of E-Verify. It was a bad idea then and it’s a bad idea now.

South Dakota Civil Liberties Groups Urge Senator Thune to Put the Brakes on SESTA

A coalition of civil liberties groups in South Dakota is sending a clear message to Senator John Thune: don’t turn your back on our right to assemble online.

The ACLU of South Dakota, Indivisible 605, Indivisible Rapid City, and Queer South Dakota signed a letter [.pdf] urging Senator Thune to put the brakes on the Stop Enabling Sex Traffickers Act (S. 1693, SESTA). Thune is the Chairman of the Senate Committee on Commerce, Science, and Transportation, which is currently considering the bill.

Despite its name, SESTA wouldn’t punish sex traffickers. It would threaten legitimate online speech. SESTA would expose any platform that hosts online discussions to the risk of massive civil and criminal liability if anyone uses those platforms for sex trafficking purposes. As we’ve explained previously, SESTA would likely force online platforms to become far more restrictive in their moderation practices, censoring a lot of innocent people in the process. And experts in sex trafficking say that the bill would put trafficking victims themselves in even more danger, compromising the tools that law enforcement agencies use to find traffickers.

As the South Dakota groups point out, the online communities that SESTA would compromise are uniquely important for people who live in rural areas:

Section 230 [the law that shields online platforms from some types of liability for their users’ speech] is one of the most important laws protecting free expression online. Its protections are uniquely important to South Dakotans: we rely on online communities to share our thoughts and ideas with friends across the country and around the world. For rural Americans, online communities often serve as our most important connection to likeminded friends. For people of color, members of the LGBTQ community, and other marginalized South Dakotans, online communities are our lifelines.

If Congress were to pass a bill undermining Section 230, it would almost certainly result in online platforms over-relying on automated filters to censor their users’ speech. When platforms lean too heavily on computerized filters, marginalized people are usually the first ones silenced.

South Dakotans, we encourage you to join these groups in speaking out to Senator Thune. Sign our online petition and we will deliver the message to Thune’s staff. And if you’re in the Sioux Falls area, come discuss the issues at an Indivisible 605 event this Thursday.

Everyone else, please take a moment to write to your members of Congress and urge them to stop SESTA.

Take Action

Tell Congress: Stop SESTA.

Alice Saves Medical Startup From Death By Telehealth Patent

Justus Decher, a veteran of the U.S. Army, has a motto he uses when faced with adversity: “Don’t get even, get Justice.”

A health scare in 2010 saw Justus going to the emergency room about 20 times for observation. Each time he went, he would have a battery of tests to determine if something was wrong. The hospital trips meant a lot of worry and time, even if all his tests came back normal.

Being an entrepreneur and living by his motto, Justus thought, “There must be a better way.” Justus saw how technology could allow patients, families, and health care providers to better monitor patient health from home. As the saying goes, necessity is the mother of invention.


So in 2013, Justus set to work on building a product not only for himself, but for everyone else too. He formed his company, MyVitalz, and built a system to send medical information to health professionals who could analyze it remotely and let the patient and the health care team know as soon as possible if something were amiss.

By 2016, Justus’ company was still small, but thriving. It had just been named one of the top finalists in a U.S. Department of Veterans Affairs competition to find new ideas and services in home telehealth, the practice of providing medical care remotely. Justus anticipated that new regulations that came into effect as part of the Affordable Care Act would make telehealth an important piece in delivering affordable, preventative medicine. And this was especially important for rural hospitals, like those in Nebraska where Justus lives.

But right when things were going well, Justus and MyVitalz got a demand letter from My Health, Inc., claiming that the technology Justus built infringed on U.S. Patent No. 6,612,985, titled “Method and system for monitoring and treating a patient.” The patent, filed for in 2001, claimed telehealth broadly, even though the practice had been around since telephones were first invented. The letter told Justus that MyVitalz needed a license to the patent if it was going to continue to operate.

When Justus received the demand letter, he was shocked. He read the patent, and it seemed incredibly mundane. It didn’t offer any of the technical detail that Justus knew went into building a complex product like the one offered by MyVitalz. It gave no explanation on how to accomplish any of the goals it claimed. Instead, it seemed to claim the idea of telehealth itself. Justus thought, “I put in four years of work to build my product, and this patent seems so basic.”

“It almost felt as though my business was being blackmailed,” Justus says. “Sure, I could make the threat go away with a payment that would be less than the cost of litigation. But I refused to pay just to be able to keep running my business which I’d devoted my life to building.”

Justus scoured the Internet for information that could help him with My Health’s demand. He tried to figure out how he could defend himself, knowing that to do so would likely mean selling his personal assets to afford a lawyer.

My Health persisted in its demands. In February 2017, several months after My Health made its first demand, it told Justus and MyVitalz that it wanted a $25,000 payment.

Luckily for Justus and MyVitalz, just one day after My Health sent its $25,000 demand, a court recommended that the patent be invalidated under Alice v. CLS Bank, the landmark Supreme Court ruling that simply implementing an abstract idea on a computer doesn’t turn that idea into a patentable invention. The My Health patent is exactly the sort of patent Alice was intended to stop: patents that are nothing more than a broad idea implemented using generic, well-known technology. The court ruling finding My Health’s patent invalid became final on July 3, 2017.

Thanks to Alice, Justus never heard from My Health again. He’s now back focusing on what matters most: helping people get better health care.

Whistleblower Protections in USA Liberty Act Not Enough

The USA Liberty Act fails to safeguard whistleblowers—both federal employees and contractors—because of a total lack of protection from criminal prosecution. These shortcomings—which exist in other whistleblower protection laws as well—shine a light on the need to reform the Espionage Act, a law that has been used to stifle anti-war speech and punish political dissent.

Inside the recent House bill, which seeks reauthorization of a massive government surveillance tool, the authors have extended whistleblower protections to contract employees, a group that today has no such protection.

The Liberty Act attempts to bring parity between intelligence community employees and contract employees by amending Section 1104 of the National Security Act of 1947.

According to the act, employees for the CIA, NSA, Defense Intelligence Agency, Office of the Director of National Intelligence, National Geospatial-Intelligence Agency, and National Reconnaissance Office are protected from certain types of employer retaliation when reporting evidence of “a violation of any federal law, rule, or regulation,” or “mismanagement, a gross waste of funds, an abuse of authority, or a substantial and specific danger to public health or safety.” Employees working at agencies the President deems have a “primary function” of conducting foreign intelligence or counterintelligence are also covered by these protections.

Employees can’t be fired. Employees can’t be demoted. They can’t receive lower pay or benefits or be reassigned. In fact, no retaliatory “personnel actions” whatsoever can be taken, which covers withheld promotions and raises, too.

But employees are only protected from retaliation in the workplace. Entirely missing from Section 1104 of the National Security Act of 1947 are protections from criminal prosecution. That’s because the government treats whistleblowers differently from what it calls leakers. Under federal law, government employees who make protected disclosures to approved government officials are whistleblowers, and they have protections; employees who deliver confidential information to newspapers are leakers. Leakers do not have protections.

Extending these whistleblower protections to contractors—while positive—is just an extension of the incomplete protections our federal employees currently receive. And, as written, the Liberty Act only protects contract employees from retaliation made by the government agency they contract with, not their direct employer. Contract employees work directly for private companies—like Lockheed Martin—that have contracts with the federal government for specific projects. The available data is unclear, but a 2010 investigation by The Washington Post revealed that “1,931 private companies work on programs related to counterterrorism, homeland security and intelligence in about 10,000 locations across the United States.”

The problems continue. Currently, the Liberty Act, and Section 1104, do not specify how whistleblower protection is enforced.

Let’s say a contractor with Booz Allen Hamilton—the same contracting agency Edward Snowden briefly worked for when he confirmed widespread government surveillance to The Guardian in 2013—believes she has found evidence of an abuse of authority. According to the Liberty Act, she can present that evidence to a select number of individuals, which includes Director of National Intelligence Daniel Coats, Acting Inspector General of the Intelligence Community Wayne Stone, and any of the combined 38 members of the House of Representatives Permanent Select Committee on Intelligence and the U.S. Senate Select Committee on Intelligence. And, according to the Liberty Act, she will be protected from agency employer retaliation.

Maybe.

If the NSA still does fire the contractor, the Liberty Act does not explain how the contractor can fight back. There is no mention of appeals. There are no instructions for filing complaints. The bill—and the original National Security Act of 1947—has no bite.

The Liberty Act makes a good show of extending whistleblower protections to a separate—and steadily growing—class of employee. But the protections themselves are lacking. Contractors who offer confidential information to the press—like Reality Winner, who allegedly sent classified information to The Intercept—are still vulnerable under a World War I era law called The Espionage Act.

As we wrote, the Espionage Act has a history mired in xenophobia, with an ever-changing set of justifications for its use. University of Texas School of Law professor Stephen Vladeck lambasted the law in a 2016 opinion piece for The New York Daily News:

“Among many other shortcomings, the Espionage Act’s vague provisions fail to differentiate between classical spying, leaking, and whistleblowing; are hopelessly overbroad in some of the conduct they prohibit (such as reading a newspaper story about leaked classified information); and fail to prohibit a fair amount of conduct that reasonable people might conclude should be illegal, such as discussing classified information in unclassified settings.”

Whistleblower protections, present in the National Security Act of 1947 and extended in the Liberty Act, are weakened by the U.S. government’s broad interpretation of the Espionage Act.  Though the law was intended to stop spies and potential state sabotage, it has been used to buttress McCarthyism and to sentence a former Presidential candidate to 10 years in prison. Today, it is used to charge individuals who bring confidential information to newspapers and publishing platforms.                                                                                             

Whistleblower protections across the entire intelligence community are lacking. Rather than merely treating contractors the same as employees, both groups should be treated better.

Improve whistleblower protections. Reform the Espionage Act.

#NCSAM: Third-Party Risk Management is Everyone’s Business

One of the weekly themes for National Cyber Security Awareness Month is “Cybersecurity in the Workplace is Everyone’s Business.”

And we couldn’t agree more. Cybersecurity is a shared responsibility that extends not just to a company’s employees, but even to the vendors, partners and suppliers that make up a company’s ecosystem. The average Fortune 500 company works with as many as 20,000 different vendors, most of whom have access to critical data and systems. As these digital ecosystems become larger and increasingly interdependent, the exposure to third-party cyber risk has emerged as one of the biggest threats resulting from these close relationships.

Managing third-party risk is only going to get more difficult, but collaboration – the pooling of information, resources and knowledge – represents the industry’s best chance to effectively mitigate this growing threat. The PwC Global State of Information Security Survey 2016 found that 65 percent of organizations are formally collaborating with partners to improve security and reduce risks.

Overall, organizations need to put more emphasis on understanding the cyber risks their third parties pose. What risks does each third party bring to your company? Do they have access to your network? What would the impact be if they were breached? One of the key ways to do this is by engaging with your third parties, assessing them based on the level of risk they pose, and collaborating with them on a prioritized mitigation strategy.
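As a starting point, here is a minimal sketch – with hypothetical vendor attributes and weights, not an established scoring standard – of how a team might triage third parties by the risk they pose, so the riskiest relationships get assessed first:

    # Minimal sketch of third-party risk triage. Fields and weights are
    # illustrative assumptions; tune them to your own risk model.
    from dataclasses import dataclass

    @dataclass
    class Vendor:
        name: str
        has_network_access: bool      # can they reach your internal network?
        handles_sensitive_data: bool  # do they store or process critical data?
        breach_impact: int            # your estimate, 1 (low) to 5 (critical)

    def risk_score(v: Vendor) -> int:
        score = v.breach_impact
        if v.has_network_access:
            score += 3
        if v.handles_sensitive_data:
            score += 2
        return score

    vendors = [
        Vendor("Payroll SaaS", False, True, 4),
        Vendor("HVAC contractor", True, False, 3),
        Vendor("Marketing agency", False, False, 1),
    ]

    # Assess the riskiest relationships first.
    for v in sorted(vendors, key=risk_score, reverse=True):
        print(f"{v.name}: risk score {risk_score(v)}")

Even a crude prioritization like this makes the collaboration conversation concrete: it tells you which vendors to engage first and what evidence to ask them for.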

It’s unlikely that the pressure facing businesses to become more efficient will lessen, which means larger digital ecosystems and more cyber risks to businesses. The only way to protect your organization from suffering a data breach as a result of a third party is to put more emphasis on understanding the cyber risks your third parties pose and working together to mitigate them.

Learn more about NCSAM at: https://www.dhs.gov/national-cyber-security-awareness-month.

Help spread the word by joining in the online conversation using the #NCSAM hashtag!

About the author: As Head of Business Development, Scott is responsible for implementing CyberGRX’s go-to-market and growth strategy. Prior to CyberGRX, he led sales & marketing at SecurityScorecard, Lookingglass, iSIGHT Partners and iDefense, now a unit of VeriSign.


Oracle CPU Preview: What to Expect in the October 2017 Critical Patch Update

The recent media attention focused on patching software could get a shot of rocket fuel on Tuesday with the release of the next Oracle Critical Patch Update (CPU). In a pre-release statement, Oracle has revealed that the October CPU is likely to see nearly two dozen fixes to Java SE, the most common language used for web applications. New security fixes for the widely used Oracle Database Server are also expected along with patches related to hundreds of other Oracle products.

Most of the Java-related flaws can be exploited without needing user credentials, with the highest vulnerability score expected to be 9.6 on a 10.0 scale. The CPU could also include the first patches for the latest version of Java – Java 9 – which was released in September.

Oracle is also expected to make the advanced encryption capabilities included in Java 9 (JCE Unlimited Strength Policy Files) available for previous Java versions 6 through 8.

The October CPU comes on the heels of a September out-of-cycle Security Alert from Oracle addressing flaws exploited in the Equifax attack. The Alert followed Apache’s announcement of vulnerabilities in the Struts 2 framework that were deemed too critical to wait for distribution in the quarterly patch update.

IBM also issued an out-of-cycle patch to address flaws in IBM’s Java related products in the wake of the Equifax breach.

The Equifax attack has put a spotlight on the vital importance of rapidly applying security patches, as well as the continuing struggle of security teams to keep up with the increasing frequency and size of patches. So far in 2017, NIST’s National Vulnerability Database has catalogued 11,525 new software flaws and has tracked more than 95,000 known vulnerabilities.

Oracle will release the final version of the CPU mid-afternoon Pacific Daylight Time on Tuesday, 17 October.   

About the author: James E. Lee is the Executive Vice President and Chief Marketing Officer at Waratek Inc., a pioneer in the next generation of application security solutions.


Digital Rights Groups Demand Deletion of Unlawful Filtering Mandate From Proposed EU Copyright Law

Today EFF and 56 other civil society organizations have sent an open letter [PDF] to European lawmakers outlining our grave concerns with Article 13 of the proposed new Directive on Copyright in the Digital Single Market, which would impose a new responsibility on Internet platforms to filter content that their users upload. The letter explains:

Article 13 introduces new obligations on internet service providers that share and store user-generated content, such as video or photo-sharing platforms or even creative writing websites, including obligations to filter uploads to their services. Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they are to have any chance of staying in business. …

Article 13 would force these companies to actively monitor their users’ content, which contradicts the “no general obligation to monitor” rules in the Electronic Commerce Directive. The requirement to install a system for filtering electronic communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C 70/10) and Netlog/Sabam (C 360/10). Therefore, a legislative provision that requires internet companies to install a filtering system would almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom of expression, such as to receive or impart information, on the other.

European Commission Foreshadows More Of the Same To Come

Article 13 is bad enough as a copyright filtering mandate. But what makes the proposal even more alarming is that it won’t stop there. If we lose the battle against the use of upload filters for copyright, we’ll soon see a push for a similar mandate on platforms to filter other types of content, beginning with ill-defined “hate speech” and terrorist content, and ending who knows where. Evidence for this comes in the form of a Communication on Tackling Illegal Content Online, released last month. The Communication states:

Online platforms should do their utmost to proactively detect, identify and remove illegal content online. The Commission strongly encourages online platforms to use voluntary, proactive measures aimed at the detection and removal of illegal content and to step up cooperation and investment in, and use of, automatic detection technologies.

The Communication also talks up the possibility of “so-called ‘trusted flaggers’, as specialised entities with specific expertise in identifying illegal content,” being given special privileges to initiate content removal. However, we already have bodies that have expertise in identifying illegal content. They’re called courts. As analyses of the Communication by European Digital Rights (EDRi), the Center for Democracy and Technology (CDT), and Intellectual Property Watch point out, shifting the burden of ruling on the legality of content from courts onto private platforms and their “trusted flaggers” will inevitably result in over-removal by those platforms of content that a court would have found to be lawful speech.

The Communication clearly foreshadows future legislative measures as soon as 2018 if no significant progress is made by the platforms in rolling out automated filtering and trusted flagging procedures on a “voluntary” basis. This means that the Communication, although expressed to be non-binding, is not really “voluntary” at all, but rather a form of undemocratic Shadow Regulation by the unelected European Commission. And the passage of the upload filtering mandate in the Digital Single Market Directive would be all the encouragement needed for the Commission to press forward with its broader legislative agenda.

The Link Tax Paid To Publishers … That Publishers Don’t Want

The upload filtering mandate in Article 13 isn’t the only provision of the proposed Directive that concerns us. Another provision of concern, Article 11, would impose a new “link tax” payable to news publishers on websites that publish small snippets of news articles to contextualize links to those articles. Since we last wrote about this, an interesting new report has come out providing evidence that European publishers—who are the supposed beneficiaries of the link tax—actually oppose it. The report also states:

[T]here is little evidence that the decline in newspaper revenues has anything to do with the activities of news aggregators or search engines (that appear as the primary targets of the new right). In fact, it is widely recognised that there are two reasons for the decline in newspaper revenues: changes in advertising practice associated with the Internet (but not especially related to digital use of news material on the Internet); and the decline in subscriptions, which may be in part related to the decision of press publishers to make their products available on the Internet. These are simply changes in the newspaper market that have little, if anything, to do with the supposed “unethical” free riding of other internet operators.

The European Parliament’s Civil Liberties (LIBE) Committee is due to vote on its opinion on the Digital Single Market proposals this Thursday, 19 October. Although it’s not the final vote on these measures, it could be the most decisive one, since a recommendation for deletion of Article 11 and Article 13 at the LIBE committee would be influential in convincing the lead committee (the Legal Affairs or JURI Committee) to follow suit.

Digital rights group OpenMedia has provided a click-to-call tool that you can use, available in English, French, German, Spanish, and Polish, to express your opposition to the upload filtering mandate and the link tax. If you are European or have European friends or colleagues, please do take this opportunity to speak out and oppose these proposals, which could change the Internet as we know it in harmful and unnecessary ways.

USA Liberty Act Won’t Fix What’s Most Broken with NSA Internet Surveillance

A key legal linchpin for the National Security Agency’s vast Internet surveillance program is scheduled to disappear in under 90 days. Section 702 of FISA—enacted in 2008 with little public awareness about the scope and power of the NSA’s surveillance of the Internet—supposedly directs the NSA’s powerful surveillance apparatus toward legitimate foreign intelligence targets overseas. Instead, the surveillance has been turned back on us. Despite repeated inquiries from Congress, the NSA has yet to publicly disclose how many Americans are impacted by this surveillance. 

With the law’s sunset looming, Congress is taking up the issue. The USA Liberty Act, introduced by Representatives Bob Goodlatte (R-Va.), John Conyers (D-Mich.), Jim Sensenbrenner (R-Wis.), and others, may offer a chance to address some of the worst abuses of NSA Internet surveillance even as it reauthorizes some components of the surveillance for another six years.

But the first draft of the bill falls short.

The bill doesn’t effectively end the practice of “backdoor searching,” when government agents—including domestic law enforcement not working on issues of national security—search through the NSA-gathered communications of Americans without any form of warrant from a judge. It doesn’t institute adequate transparency and oversight measures, and it doesn’t deal with misuse of the state secrets privilege, which has been invoked to stave off lawsuits against mass surveillance.  

Perhaps most importantly, the bill won’t curtail the NSA’s practices of collecting data on innocent people. 

The bill does make significant changes to how and when agents can search through data collected under 702. It also institutes new reporting requirements, new defaults around data deletion, and new guidance for amicus engagement with the FISA Court. But even these provisions do not go far enough. 

Congress has an opportunity and a responsibility to rein in NSA surveillance abuses. This is the first time, since 2013 reporting by the Washington Post and the Guardian changed the worldwide perception of digital spying, that Congress must vote on whether to reauthorize Section 702. Before this debate moves ahead, leaders in the House Judiciary Committee should fix the shortcomings in this bill. 

The Problems of 702

Section 702 is supposed to give the NSA authority to engage in foreign intelligence collection. The NSA is only allowed to target non-Americans located outside U.S. borders. This legal authority has been the basis for two controversial data collection programs:

  • Upstream surveillance: data collection that siphons off copies of digital communications directly from the “Internet backbone,” the high-capacity fiber-optic cables run by telecommunications companies like AT&T that transmit the majority of American digital communications.
  • PRISM (also known as “downstream surveillance”): data collection gathered from the servers of major Internet service providers, such as Google, Facebook, and Apple.

These programs flourished under President Bush and President Obama. As the Washington Post reported, the NSA’s director took an expansive view on data collection:

“Rather than look for a single needle in the haystack, his approach was, ‘Let’s collect the whole haystack,’ ” said one former senior U.S. intelligence official who tracked the plan’s implementation. “Collect it all, tag it, store it. . . . And whatever it is you want, you go searching for it.”

Unfortunately, the Liberty Act won’t address most of these fundamental problems.  Here’s an analysis of some of the key provisions in the bill, and we’ll have future articles exploring specific topics in more detail.

Leaving the Backdoor Ajar

Agents for the NSA, CIA, and FBI have long rifled through the communications collected under Section 702, which include American communications, as well as the communications of foreigners who have no connection to crime or national security threats. With no approval from a judge, they’re able to search this database of communications using a range of personal identifiers, then review the contents of communications uncovered in those searches. Government agents can then use these results to build a case against someone, or they may simply review it without prosecution.

Ordinarily, if the FBI wants to intercept or collect a U.S. person’s communications, they must first get permission from a judge. But as a result of Section 702, the FBI today reviews NSA-collected communications of U.S. persons without permission from a judge. Privacy advocates call this the “backdoor search loophole.” 

This practice violates the Fourth Amendment right to privacy against unreasonable searches and seizures. And it can be difficult to prove because government agents may not disclose when they use evidence from the 702 database in prosecutions or for any other purposes. 

The first draft of the Liberty Act doesn’t resolve the problem. It still allows government agents—including domestic law enforcement agents—to query the 702 database, including using identifiers associated with American citizens, such as the email address of an American. The main improvement is that when an agent conducts a query looking for evidence of a crime, she must obtain a probable cause warrant from a judge to access the results. 

But the warrant requirement is limited due to a number of troubling carve-outs. First, this court oversight requirement won’t be triggered except for those searches conducted to find evidence of a crime. No other searches for any other purposes will require court oversight, including when spy agencies search for foreign intelligence, and when law enforcement agencies explore whether a crime occurred at all.

Metadata—how many communications are sent, to whom, at what times—won’t require court oversight at all.  In fact, the Liberty Act doesn’t include the reforms to metadata queries the House had previously passed (which unfortunately did not pass the Senate). In the Massie-Lofgren Amendment, which passed the House twice, agents who conducted queries for metadata would be required to show the metadata was relevant to an investigation. That relevance standard is not in the Liberty Act.

Finally, some may interpret vague language in the bill as putting responsibility for assessing probable cause in the hands of the Attorney General, the main governmental prosecutor, rather than in the hands of the FISA Court. This language should be clarified to ensure the judge’s role in approving the applications is the same as in other FISA proceedings.

Targeting Procedures

The bill will require the NSA to exercise “due diligence in determining whether a person targeted is a non-United States person reasonably believed to be located outside of the United States,” and requires agents to consider the “totality of the circumstances” when making that evaluation.

At face value, this sounds promising. We do want the NSA to exercise due diligence when evaluating targets of surveillance. However, this provision is more of a fig leaf than a real fix, because even if targeting is improved, it won’t resolve the problem of Americans’ communications being collected. Right now, countless Americans are surveilled through so-called “incidental collection.” This means that while the official target was a non-American overseas, American communications are swept up as well. Even though Americans were never the intended “target,” their emails, chats, and VOIP calls end up in a database accessible to the NSA, FBI, and others. Tightening up targeting won’t address this problem.

In addition, the bill doesn’t change the NSA’s practice of intercepting communications of countless innocent foreigners outside the United States. People outside our national borders are not criminals by default and should not be treated as if they were. If the United States wants to uphold our obligations to human rights under the International Covenant on Civil and Political Rights, we must respect the basic privacy and dignity of citizens of other countries. That means not vacuuming up as many communications as possible for all foreigners overseas. This is an especially pressing issue now, as the European Union decides whether to limit how European data can be held by American companies. The recently enacted Privacy Shield falls short of the privacy commitments enshrined in European law. 

Retention of Communications

After the NSA uses Section 702 to collect vast quantities of communications, the NSA stores these records for years to come. Every day the NSA holds these sensitive records is a day they can be misused by rogue government employees or deployed by agency leadership in new ways as part of inevitable “mission creep.” That’s why privacy advocates call for legislation that would require the NSA to purge these Section 702 communications by a fixed deadline, except for specific communications reasonably determined by analysts to have intelligence or law enforcement value.

Unfortunately, the Liberty Act does not solve this problem. Rather, it would only require that if the NSA determines that a communication lacks foreign intelligence value, then the NSA must purge it within 90 days. However, it’s unclear how often the NSA reviews its collected data to assess its foreign intelligence value. Since the bill requires no review, this provision may have little practical effect.

Whistleblowers Left Unprotected

Whistleblowers like Thomas Drake, Mark Klein, Bill Binney, and Edward Snowden were fundamental to the public’s understanding of NSA surveillance abuses. But they risked their careers and often their freedom in the process. The United States has a pressing need to improve protections for whistleblowers acting in the public good—including federal contractors who may be witness to wrongdoing.

The Liberty Act includes a section that would extend certain whistleblower protections to federal contractors. However, these protections only apply to “lawful disclosure” to a handful of government officers, such as the Director of National Intelligence. It does not provide any protection when a whistleblower speaks to the media or to advocacy organizations such as EFF.

Furthermore, the bill only protects whistleblowers against “personnel action,” so whistleblowers could still face criminal prosecution. The Espionage Act—a draconian law from 1917 with penalties including life in prison or the death penalty—has become the tool du jour to intimidate and punish public-interest whistleblowers. The Liberty Act will provide whistleblowers no protection against prosecution under the Espionage Act.

To make matters worse, the bill also creates new penalties for the unauthorized removal or retention of classified documents, including when done negligently. This will likely be another tool used to go after whistleblowers. This section of the bill must be significantly narrowed or cut. 

Ending “About” Collection 

The National Security Agency announced in April the end of a controversial form of spying known colloquially as “about surveillance.” After collecting data directly from the backbone of the Internet and doing a rough filter, government agents use key selector terms about targeted persons to search through this massive trove of data. In the past, these searches would not merely search the address lines (the to and from section of the communications) but would directly search the full contents of the communications, so that any mention of a selector in the body of the email would be returned in the results. Thus, communications of people who were not surveillance targets, and were not communicating with surveillance targets, were included in the results. 

The NSA was unable to find a way to conduct this type of “about” searching while adhering to restrictions imposed by the FISA Court, and thus the agency discontinued the practice in April. However, this is currently a voluntary policy, and the agency could begin again. In fact, NSA Director Mike Rogers testified before Congress in June that he might recommend that Congress reinstitute the program in the future.

The Liberty Act codifies the end of “about surveillance.” It provides that the NSA must limit its targeting “to communications to or from the targeted person.” While the NSA’s upstream program will still collect the communications passing through the Internet backbone, including the communications of vast numbers of innocent U.S. and foreign citizens, the end of “about” surveillance will reduce the number of communications stored in the 702 database. 

Other Positive Changes in the Bill 

Critically, unlike some other pending reauthorization proposals, the Liberty Act will maintain Section 702’s “sunset,” ensuring that Congress must review, debate, and vote on this issue again in six years. Permanent reauthorization, which we strongly oppose, would prevent this Congressional check on executive overreach.

The Liberty Act makes some other modest improvements to the NSA’s surveillance practices. It gives the Privacy and Civil Liberties Oversight Board the ability to function without an appointed chair, which has been a persistent problem with this accountability body. It also puts in place new reporting requirements. 

The bill would require the FISA Court to appoint an amicus curiae to assist it in reviewing the annual “certification” from the Attorney General and the Director of National Intelligence regarding the NSA’s Section 702 targeting and minimization procedures. This would be a helpful check on this currently one-sided process. However, the FISA Court could dispense with this check whenever it found the amicus appointment “not appropriate” – a nebulous test that could neuter this new safeguard.

A Few More Missing Pieces 

Many vital fixes to the worst surveillance abuses of the NSA are missing from this bill. 

Congress should clear a pathway for individuals to contest privacy abuses by the NSA. This includes ensuring that Americans whose data may have been “incidentally” collected by the NSA under Section 702 have legal standing to go to court to challenge this violation of their constitutional rights. It also requires an overhaul of the controversial state secrets privilege, a common law doctrine that government agencies have invoked to dismiss, or refuse to provide evidence in, cases challenging mass surveillance.

Congress should crack down on “incidental collection,” and ensure the communications of innocent Americans are not collected in the first place. 

Finally, we need to empower the FISA court to review and approve the targets of NSA surveillance. Currently, the NSA receives only general guidelines from the FISA Court, with no individual review of specific targets and selector terms. This means the NSA has little obligation to defend its choice of targets, and there is little recourse when agents sweep in inappropriate targets.

Next steps for the Judiciary Committee 

Congress still has time to get this right. This bill hasn’t gone to markup yet, and the Judiciary Committee is likely to amend the bill before passing it to the floor. We urge the Judiciary Committee members to make changes to the bill to address these shortcomings.

As public awareness of NSA surveillance practices has grown, so too has public outrage. That outrage is the fuel for meaningful change. We passed one bill to begin reining in surveillance abuses in 2015, and from that small victory springs the political will for the next, more powerful reform. Join EFF in calling on Congress to rein in these surveillance abuses, and defend privacy for Internet users of today and in the years to come. 

Speak out.

Tell Congress: It’s time to let the sun set on mass Internet spying

With assistance from Adam Schwartz.

Q&A with Professor Xiaoxing Xi, Victim of Unjust Surveillance

Professor Xiaoxing Xi, a physics professor at Temple University, was the subject of government surveillance under a FISA order. During September’s Color of Surveillance Hill briefing, Professor Xi told his story of the devastating impact of government surveillance on his life.

[Embedded video from youtube-nocookie.com: https://www.youtube-nocookie.com/embed/4i6r454XxFc]

Professor Xi faced a prosecution that was later dropped because there was no evidence that he had engaged in any wrongdoing. Since that invasive surveillance, he has become an outspoken advocate against race-based surveillance and prosecution.

We asked Professor Xi to elaborate on the surveillance against him and the effect it had on him, his family, and his scientific work. 

Q: People assume their private communications are not visible to others, but it’s become more and more clear that the government is surveilling countless Americans. How did you feel when you learned that the government had been reading your private emails, listening to your private phone calls, and conducting electronic surveillance? 

It was frightening. I knew from the beginning that their charges against me were completely wrong, but we were fearful till the end that they might twist something I wrote in my emails or something I said over the phone to send me to jail. I also felt like I was being violated. When you lose your privacy, it’s like being forced to walk around naked.

Q: Does knowing you had been surveilled cause you concern now, years later? Do you still worry you’re under surveillance?

Yes, my whole family are still seriously concerned about our emails being read and phone calls being listened to. People tell us that it is very unlikely we are still being surveilled, and they are probably right. Once violated, it is very difficult to shake off the fear. We watch every word we write and say, so that we don’t give them excuses to “pick bones out of an egg,” and life is very stressful like this.

Q: Your children were still young when this happened, especially your daughter. How did your family feel about all this? How were they affected?

They were shaken by guns being pointed at them and seeing me snatched away in handcuffs. Everyone was traumatized by this experience, like the sky was falling upon us. My wife was very courageous, trying to shield the children from the harm, even though she herself was under tremendous stress. My elder daughter was a chemistry major, and now she works in a civil rights organization trying to raise the awareness of people about the injustices immigrants face. My younger daughter tries to go about her life like nothing has happened, but we worry about the long term effect on her.

Q: How has your scientific work been affected by this horrible and unjust surveillance and prosecution?

It damaged my scientific research significantly. My reputation is now tainted and the opportunities for me to advance in the scientific community are more limited. My current research group is just a tiny fraction of what I used to have. In addition, I worry about routine academic activities being misconstrued by the government and I am scared to put my name on forms required for obtaining funding and managing research. 

Add your voice. Join EFF in speaking out against mass surveillance. 

Speak Out

California Governor Signs Bill to Defend Against Religious Registries

On the last day to act on legislation in 2017, California Gov. Jerry Brown signed a bill creating a firewall between the state’s data and any attempt by the federal government to create lists, registries, or databases based on a person’s religion, nationality, or ethnicity. 

S.B. 31, authored by Sen. Ricardo Lara, was one of the earliest bills introduced by the legislature to oppose discriminatory policies floated by Pres. Donald Trump and his surrogates during the 2016 campaign, and a direct response to their support of a so-called “Muslim Registry.” Although the bill places California at odds with the White House, both parties in the California Senate unanimously approved it, as did an overwhelming bipartisan majority in the Assembly.

The bill prohibits state and local government officials from sharing personal data with the federal government for the purposes of creating these kinds of registries or using government resources to support law enforcement or immigration enforcement activities based on religious beliefs, practices, or affiliations, or national origin or ethnicity. Police authorities are also prohibited from investigating or enforcing a requirement to register with such a registry.  

In addition, the legislation prohibits state and local law enforcement agencies from collecting information on a person’s religious beliefs or practices, except in two narrow situations: when there is a “clear nexus” between criminal activity and the religious information, or when there is a need to provide religious accommodations (e.g., providing Kosher or Halal food in a detention setting).

With S.B. 31, Californians have ensured that the state does not repeat the mistakes of the last century, when the U.S. Census Bureau shared confidential data with the military for the purposes of interning Japanese Americans during World War II. 

We are grateful to Sen. Lara for championing this important legislation, to Gov. Brown for signing it, and to the 700 EFF supporters who emailed their legislators to defend our data from being used to persecute our communities.

California Police and Civil Liberties Groups Agreed on a Simple Transparency Measure. Gov. Brown Vetoed It Anyway.

California Gov. Jerry Brown used the weekend to veto one of 2017’s last remaining bills to shine light on police practices. 

S.B. 345 was pretty straightforward: every law enforcement agency would have to upload its policies and training materials to its public website—but only documents that would be available anyway under the California Public Records Act (CPRA). The bill had uncommon support from both law enforcement associations and civil liberties organizations, like EFF and the ACLU of California.

Some of S.B. 345’s supporters. Source: Senate Analysis

So why did Brown veto it? 

“The bill is too broad in scope and vaguely drafted. I appreciate the author’s desire for additional transparency of police practices and local law enforcement procedures, but I believe this goal can be accomplished with a more targeted and precise approach,” he wrote in his rejection letter [PDF]. 

We’re not quite sure what he’s talking about. The bill was elegant and short and specified exactly what documents it applied to: “current standards, policies, practices, operating procedures, and education and training materials that would otherwise be available to the public if a request was made pursuant to the California Public Records Act.” If he has a better idea, we’d love to hear it.

Sadly, S.B. 345 was just the last of a series of failures by California leadership to enhance government transparency this session.

The legislature failed to pass S.B. 21, which would have shined light more narrowly on surveillance technologies. Lawmakers also gutted a measure to penalize agencies that intentionally and improperly stymie public records requests. Yet lawmakers somehow found the will to pass legislation exempting even more documents from the CPRA. And now that Brown has signed A.B. 492, independent companies that market public record research will have to include about as many disclosures and disclaimers as a pharmaceutical company advertising prescription drugs.

Californians deserve much better. The sun should shine as brightly on our government as it does on our beaches. 

Along with the other transparency measures that fell short this session, we mourn the death of S.B. 345. We thank its sponsor, Sen. Steven Bradford, and all the transparency allies who urged the governor to sign this bill. 

Surviving Fileless Malware: What You Need to Know about Understanding Threat Diversification

Businesses and organizations that have adopted digitalization have not only become more agile, but they’ve also significantly optimized budgets while boosting competitiveness. Despite these advances in performance, the adoption of these new technologies has also increased the attack surface that cybercriminals can leverage to deploy threats and compromise the overall security posture of organizations.

The traditional threat landscape used to involve threats designed either to run covertly as independent applications on the victim’s machine, or to compromise the integrity of existing applications and alter their behavior. Because such threats are commonly referred to as file-based malware, traditional endpoint protection solutions have incorporated technologies designed to scan files written to disk before they execute.

File-based vs. Fileless

Some of the most common attack techniques involve tricking victims into downloading a malicious application that silently runs in the background and tracks the user’s behavior, or exploiting a vulnerability in a commonly installed piece of software to covertly download additional components and execute them without the victim’s knowledge.

Traditional threats must make it onto the victim’s disk before executing the malicious code. Signature-based detection exists specifically for this reason, as it can uniquely identify a file that’s known to be malicious and prevent it from being written or executed on the machine. However, new mechanisms such as encryption, obfuscation, and polymorphism have rendered traditional detection technologies obsolete, as cybercriminals can not only manipulate the way a file looks for each individual victim, but also make it difficult for security scanning engines to analyze the code within.

Traditional file-based malware is usually designed to gain unauthorized access to the operating system and its binaries, normally creating or unpacking additional files and dependencies, such as .dll, .sys or .exe files, that have different functions. Such malware can also install itself as drivers or rootkits to take full control of the operating system and, if it can obtain a valid digital certificate, avoid triggering traditional file-based endpoint security technologies. One such piece of file-based malware was the highly advanced Stuxnet, designed to infiltrate a specific target while remaining persistent. It was digitally signed and had various modules that enabled it to covertly spread from one victim to another until it reached its intended target.

Fileless malware is completely different from file-based malware in terms of how the malicious code is executed and how it dodges traditional file-scanning technologies. As the term implies, fileless malware does not require any file to be written to disk in order to execute. The malicious code may be executed directly within the memory of the victim’s computer, meaning that it will not persist after a system reboot. However, cybercriminals have adopted various techniques that combine fileless abilities with persistence. For example, malicious code placed within registry entries and executed each time Windows boots allows for both stealth and persistence.
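
As a rough illustration of the defensive side, the following Python sketch enumerates the common Windows Run keys and flags autorun values that invoke PowerShell with encoded commands – a pattern often associated with registry-resident fileless malware. The key path and keyword list here are illustrative assumptions, not a complete detection rule.

    # Minimal detection sketch (Windows-only), using Python's standard-library
    # winreg module. The hives, key path, and keywords are illustrative only.
    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
    HIVES = [winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER]
    SUSPICIOUS = ("powershell", "-enc", "-encodedcommand", "frombase64string")

    def scan_run_keys():
        findings = []
        for hive in HIVES:
            try:
                key = winreg.OpenKey(hive, RUN_KEY)
            except OSError:
                continue  # key not present or not readable
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under this key
                if any(token in str(value).lower() for token in SUSPICIOUS):
                    findings.append((name, value))
                index += 1
        return findings

    for name, value in scan_run_keys():
        print(f"Suspicious autorun entry {name!r}: {value}")

Real endpoint products also watch script interpreters at runtime, but even a simple audit like this makes registry-based persistence harder to hide.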

The use of scripts, shellcode, and even encoded binaries is not uncommon for fileless malware leveraging registry entries, as traditional endpoint security mechanisms usually lack the ability to scrutinize scripts. Because traditional endpoint security scanning tools mostly focus on static file analysis to separate known from unknown malware samples, fileless attacks can go unnoticed for a very long time.

The main difference between file-based and fileless malware is where and how its components are stored and executed. The latter is becoming increasingly popular as cybercriminals have managed to dodge file-scanning technologies while maintaining persistence and stealth.

Delivery mechanisms

While both types of attacks rely on the same delivery mechanisms, such as infected email attachments or drive-by downloads exploiting vulnerabilities in browsers or commonly used software, fileless malware is usually script-based and can leverage existing legitimate applications to execute commands. For example, a script embedded in a booby-trapped Word document can be handed off to PowerShell – a native Windows tool – for execution. The resulting commands could either send detailed information about the victim’s system to the attacker or download an obfuscated payload that the local traditional security solution can’t detect.

Other possible examples involve a malicious URL that, once clicked, redirects the user to websites that exploit a Java vulnerability to execute a PowerShell script. Because the script itself is just a series of legitimate commands that may download and run a binary directly within memory, traditional file-scanning endpoint security mechanisms will not detect the threat.

These elusive threats are usually targeted at specific organizations and companies with the purpose of covert infiltration and data exfiltration.

Next-gen endpoint protection platforms

Next-gen endpoint protection platforms are usually security solutions that combine layered security – which is to say file-based scanning and behavior monitoring – with machine learning technologies and threat detection sandboxing. Some technologies rely on machine learning algorithms alone as a single layer of defense, whereas other endpoint protection platforms use detection technologies that involve several security layers augmented by machine learning. In the latter case, the algorithms focus on detecting advanced and sophisticated threats pre-execution, during execution, and post-execution.

A common mistake today is to treat machine learning as a standalone security layer capable of detecting any type of threat. Relying on an endpoint protection platform that uses only machine learning will not harden the overall security posture of an organization.

Machine learning algorithms are designed to augment security layers, not replace them. For example, spam filtering can be augmented through the use of machine learning models, and detection of file-based malware can also use machine learning to assess whether unknown files could be malicious.
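
To make the “augment, not replace” point concrete, here is a hedged sketch in which a signature lookup runs first and a simple classifier, trained on static file features, only weighs in for unknown files. The hash set, feature choices, training rows, and 0.8 threshold are all hypothetical placeholders, not a production design.

    # Sketch: ML as a supplement to a signature layer (scikit-learn assumed
    # installed). All data below is placeholder material for illustration.
    import hashlib
    from sklearn.ensemble import RandomForestClassifier

    KNOWN_BAD = {"<sha256-of-known-malware>"}  # placeholder signature set

    def static_features(data: bytes) -> list:
        # Toy static features: size, printable-byte ratio, byte diversity.
        size = len(data)
        printable = sum(32 <= b < 127 for b in data) / max(size, 1)
        diversity = len(set(data)) / 256
        return [size, printable, diversity]

    # Hypothetical labeled corpus: 0 = benign, 1 = malicious.
    X_train = [[1200, 0.92, 0.18], [480000, 0.31, 0.97]]
    y_train = [0, 1]
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    def assess(data: bytes) -> str:
        if hashlib.sha256(data).hexdigest() in KNOWN_BAD:
            return "block (signature match)"  # fast, exact layer runs first
        score = model.predict_proba([static_features(data)])[0][1]
        return "sandbox for analysis" if score > 0.8 else "allow"

    print(assess(b"MZ\x90\x00 placeholder file contents"))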

Signature-less security layers are designed to offer protection, visibility, and control when it comes to preventing, detecting, and blocking any type of threat. Considering these new attack methods, it’s highly recommended that next-gen endpoint security platforms protect against attack tools and techniques that exploit unpatched known vulnerabilities – and of course, unknown vulnerabilities – in applications. 

It’s important to note that traditional signature-based technologies are not dead and should not be discarded. They’re an important security layer, as they’re accurate and quick to validate whether a file is known to be malicious. Merging signature-based, behavioral, and machine learning security layers creates a security solution that’s not only able to deal with known malware, but can also tackle unknown threats, boosting the overall security posture of an organization. This comprehensive mix of security technologies is designed not only to increase the overall cost of attack for cybercriminals, but also to offer security teams deep insight into what types of threats usually target their organization and how to mitigate them accurately.

About the author: Bogdan Botezatu is living his second childhood at Bitdefender as senior e-threat analyst. When he is not documenting sophisticated strains of malware or writing removal tools, he teaches extreme sports such as surfing the Web without protection or how to rodeo with wild Trojan horses.


Why Cloud Security Is a Shared Responsibility

Security professionals protect on-premises data centers with wisdom gained through years of hard-fought experience. They deploy firewalls, configure networks and enlist infrastructure solutions to protect racks of physical servers and disks.

With all this knowledge, transitioning to the cloud should be easy. Right?

Wrong. Two common misconceptions will derail your move to the cloud:

  1. The cloud provider will take care of security
  2. On-premises security tools work just fine in the cloud

So, if you’re about to join the cloud revolution, start by answering these questions: how are security responsibilities shared between clients and cloud vendors? And why do on-premises security solutions fail in the cloud?

Cloud Models and Shared Security

A cloud model defines the services a provider delivers. It also defines how the provider splits security responsibilities with customers. Sometimes the split is obvious: cloud providers are, of course, tasked with physical security for their facilities. Cloud customers, obviously, control which users can access their apps and services. After that, the picture can get a little murky.

The following three cloud models don’t comprehensively account for every cloud variation, but they help clarify who is responsible for what:

Software-as-a-Service (SaaS): SaaS providers are responsible for the hardware, servers, databases, data, and the application itself. Customers subscribe to the service and end users interact directly with the application(s) provided by the SaaS vendor. Salesforce and Office 365 are two well-known SaaS offerings.

Platform as a Service (PaaS): PaaS vendors offer a turnkey environment for higher-level programming. The vendor manages the hardware, servers, and databases while the PaaS customer writes the code needed to deliver custom applications. Engine Yard and Google App Engine are examples of PaaS solutions.

Infrastructure as a Service (IaaS): An IaaS environment lets customers create and operate an end-to-end virtualized infrastructure. The IaaS vendor manages all physical aspects of the service as well as the virtualization services needed to build solutions. Customers are responsible for everything else – the applications, workloads, or containers deployed in the cloud. Amazon Web Services (AWS) and Microsoft Azure are popular IaaS solutions.

The key to understanding shared security lies in understanding who makes the decisions about a specific aspect of the cloud solution. For example, Microsoft calls the shots on Excel development for their Office 365 SaaS solution. Vulnerabilities in Excel are, therefore, Microsoft’s responsibility. In the same spirit, security vulnerabilities in an app you create on a PaaS service are your responsibility – but operating system vulnerabilities are not.

This all seems like common sense – but it means you’ll need to understand your cloud model to understand your security responsibilities. If you’re securing an IaaS solution you’ll need to take a broad perspective. Everything from server configurations to container provenance can impact your security posture – and they are your responsibility.

Security “Lift and Shift”

An IaaS solution can virtually replicate on-premises infrastructure in the cloud. So lifting and shifting your on-premises security to the cloud may seem like the best way to get up and running. But that approach has led many cloud transitions to ruin. Why? The cloud needs different security approaches for three important reasons:

Change Velocity

Hardware limits how fast a traditional data center can change. The cloud eliminates physical constraints and changes how we think about servers and storage. Cloud solutions, for example, scale by instantly and automatically bringing new servers online. But for traditional security tools, this cloud velocity is chaos. Metered usage costs rapidly spin out of control. Configuration and policy management becomes an overwhelming task. Interdependent security processes become brittle and unreliable.

Network Limitations

On-premises data centers take advantage of stable networks to establish boundaries. In the cloud, networks are temporary resources. Virtual entities join and leave instantaneously and across geographical boundaries. Network identifiers (like IP addresses) no longer provide the same stable control points as they once did and encryption makes it harder to observe application behavior from the network. Network-centric security tools leave cloud solutions vulnerable to lateral movement by attackers.

Cloud Complexity

When the cloud removes barriers to velocity, the number of machines, servers, containers, and networks explodes. As complex as on-premises data centers can be, cloud solutions are far worse: the number of cloud entities, configuration files, event logs, locations, networks, and connections is too much for even expert human analysis. Analyzing security incidents, assessing the impact of a breach, or even simply tracing an administrator’s activities isn’t possible with traditional data center security tools.

Cloud Security Needs New Solutions

Moving to the cloud is more than a simple lift-and-shift of existing servers and apps to a different set of servers. Granted, offloading infrastructure responsibilities to your provider is a huge win. Without capital expenses and the inertia of hardware, IT organizations do more with less, faster.

Fortunately, new cloud-centric security solutions make your move to the cloud easier. Three key capabilities can keep you out of trouble as you transition: automation, an expanded focus on apps and operations (in addition to networks), and behavioral baselining.

Automation makes it possible to keep up with cloud changes (and DevOps teams) during deployment, operations, and incident investigations. Moving the security focus up the stack reduces the impact of network impermanence in the cloud and delivers better visibility into high-level application and service operations. And behavioral baselining makes short work of otherwise tedious rule and policy development.
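
As a toy illustration of behavioral baselining, the sketch below learns which (parent process, child process) pairs each host exhibits during a training window and flags pairs never seen before. The event format is invented for the example; real products add statistical scoring, decay, and peer-group comparison.

    # Toy baselining sketch. Events are invented (host, parent, child) tuples.
    from collections import defaultdict

    def build_baseline(training_events):
        baseline = defaultdict(set)
        for host, parent, child in training_events:
            baseline[host].add((parent, child))
        return baseline

    def anomalies(baseline, live_events):
        # Flag process launches never observed during the training window.
        return [(h, p, c) for h, p, c in live_events
                if (p, c) not in baseline[h]]

    training = [("hr-laptop-07", "explorer.exe", "outlook.exe"),
                ("hr-laptop-07", "explorer.exe", "winword.exe")]
    live = [("hr-laptop-07", "winword.exe", "powershell.exe")]

    print(anomalies(build_baseline(training), live))
    # -> [('hr-laptop-07', 'winword.exe', 'powershell.exe')]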

With the right technologies, and an understanding of differences, security pros can easily make the move to the cloud.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.


Pass the Protecting Data at the Border Act

This blog post was first published in The Hill on September 28, 2017.

The federal government sees the U.S. border as a Constitution-free zone. The Department of Homeland Security (DHS) claims that border officers—from Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE)—can freely ransack travelers’ smartphones and laptops and the massive troves of highly personal information they contain. This practice is an unconstitutional invasion of privacy and free speech rights. Congress can and should fix this problem by enacting the bipartisan Protecting Data at the Border Act (S. 823 and H.R. 1899).

The need for reform is urgent. In the last two years, DHS more than tripled the number of border device searches. It conducted about 8,500 in fiscal year 2015, about 19,000 in fiscal year 2016, and is on track to conduct 30,000 in fiscal year 2017. DHS’s written policies specify that border officers may search electronic devices “with or without individualized suspicion.”

Innocent Americans from all walks of life have been forced to turn over and/or unlock their devices for data searches at the border. The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) recently filed suit against the government on behalf of 11 Americans whose devices were searched without a warrant at the border. None of the plaintiffs have been subsequently charged with any violation of law.

Our smartphones and laptops contain our emails, text messages, photos and browsing history. They document our travel patterns, shopping habits and reading preferences. They expose our love lives, health conditions, and religious and political beliefs. They reveal whom we know and associate with. The EFF/ACLU lawsuit argues that warrantless devices searches at the border violate travelers’ rights to privacy under the Fourth Amendment, and freedom of speech and press, private association, and anonymity under the First Amendment.

The Protecting Data at the Border Act was introduced in April by Sens. Ron Wyden (D-Ore.) and Rand Paul (R-Ky.), and Reps. Jared Polis (D-Colo.), Blake Farenthold (R-Texas), and Adam Smith (D-Wash.). It would protect our digital privacy and free speech rights in several important ways.

First, the bill would require government officials to obtain a judicial warrant based on probable cause before accessing the contents of an electronic device in the possession of a U.S. citizen or lawful permanent resident at the U.S. border. There’s nothing new about protecting the significant privacy interests that people have in their electronic devices. In 2014, the U.S. Supreme Court ruled in the landmark case Riley v. California that the Fourth Amendment requires police to get a warrant before searching cell phones of arrested persons, stating that: “With all they contain and all they may reveal, they hold for many Americans the privacies of life.”

Second, the bill would prohibit border officers from denying entry into or exit from the United States by a citizen or green card holder if they refuse to provide their device password or unlock their device.

Third, the bill would address the problem of border officers coercing travelers into letting officers search their devices. Officers wear uniforms, carry weapons, restrict travelers in unfamiliar areas, and seize their passports. Travelers are often exhausted after a lengthy international trip, or in danger of missing a connecting flight. Officers ask pointed questions, such as: “If you have nothing to hide, then why don’t you let me search your phone?” When a traveler complies, officers may later claim that the travelers voluntarily “consented” to the search. The bill would require border officers to notify travelers—before requesting consent to search their devices—that they have the right to refuse. The bill also would require any consent to be written.

Fourth, the bill would require border officers to have probable cause to believe that the traveler committed a felony before confiscating their electronic device. Today, border officers confiscate devices for weeks or months at a time, based on no suspicion at all.

Fifth, the bill would forbid border officers from keeping information they obtained from a traveler’s device, unless the information on the device amounts to probable cause that the traveler committed a crime. Under the government’s current policies, border officers may keep information even if there is no suspicion of crime.

Sixth, the bill contains a strong enforcement tool: evidence gathered in violation of these rules would not be admissible in court.

Finally, the bill would require the government to gather and publish statistics regarding border searches of electronic devices, including how officers obtained access (e.g., via warrant or consent), the breakdown of U.S. versus non-U.S. persons whose devices were searched, the countries from which travelers arrived, and the perceived race and ethnicity of travelers subjected to these searches.

The border isn’t a Constitution-free zone. Congress should enact the Protecting Data at the Border Act. Everyone who opposes the federal government’s overreach should contact their senators and representatives, and urge them to co-sponsor and pass the bill.

Put Your S3 Buckets to the Test to Ensure Cloud Fitness

A striking aspect of many of the headline-grabbing data breaches is the relative ease with which hackers were able to get to sensitive data. We think of hackers running wildly complex algorithms and plotting sophisticated schemes, but when you encounter a data repository named "Access Keys" that doesn't require a password, it turns out your job is pretty easy.

AWS S3 buckets are getting the lion's share of the blame for many of these breaches. But like any asset, S3 buckets simply operate according to how they're configured and managed. And therein lies a problem representative of so many of the vulnerabilities faced by cloud users. Misconfigurations, poorly constructed access policies, lack of controls: these are just some of the issues that can open a cloud environment to bad actors, and all of this work is directed by humans, with the S3 buckets just doing what they're told. In an environment as dynamic as a typical enterprise cloud, humans aren't necessarily going to be able to keep track of every aspect of every asset. For these assets to function optimally and securely, organizations have to apply active management along with continuous scrutiny.

Within a cloud environment, many factors are occurring repeatedly and simultaneously. IT teams have to think about being both proactive and reactive in order to deal effectively with vulnerabilities and attacks. To support those efforts, they have processes and tools that prevent, monitor, and remediate, all in an effort to constantly thwart risk. While the potential for incoming issues is massive, the work required to mitigate that risk can be fairly simple; but like exercise and regular check-ups, it has to be done regularly and with purpose.

We know that default settings from AWS tend to be fairly permissive, and part of the problem in so many breaches relates to this permissive nature. But no customer should operate something so important to their environment without customizing it to their own needs. And no matter what their infrastructure needs are, the privacy of their data and that of their customers requires that they put their S3 buckets through fitness tests to ensure they are aware of, and in control of, how those buckets are functioning. Enterprises that want to effectively secure S3 buckets must recognize the liability involved if these get breached. There are some key aspects to how S3 objects and buckets operate, and security teams should be familiar with AWS settings and functionality before they move forward with implementing a security plan. These include access to buckets, user rights within buckets, and versioning and logging capabilities.

Access to your stored data is the logical place to start. There are settings in AWS that allow you to determine who can view lists of your S3 buckets, and who can see and edit your Access Control Lists (ACLs). If those settings give “All AWS Users” access, you are setting yourself up to be compromised. With global ACL permissions on, you allow anyone to grant wide permissions to your content; at best, you give them a detailed treasure map of which buckets may contain interesting data.

At the same time, while the breaches that make the news are all about hackers getting access to remove data, hackers putting data into your S3 buckets can be equally dangerous to your organization. If the Global PUT permission is enabled on any of your S3 buckets it means that anyone can place information into your S3 buckets. This may seem harmless, but someone with malicious intent could place content that would be harmful or embarrassing to your business into your buckets. It is best to only allow authorized users and systems to PUT to your S3 buckets. With the right permissions, a bad actor can also apply "global delete" to your repository which would wipe all the data contained therein. Requiring multi-factor authentication (MFA) in order to use that capability can ensure that CloudTrail logs and other sensitive data cannot be removed by an unauthorized user.
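
If you use AWS's boto3 SDK, a first-pass audit along these lines might look like the sketch below, which lists buckets whose ACLs grant anything to the global AllUsers or AuthenticatedUsers groups. A "WRITE" grant to either group is the global PUT (and delete) exposure described above. Note this checks ACLs only; bucket policies can grant access independently and need their own review.

    # Hedged audit sketch using boto3 (assumes AWS credentials are configured).
    import boto3

    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers": "everyone",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "all AWS users",
    }

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        for grant in s3.get_bucket_acl(Bucket=name)["Grants"]:
            uri = grant.get("Grantee", {}).get("URI")
            if uri in PUBLIC_GROUPS:
                print(f"{name}: grants {grant['Permission']} "
                      f"to {PUBLIC_GROUPS[uri]}")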

AWS customers should also be aware that the default settings do not enable versioning of S3 objects. Versioning is incredibly important: in the event of an object being overwritten or deleted, versioning keeps an instance of the object available to “roll back” to as a method of recovery. Additionally, with audit logging of your S3 buckets enabled, you will be able to get the details of all bucket activity. The logs are an important tool when troubleshooting issues or investigating an incident. Logging cannot be enabled retroactively, so it is important to start collecting audit logs as you set up your S3 buckets if you wish to keep tabs on bucket/object activity.
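
Both settings can be enabled programmatically. Here is a minimal boto3 sketch; the bucket names are placeholders, and the log-target bucket must already exist with S3 log-delivery permissions in place.

    # Sketch: turn on versioning and server access logging for a bucket.
    # "my-data-bucket" and "my-log-bucket" are placeholder names.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="my-data-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_logging(
        Bucket="my-data-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "my-log-bucket",
                "TargetPrefix": "my-data-bucket/",
            }
        },
    )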

Advice must be followed by action in order to become, and remain, fit in the cloud. While these measures are critical to attaining a basic level of security, your S3 buckets are always going to be a target because they store sensitive data. Continuous awareness through automated monitoring will provide the visibility needed to identify and fix vulnerabilities, and the layer of control needed to maintain safe and effective business operations.


Is Your “Father’s IAM” Putting You at Risk?

Identity and access management (IAM) is all about ensuring that the right people have the right access to the right resources, and that you can prove that all the access is right. But as any of us who are heavily involved in IAM know, that is much easier said than done. There’s a lot that goes into getting all those things “right.”

First you must set up the accounts that enable a user to get to the right stuff – that is often called provisioning (and its dangerous sister, de-provisioning). Second, in order for that account to grant the appropriate access, there has to be a concept of authorization which provides a definition for what is allowed and not allowed with that access. And third, there should be some way to make sure that provisioning and de-provisioning are done securely (and ideally efficiently), and that the associated authorization is accurate – i.e. everyone has exactly the access they need, nothing more and nothing less.

Everyone has been provisioning and de-provisioning since we first started networking PCs. And as soon as larger numbers of users began using those computers, the need for some concept of authorization followed. The problem is that the practices that worked so well in those relatively closed networks with relatively few users simply don’t cut it in today’s open (close to boundary-less), fluid, and modern networks. The result is loads of inefficiency, elevated risk, and the potential for catastrophic breaches.

In recent research sponsored by One Identity, the dangers of old-fashioned practices for provisioning, de-provisioning, and authorization were stripped bare before the world. Stated plainly, the practices and technologies that served you so well in the past are simply inadequate in today’s digitally transformed world.

Here are some of the key findings gleaned from responses from more than 900 IT-security professionals worldwide, with a little exposition on each:

  • 87% reported that they have dormant accounts and 71% were concerned about them – meaning that more than three-quarters of those interviewed have not de-provisioned accounts that are no longer needed, either because the user is no longer with the organization or has switched roles, and most of them are worried about it.
  • Only one-third expressed that they were “very confident” that they even know which dormant user accounts exist. So not only do they have dangerous entry points into their networks, most couldn’t even tell you which accounts they are.
  • 97% have a process for identifying dormant accounts, but only 19% have tools to help find them. In addition, 92% report that they regularly check for dormant accounts. This is where there is a disconnect: if the majority have dormant accounts and most have a process to find them, obviously the process is not working. In spite of best efforts (or, as I would say, because of old-fashioned de-provisioning practices), the risk is still there.

The risk is not in the fact that there are dormant accounts; the risk is what can be done with those hidden doors into your systems and data. Most high-profile breaches are the result of a bad actor compromising a legitimate user account, whether by gaining access through phishing or social engineering, or by hunting for and finding a dormant account that the organization doesn’t even know exists. Once in, a series of lateral moves and rights-escalation activities can result in access to the very systems and data that you are trying to protect.

So here’s where the second set of data becomes remarkably intriguing. We asked the same 900+ IT security professionals a series of questions about the rights and permissions that their users possess, and here were the big reveals:

  • Only one in four expressed that they were “very confident” that user rights and permissions are correct. That means three-quarters of our respondents were unsure of the fundamental aspect of access control – authorization. Any user with excessive rights (rights beyond what is necessary to do the job) is an easy path for bad actors to execute those lateral moves they are so good at.
  • Fewer than one-third are “very confident” that users are de-provisioned properly. By properly we mean fully and immediately (only 14% of respondents reported that users were de-provisioned immediately upon a change in status). De-provisioning is the process of turning off accounts and revoking rights when they are no longer needed. Poor de-provisioning, either through outdated and cumbersome manual processes or limited tools, is the primary cause of dormant accounts.
  • In fact, 95% reported that while they have a process for de-provisioning, it requires IT intervention. In other words, someone has to put hands on a keyboard to make it happen. Any amount of time that an unneeded account remains “open” is an invitation for disaster, as evidenced by so many of the high-visibility breaches over the past several years.

So what can be done? There are many ways to modernize these processes and get IAM right. Here are a few suggestions:

  1. Determine a single source of truth for authorization. Define business roles once and use them everywhere. And most importantly, let the line-of-business be the decision makers here. Many instances of inappropriate rights are simply the byproduct of IT doing the best they can with the knowledge they’ve been given. It’s all too common for the line-of-business to ask IT to “give Joe the same rights as Bill” when there is no oversight of what rights Bill has, how he got them, and whether they are still appropriate for the job he does.
  2. De-provision immediately and completely. Tools exist that can update permissions the instant a status changes in an authoritative data source. For example, as soon as an employee’s status in the HR system switches from active to inactive, that user’s access rights across every system in the enterprise (including cloud-based services) can be terminated immediately as well – effectively closing all those doors and eliminating dormant accounts (see the sketch after this list).
  3. Implement identity analytics. A new class of IAM solution called identity analytics will proactively and constantly evaluate your systems to find instances where user rights are out of alignment with what is “right.” These technologies quickly find dormant accounts, mis-provisioned accounts, and instances of rights elevation that are often the smoking gun in breach detection and prevention.
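
A minimal sketch of the de-provisioning flow from suggestion 2 might look like the following. The HR event shape and both downstream functions are illustrative stubs; a real deployment would call your directory’s and SaaS vendors’ actual APIs.

    # Illustrative event-driven de-provisioning. Everything here is a stub.
    from datetime import datetime, timezone

    def disable_directory_account(user_id: str) -> None:
        # Stub: e.g., set the disabled flag on the AD/LDAP account.
        print(f"[directory] disabled {user_id}")

    def revoke_cloud_entitlements(user_id: str) -> None:
        # Stub: e.g., deactivate SaaS seats and revoke sessions/tokens.
        print(f"[cloud] revoked entitlements for {user_id}")

    def on_hr_status_change(event: dict) -> None:
        # Fires the instant the authoritative HR record changes.
        if event.get("status") == "inactive":
            user = event["user_id"]
            disable_directory_account(user)
            revoke_cloud_entitlements(user)
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"de-provisioned {user} at {stamp}")

    on_hr_status_change({"user_id": "jdoe", "status": "inactive"})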

Just like the technology we rely on every day is evolving and its boundaries expanding, the identity and access management practices we use to secure access to those systems must evolve as well. As our survey reaffirmed, what worked well a few years ago is almost certainly inadequate given today’s realities. But there is hope: with simple shifts in responsibility, IAM practices, and IAM technologies, you can significantly reduce risk, modernize your business, and sleep better at night.

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.


SAP Cyber Threat Intelligence Report – October 2017

The SAP threat landscape is always growing, putting organizations of all sizes and industries at risk of cyberattacks. The idea behind the SAP Cyber Threat Intelligence report is to provide insight into the latest security threats and vulnerabilities.

Key takeaways

  • This set of SAP Security Notes consists of 30 patches, the majority of them rated medium.

  • A critical DoS vulnerability was found in the SAP Enqueue service that allows attackers to shut operations down; around 3,000 of these services are exposed to the internet.

  • SAP Mobile Platform vulnerabilities are on the rise: 4 issues in different components of the SAP Mobile infrastructure were patched.

SAP Security Notes – October 2017

SAP has released the monthly critical patch update for October 2017. This patch update includes 30 SAP Security Notes (17 SAP Security Patch Day Notes and 13 Support Package Notes). 9 of the patches are updates to previously released Security Notes.

15 of the Notes were released after the second Tuesday of the previous month and before the second Tuesday of this month.

5 of the released SAP Security Notes have a High priority rating. The highest CVSS score of the vulnerabilities is 7.7.


The most common vulnerability type is Information Disclosure.


DoS vulnerability in Enqueue service

One of the most critical loopholes fixed this month is a Denial of Service vulnerability in the SAP Standalone Enqueue Server, found by ERPScan researchers. Hackers can exploit this issue to shut business processes down, thereby compromising the company (details are provided below).

After a brief scan today, ERPScan’s Research and Threat Intelligence Team identified around 3,000 SAP systems with the Enqueue service available online, all of which face a high risk of cyberattack. The majority of these services are located in North America.

This is one of the most widespread SAP vulnerabilities this year so far.
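
To gauge your own exposure, a simple reachability check can show whether an Enqueue endpoint answers from outside your network. The sketch below assumes the standalone Enqueue Server listens in the conventional 32NN port range (3200 plus the two-digit instance number); verify the actual ports against your landscape, and only scan hosts you are authorized to test.

    # Hedged reachability check; the 32NN port convention is an assumption.
    import socket

    def open_enqueue_ports(host, instances=range(100), timeout=1.0):
        open_ports = []
        for nn in instances:
            port = 3200 + nn
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    print(open_enqueue_ports("sap-host.example.internal", instances=[0, 1, 2]))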

SAP Mobile Platform vulnerabilities

Nowadays, companies tend to use more business applications and increasingly involve mobile devices in their core business processes.

SAP, like any other large vendor, is also evolving toward greater mobility, and therefore provides solutions for mobile users to interact with business applications.

SAP Mobile Platform (or SMP) is a mobile enterprise application platform solution designed to monitor and manage applications installed on mobile phones and access business data.

This “mobilization” unintentionally opened doors to all the evils that come along with integration. The purpose of SMP is to provide business data to mobile devices without compromising enterprise cybersecurity.

This month, 4 issues in different components of the SAP Mobile infrastructure were patched. Among them are 3 Information Disclosure vulnerabilities in SAP NetWeaver Mobile Client and one possible leakage of sensitive data in SAP Mobile Platform SDK. The vulnerabilities allow attackers to gain access to critical data stored on mobile devices that use the SAP NetWeaver mobile client, such as passwords, keys, and other sensitive information.

SAP users are advised to implement security patches as they are released.

Issues that were patched with the help of ERPScan

This month, one critical vulnerability identified by ERPScan’s researcher Vahagn Vardanyan was closed.

Below are the details of the SAP vulnerability identified by the ERPScan team.

  • A Denial of Service vulnerability in SAP Standalone Enqueue (CVSS Base Score: 7.5). An update is available in SAP Security Note 2476937. An attacker can use the vulnerability to terminate a process of the vulnerable component, making the service unavailable for that time. This interruption negatively affects business processes, causes system downtime, and damages business reputation as a result.

Other critical issues closed by SAP Security Notes in October

The most dangerous vulnerabilities of this update can be patched by the following SAP Security Notes:

  • 2511453: SAP Mobile Platform SDK 3.0 has an Information Disclosure vulnerability (CVSS Base Score: 6.9). An attacker can exploit it to reveal additional information (system data, debugging information, etc.) that helps them learn about a system and plan further attacks. Install this SAP Security Note to prevent the risks.

  • 2517501: SAP ERP Funds Management Account Assignments has an Implementation Flaw vulnerability (CVSS Base Score: 6.3). Depending on the problem, an implementation flaw can cause unpredictable system behavior and troubles with stability and safety. Patches solve configuration errors, add new functionality, and increase system stability. Install this SAP Security Note to prevent the risks.

  • 2236258: Adobe Document Services has an XML External Entity vulnerability (CVSS Base Score: 5.5). An attacker can use it to send specially crafted, unauthorized XML requests that will be processed by the XML parser, gaining unauthorized access to the OS file system. Install this SAP Security Note to prevent the risks.

Advisories for these SAP vulnerabilities with technical details will be available in 3 months on erpscan.com. Exploits for the most critical vulnerabilities are already available in ERPScan Security Monitoring Suite.


Victory! California Just Reformed Its Gang Databases and Made Them More Accountable

Gov. Jerry Brown has signed A.B. 90, a bill that EFF advocated for to bring additional accountability and transparency to the various shared gang databases maintained by the State of California. With a campaign organized by a broad coalition of civil liberties organizations—including Youth Justice Coalition, National Immigration Law Center, and Urban Peace Institute—the much-needed reform was passed.

Why Reform was Desperately Needed

The California State Auditor found the state’s gang databases to be riddled with errors, containing records on individuals who should never have been included in the first place and information that should have been purged long ago. The investigation also found that the system lacks basic oversight safeguards, and went as far as saying that, due to the inaccurate information in the database, its crime-fighting value was “diminished.”

What the Reform Brings

The legislation brings a broad package of reforms. It codifies new standards and regulations for operating a shared gang database, including audits for accuracy and proper use. It also creates a new technical advisory committee composed of all stakeholders—including criminal defense representatives, civil rights and immigration experts, gang-intervention specialists, and a person personally impacted by being labeled a gang member—as opposed to just representatives from law enforcement. Further, the legislation ensures that the criteria for inclusion in the database, and how long the information is retained, are supported by empirical research.

Today, California has passed common-sense reforms that were desperately needed to protect its residents’ civil liberties. Californians should be proud.

Copyright Isn’t a Tool for Removing Negative Reviews

At EFF, we see endless attempts to misuse copyright law in order to silence content that a person dislikes. Copyright law is sadly less protective of speech than other speech regulations like defamation, so plaintiffs are motivated to find ways to turn many kinds of disputes into issues of copyright law. Yesterday, a federal appeals court rejected one such ploy: an attempt to use copyright to get rid of a negative review.

The website Ripoff Report hosts criticism of a variety of professionals and companies, who doubtless would prefer that those critiques not exist. In order to protect platforms for speech like Ripoff Report, federal law sets a very high bar for private litigants to collect damages or obtain censorship orders against them. The gaping exception to this protection is intellectual property claims, including copyright, for which a lesser protection applies.

One aggrieved professional named Goren (and his company) went to court to get a negative review taken down from Ripoff Report. If Goren had relied on a defamation claim alone, the strong protection of CDA 230 would protect Ripoff Report. But Goren sought to circumvent that protection by getting a court order seizing ownership of the copyright from its author for himself, then suing Ripoff Report’s owner for copyright infringement. We filed a brief explaining several reasons why his claims should fail, and urging the court to prevent the use of copyright as a pretense for suppressing speech.

Fortunately, the Court of Appeals for the First Circuit agreed that Ripoff Report is not liable. It ruled on a narrow basis, pointing out that the person who originally posted the review on Ripoff Report gave the site’s owners irrevocable permission to host that content. Therefore, continuing to host it could not be an infringement, even if Goren did own the copyright.

Goren paid the price for his improper assertion of copyright here: the appeals court upheld an award of over $100,000 in attorneys’ fees. The award of fees in a case like this is important both because it deters improper assertion of copyright, and because it helps compensate defendants who choose to litigate rather than settling for nuisance value simply to avoid the expense of defending their rights.

We’re glad the First Circuit acted to limit the ways that private entities can censor speech online.

Digital Trade Agreements Failing to Reflect Internet Community Input: UNCTAD

A hallmark of the new generation of trade agreements under negotiation, such as the North American Free Trade Agreement (NAFTA) and the Regional Comprehensive Economic Partnership (RCEP), is the inclusion of chapters on e-commerce or digital trade. But interest in using trade agreements to address issues such as data localization, disclosure of software source code, and platform safe harbors, isn’t restricted to these regional trade negotiations.

The same issues have also been raised at the international level at bodies such as the World Trade Organization (WTO), the United Nations Conference on Trade and Development (UNCTAD), and the World Economic Forum (WEF). Recent reports from some of these bodies highlight some serious shortcomings in the way that these digital issues are being shoehorned into new trade agreements without adequate transparency and consultation.

UNCTAD Information Economy Report 2017

Last week UNCTAD released the 2017 edition of its Information Economy Report (PDF). The annual publication acknowledges that a growing number of countries are adopting disincentives or barriers to the processing, storage, and transfer of data, which are driving the push to address these barriers through trade agreements. It suggests that harmonization and interoperability of national data protection regimes could help to find an appropriate balance between supporting processes that allow the transfer of data, on the one hand, and addressing concerns related to issues such as privacy and security on the other.

But more significantly, the report also criticizes the lack of dialogue between the trade and Internet communities on risks related to data privacy and security, and devotes an entire chapter to exploring the interfaces between trade and Internet governance policymaking, including the differences in the workings and expectations of the two communities. The chapter stresses the need to align trade processes with multi-stakeholder values such as transparency, openness, and inclusive peer-to-peer participation by any interested party on an equal footing. It recommends bottom-up agenda-setting and iterative, multi-stage consultation processes to address Internet-related issues, such as those being proposed for the digital trade chapter of NAFTA. EFF’s Open Digital Trade Network and our Brussels Declaration on Trade and the Internet, which call upon governments to make trade policymaking on Internet issues more transparent and accountable, are explicitly referenced in the report.

In contrast to the hard law-making approach that the U.S. Trade Representative is bringing to the NAFTA negotiations, the report draws upon a framework proposed by the World Economic Forum (WEF) to suggest how norms for e-commerce and the digital economy should be developed. The WEF paper outlines a three-pronged strategy:

  1. Rather than attempting to agree on binding trade rules, nations should come together to issue declarative statements of mutual interest, based on inputs from stakeholders at the national and global level. Currently, efforts are underway to formulate principles and best practices for cooperation across several multilateral mechanisms, including the G-7, G-20, and OECD. UNCTAD is put forward as a possible facilitator to help improve synergies amongst these efforts and serve as a platform for formulating soft law rules outside of the pressure of trade negotiations.
  2. Groups of experts from both the trade and Internet communities should be used to build consensus on key issues. Consensus building is seen as critical for reconciling the multi-stakeholder and traditional trade processes to create an approach that is open, transparent and inclusive at the levels of agenda setting, rules design and implementation, while preserving governments’ final decision making authority. One possible model for such expert groups would be based on the Global Commission on Internet Governance and the Global Commission on the Stability of Cyberspace (GCSC). In tandem with this “inner circle” of experts, a more broadly inclusive range of interested stakeholders could be engaged through online platforms and given opportunities to provide inputs into the work of the inner circle. The UNCTAD E-Commerce Week, a new UNCTAD Intergovernmental Group of Experts on E-Commerce and the Digital Economy, and the Internet Governance Forum (IGF) are recommended as potential avenues for engagement with this outer circle.
  3. There should also be long-term efforts to open up trade processes so they can benefit from being informed by a broader matrix of analysis and dialogue. Reforms allowing relevant stakeholders to track developments and contribute perspectives and experiences could increase their buy-in and support for trade policies being developed through trade processes. (EFF has recommended such a set of reforms for U.S. trade policymaking.) The report also recommends an assessment of whether trade processes or other mechanisms are more suitable for developing new rules on digital issues not covered under existing trade rules.

All of these recommendations are derived from existing practices, drawn from other sectors or from other trade negotiations on the digital economy. For example, the European Commission report on the Trade in Services Agreement (TISA) notes that consultations are an essential part of every trade negotiation, stating, “The interaction between international and local experts, NGOs, business, national government officials, EU officials and other stakeholders leads to a two-way exchange of information.”

Although it doesn’t go as far as we would like, the Commission’s approach to the TISA negotiations provides a stark contrast with that of the NAFTA negotiations. For example, there is a dedicated website that forms an essential part of the consultation process and serves as the main tool for sharing information and updates about the study with stakeholders. The website is used to share drafts of the inception and interim reports, as well as newsletters. It also includes information and documents from civil society dialogues, stakeholder surveys, and summary reports of the TISA negotiation rounds. A dedicated email address exists where stakeholders can send their questions or feedback on the reports.

Going forward, the Commission has also committed to releasing its negotiating mandates for trade agreements and establishing a new Advisory Group on EU trade agreements.

Future of Multilateral Negotiations at the WTO

The UNCTAD and WEF recommendations aren’t relevant only to regional and bilateral negotiations such as those over NAFTA and RCEP, or to plurilateral deals such as TISA, but also to the future of fully multilateral trade discussions at the WTO. In parallel with the release of the Information Economy Report and the first meeting of UNCTAD’s Intergovernmental Group of Experts, which took place on October 4-6, ministers of a select group of WTO member countries gathered in Morocco on 9-10 October to firm up the agenda for negotiations at the WTO’s upcoming Ministerial Conference in December in Buenos Aires, Argentina.

The WTO’s biennial summit is the biggest multilateral consultation among nations regarding global trade and investment norms, and the meeting in December will set the discussion and norms for sectoral issues for at least the next two years. Several countries, including the United States, Australia, Switzerland, and Norway, have submitted proposals to include global e-commerce rules. Other countries oppose these proposals on different grounds, which has resulted in a stalemate at the WTO, since WTO procedures mandate that any new resolution garner the unanimous support of member countries before being adopted.

The Morocco “mini-Ministerial” aims to provide political momentum on these long-standing issues and reconcile member nations at loggerheads with each other. The EU has tabled proposals in several areas including e-commerce, and recently called upon WTO partners to plan for substantive, ambitious outcomes. However, other member states are neither as optimistic nor as keen on expanding the multilateral trade order. In a series of tweets on Tuesday, the Indian Commerce and Industry Minister Suresh Prabhu said: “Trying hard to ensure multilateral WTO remain intact, despite huge pressures. Having several bilaterals with all important countries”. India’s view is that WTO members need to first deal with the issues already under negotiation before moving on to new ones, and it has urged nations to “avoid further widening and perpetuation of the imbalance between developed and developing countries.”

Around 300 civil society organisations from 150 nations have also raised concerns that e-commerce liberalisation seems to be gaining priority over traditional issues such as food security. Their letter (PDF), addressed to WTO members, describes a push for “a dangerous and inappropriate new agenda under the disguising rubric of ‘e-commerce’, while there is no consensus to introduce this new issue during or since the last WTO Ministerial conference.” The letter argues that the WTO is not the proper forum to negotiate e-commerce issues, which “have either already been discussed and resolved, or are currently being discussed, in other forums, most of which are more responsive and accountable to public interest concerns than the WTO.”

Despite our opposition to the secretive trade agreements of the past, EFF isn’t against free trade. We would like to see future trade agreements that work for Internet users and innovators, as well as for traditional trade stakeholders. But as UNCTAD and the WEF recognize, expectations around the levels of transparency and public consultation in trade negotiations have changed. This is especially so in relation to Internet-related rules, where prescriptions nominally about commerce and trade can affect citizens’ free speech and other fundamental individual rights. The future for trade negotiations, whether they are pursued at bilateral, plurilateral, or multilateral venues, lies in the adoption of more transparent, consultative practices. The WEF and UNCTAD recommendations are a welcome exploration of possible reforms to make trade negotiation practices fit for the 21st century.

Gov. Brown Vetoes Internet Access For Juvenile Halls and Foster Homes—For Now

California Gov. Jerry Brown today vetoed A.B. 811, a bill that would have required the government to provide youth in state care, whether in juvenile halls or foster homes, with reasonable access to computers and the Internet for educational purposes. In some cases, juveniles would also have been able to use computers to stay in touch with their families and for extracurricular and social activities.

The bill, authored by Assemblymember Mike Gipson, was supported by the Youth Law Center, EFF, and Facebook, and received no opposition when it landed on the governor’s desk. More than 250 supporters sent letters to the legislature and the governor asking for this bill to become law. 

The good news is that Brown took the concept to heart. In vetoing the bill [PDF], he left the door open for future legislation: 

While I agree with this bill’s intent, the inclusion of state facilities alone will cost upwards of $15 million for infrastructure upgrades. Also, the reasonable access standard in this bill is vague, and could lead to implementation questions on top of the potentially costly state mandate created by the legislation. 

I therefore urge the proponents to revisit the local aspects of this bill in the future, taking these concerns under advisement. In the meantime, I am directing the Department of Juvenile Justice to present a plan in the coming year to provide computer and Internet access as soon as is practicable, and that can be budgeted for accordingly.

EFF welcomes the governor’s commitment to bringing the Internet to state juvenile detention facilities through administrative action, and we are glad to see that he’s open to new legislation and budgeting for next year. However, we are disappointed that, in a year when he approved a $3.1 billion injection of funds into K-12 schools and community colleges, improving educational opportunities for these at-risk youth was not seen as an immediate statewide priority.

EFF is proud to have lent our technological and policy expertise to this campaign, and we thank the hundreds of Californians who stood up for the rights of youth. We hope you will join us again next year as we continue this campaign to bring Internet access to children in these challenging environments. 

Cyber Security in the Workplace Is Everyone’s Obligation

Cyber security is no longer just a technology challenge. It’s a challenge for everybody who uses and interacts with technology daily, and that means everyone in your organization.

The protection and security of employees’ work and personal lives are no longer separate. They have become intertwined through social networks, the Internet of Things, and unlimited connectivity. Because of this, cyber security is no longer just the responsibility of the company IT department. It is now the responsibility of every employee, not just to protect their work assets but their personal data as well.

Failure to do so puts your organization at risk.

Cyber attackers do not care about age, gender, race, culture, beliefs, or nationality. They attack based on opportunity or potential financial gain, irrespective of who the victim is, whether it’s an 8-year-old boy at home playing computer games on dad’s office laptop or an employee sitting in the office reading emails.

So why are so many organizations experiencing cyber breaches?

Cyber breaches occur because of three major factors:

  • The Human Factor
  • Identities and Credentials
  • Vulnerabilities

Today people are sharing a lot more information publicly, ultimately exposing themselves to more social engineering and targeted spear phishing attacks. The goal of these attacks is to compromise devices for financial fraud, or to steal identities in order to access the organizations that employees are entrusted with protecting. Once an attacker has stolen a personal identity, they can easily bypass an organization’s traditional security perimeter undetected, and if that identity has access to privileged accounts, the attacker can carry out malicious attacks in the name of that identity.

Employees power up devices daily and connect to the internet to access online services so they can get the latest news, shop for the best deals, chat and connect with friends, stream music and videos, get health advice, share their thoughts, and access their financial information.  As they use these online services they can quickly become a target of cyber criminals and hackers.  So, it’s critically important that everyone in your organization learns how cyber criminals target their victims, how to reduce their risk, and how to make it a lot more challenging for attackers to steal their information, identity or money.

When using services like social media, people often inadvertently share personally identifiable information—both physical and digital—like their full name, home address, telephone numbers, IP address, biometric details, location details, date of birth, birthplace, and even family members’ names. The more information they make available online, the easier it is for a cyber criminal to successfully use that personal information to target them.

Consider this fact: cyber criminals and hackers spend up to 90% of their time performing reconnaissance on their targets before acting, meaning that they typically have a complete blueprint of their target before they strike.

With the increase in our digital activities, hackers and cyber criminals have changed the techniques they use to target people, with email being the number one weapon of choice, followed by infected websites, social media scams, and the theft of digital identities and passwords. Reports and statistics in recent years have shown that more than 80% of data breaches have involved an employee as a victim—hackers say it is the fastest way through a company’s security controls.

This means that people—including your own employees—are on the front line of cyber security attacks. Threats can start from something as simple as a personal social footprint, and end with individuals being used as mules to gain access to your organization’s finances and sensitive information.

The time has come to create a balance between technology and people. We must increase our cyber security awareness to help us protect and secure both our personal assets and our company assets.  The time for a people-centric cyber security approach is now—which means that cyber security is everyone’s responsibility.

About the author: Joe Carson is a cyber-security professional with more than 20 years’ experience in enterprise security & infrastructure. Currently, Carson is the Chief Security Scientist at Thycotic. He is an active member of the cyber security community and a Certified Information Systems Security Professional (CISSP).


Courtroom “Feud” Leaves Accurate Speech About Celebrities Unprotected

The first season of FX’s drama Feud told the story of the rivalry between Bette Davis and Joan Crawford. Set in Hollywood during the early sixties, the drama portrays numerous real-life figures from the era. Catherine Zeta-Jones appeared as Olivia de Havilland. Unfortunately, de Havilland did not enjoy the show. She sued FX asserting a number of claims, including defamation, false light, and violation of her right of publicity.

The right of publicity is a cause of action for commercial use of a person’s identity. It makes good sense when applied to prevent companies from, say, falsely claiming that a celebrity endorsed their product. But when it is asserted against creative expression, such as a TV show, it can burden First Amendment rights. Celebrities have brought right of publicity cases against a wide range of creative work, ranging from movies, rap lyrics, and magazine features to computer games.

EFF has been very critical of recent right of publicity jurisprudence. Beginning with a California Supreme Court decision called Comedy III Productions v. Gary Saderup, many courts have tied First Amendment protection in right of publicity cases to whether the work somehow “transforms” the identity or likeness of the celebrity. But this rule (called the transformative use test) is a bad fit. The test is borrowed from copyright’s fair use standard. But, unlike copyright cases, right of publicity cases do not involve comparing two creative works. Instead, the accused work is compared against the actual celebrity: i.e. real life. Plenty of valuable speech, such as biographies or documentaries, involves depicting real people as accurately as possible. Why should these works be unprotected by the First Amendment?

The transformative use test has never made any sense in the right of publicity context. Fortunately for the TV and movie industry, courts have usually found the depiction of famous people in biopics like Feud to be protected. In our view, these decisions are correct in their result but only because they apply the transformative use test unfaithfully. Courts tend to apply that test more strictly when considering disfavored media such as computer games and comic books. But we have warned that the transformative use test threatens all creative expression about real people, including in film and TV.

Now we know we were right to worry. In de Havilland’s suit, FX filed an anti-SLAPP motion seeking to have the Gone with the Wind actress’s case thrown out. The cable network argued that its show was protected by the First Amendment. Evaluating the right of publicity claim, Judge Kendig of LA Superior Court wrote [PDF] that since the defendants “admit that they wanted to make the appearance of [de Havilland] as real as possible, there is nothing transformative about the docudrama.” Thus, she concluded that FX would likely lose under the transformative use test. With this ruling, we have reached the bottom of the slippery slope: accurate speech about real people is not protected by the First Amendment. 

Paradoxically, the court also held that de Havilland had a substantial likelihood of success on her defamation and false light claims. In our view, this part of the decision also failed to apply necessary First Amendment protections. Generally, dramatic recreations of real events are given strong free speech protection and the First Amendment requires that the plaintiff show that the speaker acted with “actual malice.” As a court faced with similar claims reasoned: “minor fictionalization cannot be considered evidence or support for the requirement of actual malice.” In de Havilland’s case, the ruling effectively punishes FX both for portraying the actress too accurately and not accurately enough.

If upheld, the Superior Court’s decision could make it much harder to make creative works based on real people. We hope the ruling gets reversed on appeal. Ultimately, the Supreme Court may need to step in and clean up the mess that state and lower federal courts have made of right of publicity law. The Supreme Court has not heard a right of publicity case for more than 40 years and guidance in this area is long overdue.

Deputy Attorney General Rosenstein’s “Responsible Encryption” Demand is Bad and He Should Feel Bad

Deputy Attorney General Rod Rosenstein delivered a speech on Tuesday about what he calls “responsible encryption.” It misses the mark, by far.

Rosenstein starts with a fallacy, attempting to convince you that encryption is unprecedented:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection, especially when officers obtain a court-authorized warrant.  But that is the world that technology companies are creating.

In fact, we’ve always had (and will always have) a perfectly reliable system whereby criminals can hide their communications with strong security: in-person conversations. Moreover, Rosenstein’s history lesson forgets that, for about 70 years, there was an unpickable lock. In the 1770s, engineer Joseph Bramah created a lock that remained unpickable until 1851. With that lock installed on a safe, the owner could ensure that no one could get inside, or at least not without destroying the contents in the process.

Billions of instant messages are sent and received each day using mainstream apps employing default end-to-end encryption.  The app creators do something that the law does not allow telephone carriers to do:  they exempt themselves from complying with court orders.

Here, Rosenstein ignores the fact that Congress exempted those app creators (“electronic messaging services”) from the Communications Assistance for Law Enforcement Act (CALEA). Moreover, CALEA does not require telephone carriers to decrypt communications where users hold the keys. Instead, Section 1002(b)(3) of CALEA provides:

(3) Encryption. A telecommunications carrier shall not be responsible for decrypting, or ensuring the government’s ability to decrypt, any communication encrypted by a subscriber or customer, unless the encryption was provided by the carrier and the carrier possesses the information necessary to decrypt the communication.

By definition, when the customer sends end-to-end encrypted messages—in any kind of reasonably secure implementation—the carrier does not (and should not) possess the information necessary to decrypt them.
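To make the distinction concrete, here is a minimal sketch of an end-to-end design, written in Python with the PyNaCl library. The library choice, names, and message are ours, for illustration only; real messaging apps layer key agreement, ratcheting, and authentication on top of this core idea. The structural point is that the carrier only ever handles ciphertext:

```python
# A minimal sketch of end-to-end encryption using PyNaCl (illustrative only).
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob with her private key and his public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The carrier relays only `ciphertext`. It holds no key material, so there
# is nothing it could decrypt or hand over in response to a court order.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```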

With his faulty premises in place, Rosenstein makes his pitch, coining yet another glib phrase to describe a backdoor.

Responsible encryption is achievable. Responsible encryption can involve effective, secure encryption that allows access only with judicial authorization.  Such encryption already exists.  Examples include the central management of security keys and operating system updates; the scanning of content, like your e-mails, for advertising purposes; the simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop.

As an initial matter, “the scanning of content, like your e-mails, for advertising purposes” is not an example of encryption, “responsible” or otherwise. Rosenstein’s other examples simply describe systems where the government or another third party holds the keys. This is known as “key escrow,” and, as is well explained in the Keys Under Doormats paper, the security and policy problems with key escrow are not only unsolved, but unsolvable.
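In code terms, what these examples amount to is a scheme like the following hypothetical sketch (our construction, again using PyNaCl; no specific proposal works exactly this way), in which every message key is wrapped a second time, for an escrow agent’s public key:

```python
# Hypothetical key escrow sketch (PyNaCl). Illustrates the structure, not
# any real proposal: each message key is also wrapped for an escrow agent.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

escrow_agent = PrivateKey.generate()  # held by the government or a third party

def send_escrowed(recipient_public_key, plaintext):
    # A fresh symmetric key encrypts the message...
    message_key = random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(message_key).encrypt(plaintext)
    # ...and a copy of that key is wrapped for BOTH the recipient
    # and the escrow agent.
    for_recipient = SealedBox(recipient_public_key).encrypt(message_key)
    for_escrow = SealedBox(escrow_agent.public_key).encrypt(message_key)
    return ciphertext, for_recipient, for_escrow

# The escrow agent's one private key now unwraps the message key for every
# message from every user. Whoever compromises it -- an insider, a foreign
# intelligence service, a criminal -- inherits that same universal access.
```

The escrow key becomes exactly the kind of concentrated, must-never-leak secret that the Keys Under Doormats authors warn cannot be reliably protected at scale.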

Perhaps sensitive to the criticisms of the government’s relentless attempts to rename backdoors, Rosenstein claims: “No one calls any of those functions a ‘back door.’ In fact, those capabilities are marketed and sought out by many users.” In fact, critics of backdoors have fairly consistently called key escrow solutions “backdoors.” And any reasonable reader would call Google’s ability to access your email a backdoor, especially when that backdoor is used by unauthorized parties such as Chinese hackers.

Such a proposal would not require every company to implement the same type of solution.  The government need not require the use of a particular chip or algorithm, or require any particular key management technique or escrow.  The law need not mandate any particular means in order to achieve the crucial end: when a court issues a search warrant or wiretap order to collect evidence of crime, the provider should be able to help.

This is the new DOJ dodge. In the past, whenever the government tried to specify “secure” backdoored encryption solutions, researchers found security holes; the Clipper Chip, for example, was famously broken quickly and thoroughly.

So now, the government refuses to propose any specific technical solution, choosing to skate around the issue by simply asking technologists to “nerd harder” until the magical dream of secure golden keys is achieved.

Rosenstein attempts to soften his demand with an example of a company holding private keys.

A major hardware provider, for example, reportedly maintains private keys that it can use to sign software updates for each of its devices.  That would present a huge potential security problem, if those keys were to leak.  But they do not leak, because the company knows how to protect what is important.

This is a fallacy for several reasons. First, perfect security is an unsolved problem: no one, not even the NSA, knows how to protect information with zero chance of leaks. Second, the challenge of protecting a signing key that is used only occasionally, to sign software updates, is much smaller than the challenge of protecting a system that needs push-button access to decryption keys for the communications of millions of users around the globe.
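The difference is easy to see in code. In the sketch below (ours, using PyNaCl’s signing API purely for illustration), the private signing key can sit on an offline machine and be touched only when an update ships, while devices in the field hold nothing but the public verify key:

```python
# Sketch of software update signing with PyNaCl (illustrative only).
from nacl.signing import SigningKey

# The signing key can live on an offline, air-gapped machine and be used
# rarely -- only when a new update is released.
signing_key = SigningKey.generate()
signed_update = signing_key.sign(b"contents of firmware-v2.bin")

# Devices ship with only the PUBLIC verify key. Compromising a device
# reveals nothing that lets an attacker forge update signatures.
verify_key = signing_key.verify_key
verify_key.verify(signed_update)  # raises BadSignatureError if tampered with

# An escrowed decryption key, by contrast, must be reachable by an online
# system on demand, for every user's communications, at any moment -- a far
# larger attack surface than one rarely-touched offline key.
```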

Rosenstein then attempts to raise the stakes to near apocalyptic levels:

If companies are permitted to create law-free zones for their customers, citizens should understand the consequences.  When police cannot access evidence, crime cannot be solved.  Criminals cannot be stopped and punished.

This is a bit much.  For a long time, people have had communications that were not constantly available for later government access. For example, when pay phones were ubiquitous, criminals used them anonymously, without a recording of every call. Yet, crime solving did not stop. In any case, law enforcement has been entirely unable to provide solid examples of encryption foiling even a handful of actual criminal prosecutions.

Finally, in his conclusion, Rosenstein misstates the law and misunderstands the Constitution.

Allow me to conclude with this thought: There is no constitutional right to sell warrant-proof encryption.  If our society chooses to let businesses sell technologies that shield evidence even from court orders, it should be a fully-informed decision.

This is simply incorrect. Code is speech, and courts have recognized a Constitutional right to distribute encryption code. As the Ninth Circuit Court of Appeals noted:

The availability and use of secure encryption may … reclaim some portion of the privacy we have lost. Gov’t efforts to control encryption thus may well implicate not only the First Amendment rights … but also the constitutional rights of each of us as potential recipients of encryption’s bounty.

Here, Rosenstein focuses on a “right to sell,” so perhaps the DOJ means to distinguish “selling” under the commercial speech doctrine, and argue that First Amendment protections are therefore lower. That would be quite a stretch, as commercial speech is generally understood as speech proposing a commercial transaction. Newspapers, for example, do not face weaker First Amendment protections simply because they sell their newspapers.

The Department of Justice has said that they want to have an “adult conversation” about encryption. This is not it. The DOJ needs to understand that secure end-to-end encryption is a responsible security measure that helps protect people.

With Facebook, Twitter in the Crosshairs of Investigators Probing Russian Interference, Let’s Consider The Risks of Applying Election Ad Rules to the Online World

Social media platforms are avenues for typical Americans—those without enough money to purchase expensive television or radio ads—to make their voices part of the national political dialogue. But with news that a Russian company with ties to the Kremlin maintained hundreds of Twitter accounts and purchased $100,000 worth of Facebook ads aimed at influencing American voters—and specifically targeting voters in swing states like Wisconsin and Michigan—these same social media companies are now at the center of a widening government investigation into Russian interference in the 2016 election.

This controversy has also sparked renewed calls for more government regulation of political ads on social media and other online platforms—including creating new rules for Internet ads that would mirror those the FEC and FCC currently apply to political ads on TV, cable, and radio. In the past, policymakers proposed essentially extending the broadcast rules to the Internet without adequately and thoughtfully considering the differences between the broadcast and online worlds. As a result, we argued for limiting the burden that campaign finance regulations place on online speakers in both 2006 and 2014.

We can’t emphasize enough what’s at stake here. Social media and digital communications have an enormous role in elections. On the whole, this is a good thing, because it creates many new avenues for Americans to communicate, share, participate, debate, and organize. Online speech rules must maintain our ability to speak out—anonymously if we choose—about candidates, elections, and issues. At the same time, American elections should be decided by Americans and not subject to foreign influence. The rules that surround our elections should be carefully crafted to protect American voters, and not just at the moment of voting. Our right to participate and voice our opinions must not be compromised on the way to preventing foreign intervention in our elections.

The Problems With Proposals to Blindly Apply Offline Regulation to Online Speech

We’re still in the early stages of this latest round of policymaking. Before moving forward, regulators, lawmakers, and the public need to consider the risks of applying election rules designed for broadcast to the Internet. EFF will evaluate any proposals against the following basic and long-standing principles: enforce existing laws first; make sure that any applicable laws are tailored to the differences in size and resources of various speakers and platforms; and protect Americans’ right to participate in the public debate, including anonymously.

1.  It is already illegal for agents of foreign governments to buy electioneering ads.  Stronger and speedier enforcement of existing laws is a better strategy than more regulation.

Stronger enforcement of existing laws, backed by more funding and support for swift and powerful FEC enforcement efforts—including Department of Justice follow-up with criminal charges for serious offenses—is the best first step for any Congressional action.

The core concern driving the current proposals is election interference by foreign governments. Our existing election laws already prohibit foreign governments like Russia, or their agents, from purchasing campaign ads—online or offline—that directly advocate for or against a specific candidate. In addition, for 60 days prior to an election, foreign agents cannot even purchase ads that mention a candidate. Finally, the Foreign Agents Registration Act requires informational materials distributed by a foreign entity to contain a statement of attribution, and copies must be filed with the U.S. Attorney General.

Our election commissions, and possibly law enforcement, should already be looking deeply into potential violations of these laws during the 2016 election. Facebook and the other platforms should be cooperating with those investigations, as the law requires.

Additionally, political campaigns are required to report their spending, yet that reporting often trails the actual purchases of ads for so long that the information is not helpful during the heat of the election. The penalties are also notoriously weak and slow, making too many of them merely an afterthought for campaigns.

We plainly need stronger enforcement of these laws, including both catching violations in time to block their influence and ensuring real consequences for those involved. It’s pretty clear that Russia doesn’t care that its interference with our elections is illegal, so it won’t care if it’s doubly illegal, but many of the agents who purchased these ads may be subject to U.S. enforcement actions. We may also need to consider how to make campaigns’ ad purchase information more readily visible to voters.

2. On the Internet, one size does not fit all.

Don’t apply rules designed for large entities to smaller ones. Don’t apply rules based on high-cost advertising to low-cost advertising.

One big difference between online and broadcast media: broadcast media is largely owned by relatively few big companies. And while Facebook and Google are certainly large companies, and there are several large ad placement firms online, the Internet is full of smaller platforms, websites, and blogs that have a range of sponsorship models. Applying FEC campaign finance rules designed for large companies to small platforms and individuals doesn’t make sense and can perversely further entrench the power of those large companies.

The FEC’s campaign finance rules applicable to TV and radio advertising are based on the fact that these broadcast media utilize the scarce public spectrum. The rules are long and complex, and complying with them requires significant paperwork, accounting structures, lawyers, and other resources. These rules make sense when applied to a handful of giant media companies that are already heavily regulated, that now control the vast majority of TV and radio in the United States, and that have the resources needed to handle onerous tracking and reporting requirements.[1]

In addition, TV and radio ads with big reach are prohibitively expensive for small purchasers. Those with less than $500,000 to spend will probably find themselves shut out of all but the most obscure corners of TV, and those with less than $100,000 will have a hard time purchasing national or even regional radio ads. This also helps keep the universe of those who have to comply with the rules quite small, and makes it less troubling that complying with the rules requires accounting structures, lawyers, and other significant resources.

Of course the Internet also has gigantic corporate players—Google, Facebook, and even mid-sized companies like Twitter and Reddit that can likely handle the burden of reporting on major advertising purchases, whether singly or aggregated. Some of the big advertising networks may be able to do so, too.

But the revolutionary thing about the Internet is that you don’t need millions of dollars to make your voice heard. The Internet has millions of small websites, podcasts, blogs, and other outlets where people can and do discuss elections and politics. Ordinary individuals without ready access to big cash can purchase Internet ads, and for little or no cost they can create YouTube videos and post banners on their personal websites to express support for particular candidates, parties, or issues. In this way, the Internet is less like radio and TV and more like print publications or even handbills—where there are many publications and the cost of ads is lower.

It is critical that any effort to consider applying the FEC’s offline rules to the online world differentiate between big platforms, like Facebook and Google, and smaller ones, and between platforms and their users. A podcaster doesn’t have the resources of Apple even if her podcast is available from iTunes, and a Twitter personality doesn’t necessarily have the resources of Twitter even if she has hundreds of thousands of followers.

The risk in not understanding the Internet landscape is serious. Extending the TV and radio election rules to small speakers and free and low-cost Internet speech will discourage these smaller entities from allowing or engaging in political expression at all.  If regulation is not done carefully, it could undermine one of the great gifts of the Internet—allowing those without great financial resources to make their voices heard about the candidates and issues they’re passionate about. This could also end up entrenching both large Internet companies and large broadcasters (most of whom operate online, too) even more than they already are. 

3.  Anonymous speech is critical for democracy.

Regulations that infringe on anonymous speech will do more harm than good.

Congress and regulators also need to consider the risks to privacy and anonymity of wholesale application of TV and radio disclosure rules to Internet ads and online speech. Anonymity is critical for democracy, as it’s a tool for those in the minority to safely voice their dissent. Speaking out on issues of public concern can be dangerous, and throughout our country’s history, speakers in favor of issues as wide-ranging as civil rights, reproductive rights, and religious freedom have all relied on their First Amendment right to speak anonymously in order to safely make their voices heard. Anonymous speech even played a key role in the founding of the United States: the authors of the Federalist Papers hid their identities for fear of retaliation. And courts have recognized that anonymous speech on the Internet “facilitates the rich, diverse, and far ranging exchange of ideas” and “can foster open communication and robust debate.”

Unfortunately, many initial suggestions about how to respond to and prevent Russian interference start from the premise that all speakers online must be positively identifiable. Some at the FEC argue that the identity of all who publish endorsements, even free ones, on a website must be made public. Such proposals would place an onerous burden on the small Internet publishers and platforms discussed above.

Not only are proposals that unduly burden anonymous political speech unconstitutional, but they simply don’t work. Study after study has debunked the idea that forcibly identifying speakers is an effective strategy against those who spread bad information online. And Facebook has had a real name policy, requiring all users to use their real name, for years. It didn’t stop Russia. But it does hurt innocent people—including drag queens, LGBTQ people, Native Americans, survivors of domestic and sexual violence, political dissidents, sex workers, therapists, and doctors.

Any rules aimed at protecting our election from foreign intervention should not infringe on the rights of Americans to engage in public debate without being forced to identify themselves or disclose other sensitive personal information.

What Can Be Done?

While blindly extending offline rules to the online environment is a dangerous course, there is much that can be done, especially by Internet companies.  Here are a few ideas:

1.  Companies should do more to address malicious bots and other tools of channel flooding.

One of the techniques that we’re hearing a lot about is the use of bots, fake online accounts programmed to post messages automatically and mimic a real person’s identity. Not all bots are bad, but they and similar tools can be used to manipulate public opinion by creating a sense of a mass movement where there is none. It’s suspected that Russia used this tactic to disrupt the 2016 election.

The malicious use of bots is a problem that platforms can and should do more to address. Companies like Facebook already spend resources tracking bots involved in spamming and other kinds of detrimental behavior. While the problems are not identical, they share enough traits that the companies should be able to make progress, through some concentrated effort, on locating and shutting down these strategies. Like many things, this will always be a cat and mouse game, but the companies can certainly better prioritize rooting out malicious political bots, especially as an election approaches.

2.  Users deserve to know why they are being served certain ads. 

Another important way companies can step up right now is by being transparent about how they decide which ads to serve which users. Partly in response to a crowd-sourced campaign led by ProPublica, Facebook is taking some first steps in this direction. But the company has a long way to go. Facebook and other companies need to provide their users with real information—not vague policies or protocols—about why we are seeing the ads we are seeing. For election-related and political ads, arguing that ad placement processes and the tracking that underlies them are “proprietary” just doesn’t cut it anymore. If Facebook, Google, and other companies want us to trust them to protect the democratic process going forward, they need to be transparent about how they are choosing the information they are feeding us.[2]

3.  Facebook and other companies need to allow independent auditing.

To truly get to the bottom of things, we also need analysis by independent researchers—with no bottom line or corporate interest. Facebook, Google, and others should let truly independent researchers work with and audit their data. Right now, only Facebook has access to the data that can reveal exactly how Russian and other agents used the platform to spread divisive news, hoaxes, and misinformation—including how much of it there was, who created and read it, and how much influence it may have had. Facebook exercises complete control over independent researchers’ access to this data. There are of course serious privacy considerations that must be dealt with, but leaving the analysis to these locked-up platforms is what allowed Mark Zuckerberg to deny there was even a problem for ten whole months after the election. This is plainly not sufficient.

Going Forward

EFF will be evaluating proposals based on the principles and concerns noted above, and we’ll be keeping a close eye on the companies too.

 

  • 1. Roughly NBC/Universal, Comcast, NewsCorp, CBS, Disney, Time Warner, Cox, Clear Channel, Tribune Company, Gannett, Sinclair, and Washington Post. See e.g. https://www.freepress.net/ownership/chart, http://www.businessinsider.com/these-6-corporations-control-90-of-the-media-in-america-2012-6.
  • 2. Facebook could even go a step further by doing two things: first, give users more power to avoid being tracked in the first place; and second, open up the feed – let users have an API so that they can create and curate their own News Feeds, making it harder for foreign agents to simply buy ads on one or two platforms and thereby reach millions of Americans. We know these changes are not likely to be adopted by Facebook, but it’s important to keep in mind that the “total surveillance” business model of Facebook and its continued embrace of a closed ecosystem both contribute to the difficulty of solving these problems.

Hey Alexa – Show Me Whitelisted Malware

Noise is a huge concern for the security operations center (SOC). Security teams are struggling to deal with the daily barrage of noise coming from a myriad of security tools. As the volume gets louder, teams are increasingly seeking shortcuts and ways to automate certain processes in order to save precious time and cut down the noise.

One such popular shortcut among security analysts is to automate populating a whitelist by pulling from existing lists that the team deems to be safe. Curating a whitelist can be extremely time-consuming, and may seem like a distraction when other investigations are piling up on analysts’ plates. However, we’ve found that using existing lists for whitelisting could mean opening up your organization to vulnerabilities.

The team at Awake Security recently took a closer look at one seemingly benign list – the Alexa Top 1 Million list of domains – to assess whether it would be safe to use for whitelisting. While the Alexa list isn’t intended as a whitelist, many security teams see it as a logical starting point. It makes sense that the most visited sites on the web would be nonthreatening, and could automatically be considered safe during an investigation.

In our investigation, however, we found potentially malicious domains ranking as high as #447. Just under Glassdoor, only five spots away from Dell, and even more popular than BoredPanda.com was a suspicious domain: piz7ohhujogi[.]com. At first glance, this domain looks suspicious because it appears to be randomly generated nonsense, much like the domains produced by the domain generation algorithms (DGAs) that some malware families use. On closer examination, courtesy of a quick Google search, we found pages of results featuring advice on removing the domain from your redirects, with many sites referring to it as a pop-up or redirect virus.
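Simple lexical heuristics are often enough to surface labels like this one for human review. The Python sketch below is our own rough illustration (the thresholds are guesses, not tuned production values): it flags long labels with high character entropy or a mix of letters and digits:

```python
# Rough heuristic for flagging random-looking (DGA-style) domain labels.
# Thresholds are illustrative guesses, not tuned production values.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_generated(domain: str) -> bool:
    label = domain.split(".")[0]  # e.g. "piz7ohhujogi"
    mixed = any(c.isdigit() for c in label) and any(c.isalpha() for c in label)
    return len(label) >= 10 and (shannon_entropy(label) > 3.0 or mixed)

print(looks_generated("piz7ohhujogi.com"))  # True: long, high entropy, digits mixed in
print(looks_generated("glassdoor.com"))     # False: short, dictionary-like
```

A hit from a heuristic like this is a prompt for investigation, not proof of maliciousness; plenty of legitimate CDN and cloud hostnames look random too.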

We monitored the list for over a week and saw this suspicious domain continue to creep up the list, reaching as high as #432. Since then, it has gradually fallen in rank, but it remains one of the top domains on the Alexa list.

Learning that this site had made it into the Alexa Top 1M raised the question: what other suspicious domains may have snuck their way in? To find the answer, we compared the Alexa Top 1M with six different malware blacklists – Maltrail, ZeusTracker, MalwareDomains.com, Malware Domain List, Malware Bytes, and Cybercrime.
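Mechanically, this comparison is just set intersection. Here is a minimal sketch of the kind of script we are describing; the file names and formats are assumptions for illustration (the Alexa CSV as rank,domain lines, each blacklist as one domain per line):

```python
# Sketch: intersect the Alexa Top 1M with malware blacklists.
# File names and formats are assumptions for illustration.
import csv

def load_alexa(path="top-1m.csv"):
    # Alexa's CSV format: "rank,domain" on each line.
    with open(path, newline="") as f:
        return {domain.strip().lower() for _rank, domain in csv.reader(f)}

def load_blacklist(path):
    # One domain per line; skip blank lines and comments.
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

alexa = load_alexa()
for name in ["maltrail.txt", "zeustracker.txt", "malwarebytes.txt"]:
    overlap = alexa & load_blacklist(name)
    print(f"{name}: {len(overlap)} domains also appear in the Alexa Top 1M")
```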

The Malware Bytes list had the most domains in common with the Alexa Top 1M (1,308), but the domains it included were not all inherently malicious. The first, for example, qq.com, is a popular Chinese social website that offers a messaging app. The second was a Chinese news site. However, depending on your organization’s acceptable use policy, these sites and others on the list may still have no place on your whitelist, for example if you don’t condone pirating software (thepiratebay[.]org, utorrent[.]com) or viewing pornography (cam4[.]com).

These are just a few of the examples we unearthed. In the end, it’s important to remember that lists like the Alexa Top 1M are not intended for whitelisting. As tempting as it can be to harness existing lists in order to cut down on noise, there is a danger in putting implicit trust in external sources.

To borrow a phrase from the Alexa website – “Information is power – if you have the right tools.” Those using popular lists for whitelisting should take another look at their tools and their approach to ensure security for their organizations.

About the author: Troy Kent is a Threat Researcher at Awake Security. He has spent his career in SOCs as an analyst at multiple tiers and as an investigator: working ticket queues, hunting for security incidents, rapidly prototyping new ideas into existence, working terrible hours, and questioning career decisions.


58 Human Rights and Civil Liberties Organizations Demand an End to the Backdoor Search Loophole

EFF and 57 organizations, including the American Civil Liberties Union, R Street, and the NAACP, spoke out against warrantless searches of American citizens in a joint letter this week demanding reform of the so-called “backdoor search” loophole that exists for data collected under Section 702.

The backdoor search loophole allows federal government agencies, including the FBI and CIA, to search through data collected on American citizens without a warrant.


The data is first collected by the intelligence community under a section of law called Section 702 of the FISA Amendments Act of 2008, which provides rules for sweeping up communications of foreign individuals outside the United States. However, the U.S. government also uses 702 to collect the communications of countless American citizens and store them in a database accessible by several agencies.

EFF and many others believe this type of mass collection alone is unconstitutional. The backdoor search loophole infringes American rights further—allowing agencies to warrantlessly search through 702-collected data by using search terms that describe U.S. persons. These terms could include names, email addresses, and more.   

This practice needs to end. And a proposal before Congress to require warrants on backdoor searches used only in criminal investigations—as recently reported by the New York Times—does not go far enough.

As EFF, and several other organizations, said in an Oct. 3 letter:

“Applying a warrant requirement only to searches of Section 702 data involving ‘criminal suspects,’ is not an adequate solution to this problem. Most fundamentally, it ignores the fact that the Fourth Amendment’s warrant requirement is not limited to criminal or non-national security related cases.”

Further, carving out a warrant requirement solely for criminal investigations ignores the broader umbrella under which the FBI conducts many searches—that of “foreign intelligence.” Because the FBI conducts investigations with both criminal and foreign intelligence elements, the agency could predictably bypass a backdoor search warrant requirement by ascribing its searches to foreign intelligence matters rather than criminal ones.

Warrantless searches of American communications may especially impact communities that speak frequently with family outside of the United States and that have historically faced unjust surveillance. As we wrote: “Existing policies make it far too easy for the government to engage in searches that disproportionately target Muslim Americans and immigrants with overseas connections based merely on the assertion of a nebulous ‘foreign intelligence’ purpose.”

These searches are happening. In 2016, the CIA and NSA reported they conducted 30,000 searches for information about U.S. persons. That number does not include metadata searches by the CIA, a related problem that can also be fixed by Congress before Section 702 sunsets in December.

Backdoor searches of 702-collected data about U.S. citizens and residents should require a warrant based on probable cause. Congress can protect the rights of countless Americans by closing this loophole.

Read the full letter. 

TAKE ACTION

SPEAK OUT AGAINST WARRANTLESS NSA SURVEILLANCE TODAY
