Argentinian Government Bans Civil Society Organizations From Attending Upcoming WTO Ministerial Meeting

The World Trade Organization (WTO), the multilateral global trade body that has almost all countries as members, has been eyeing an expansion of its work on digital trade for some time. Its current inability to address such issues is becoming an existential problem for the organization, as its relevance is challenged by the rise of smaller regional trade agreements such as the Trans-Pacific Partnership (TPP), North American Free Trade Agreement (NAFTA), and Regional Comprehensive Economic Partnership (RCEP) that do contain digital trade rules.

That’s one reason why some experts are now arguing that the WTO ought to retake leadership over digital trade rulemaking. Their reasoning is that a global compact could be more effective than a regional one at combatting digital protectionism, such as laws that restrict Internet data flows or require platforms to install local servers in each country where they offer service.

Civil Society Barred from WTO Ministerial Meeting

It’s true that some countries do have protectionist rules that affect Internet freedom, and that global agreements could help address these rules. But the problem with casting your lot with the WTO is that, as closed and opaque as deals like the TPP, NAFTA, and RCEP are, the WTO is in most respects no better. That was underscored last week, when in a surprise move the Argentinian government blocked representatives from civil society organizations (CSOs) from attending the upcoming WTO biennial Ministerial Meeting of 164 member states, scheduled for 10-13 December in Buenos Aires.

Last week the WTO reached out to more than 64 representatives from CSOs, including digital rights organizations Access Now and Derechos Digitales, to inform them that “for unspecified reasons, the Argentine security authorities have decided to deny your accreditation.” The Argentine government later issued a press release claiming that activists had been banned because “they had made explicit calls to manifestations of violence through social networks”—a remarkable claim for which no evidence was presented, and which the groups in question have challenged.

Most of the banned organizations belong to the Our World Is Not For Sale network (OWINFS), a global social-justice network that has been engaging in WTO activities, including organizing panels and sessions, for over two decades. In a strongly-worded letter, Deborah James, OWINFS Network Coordinator, condemned Argentina’s actions and noted that the lack of explanation behind the decision “attacked the conference’s integrity” and violated “a key principle of international diplomacy”.

Even before these delegates were barred from the meeting, their ability to participate in the WTO Ministerial was tightly constrained. Unlike other international negotiation bodies such as WIPO, the WTO does not permit non-state actors to attend meetings even as observers, nor to obtain copies of documents under negotiation. Their admission into the meeting venue would only authorize them to meet with delegates in corridors and private side-meetings, and Argentina’s action has taken away even that. Instead, public interest groups will essentially be limited to meeting and protesting outside the Ministerial venue, out of sight and out of mind of the WTO delegates inside.

Multilateral vs. Multistakeholder Approaches to Digital Trade

Thus the problem with the suggestion that the WTO should take on the negotiation of new Internet-related issues is that any such expansion of the WTO’s mandate would require an overhaul of its existing standards and procedures for negotiations. International trade negotiations are government-led and allow for very limited public oversight or participation in the process. By contrast, the gold standard for Internet-related policy development is for a global community of experts and practitioners to participate in an open, multistakeholder setting.

Transparent consultative practices are critical in developing rules on complex digital issues, as prescriptions nominally about commerce and trade can affect citizens’ free speech and other fundamental individual rights. In this respect and others, digital issues are different from conventional trade issues such as quotas and tariffs, and it is important to involve users in discussion of such issues from the outset. Through documents such as our Brussels Declaration on Trade and the Internet, EFF has been calling upon governments to make trade policy making on Internet issues more transparent and accountable, whether it is conducted at a multilateral or a smaller plurilateral level.

The WTO’s lack of any institutional mechanisms to gather inputs from the public and its inability to assure participation for CSOs is a big blow to the WTO’s credibility as a leader on global digital trade policy. Argentina’s unprecedented ban on CSOs is especially worrying, as e-commerce is expected to be a key topic of discussion at the Ministerial.

E-commerce Agenda Up In The Air

Last week, WTO director general Roberto Azevedo announced that he will be appointing “minister facilitators” to work with sectoral chairs and identified e-commerce as an area for special focus. That doesn’t mean that it’s an entirely new issue for the WTO. E-commerce (now sometimes also called “digital trade”) entered the WTO in 1998, when member countries agreed not to impose customs duties on electronic transmissions, and the moratorium has been extended periodically, though no new substantive issues have been taken on.

This is changing. Since last year, developed and developing countries have been locked in a battle over whether the WTO’s digital trade work program should expand to include new digital trade issues such as cross-border data flows and localization, technology transfer, disclosure of source code of imported products, consumer protection, and platform safe harbors.

This push has come most strongly from developed countries including the United States, Japan, Canada, Australia, and Norway. During an informal meeting at the WTO in October, the EU, Canada, Australia, Chile, Korea, Norway and Paraguay, among other countries, circulated a restricted draft ministerial decision to establish “a working party” at the upcoming WTO ministerial meeting in Buenos Aires and authorize it to “conduct preparations for and carry out negotiations on trade-related aspects of electronic commerce on the basis of proposal by Members”.

Amongst these is a May 2017 proposal presented by the European Union, in which the co-sponsors mapped out possible digital trade policy issues to be covered, including rules on spam, electronic contracts, and electronic signatures. The co-sponsors noted that the list they provided was not exhaustive, and they invited members to give their views on what additional elements should be added.

But many developing nations have opposed the introduction of new issues, instead favoring the conclusion of pending issues from the Doha Round of WTO negotiations, which are on more traditional trade topics such as agriculture. In particular, India this week submitted a formal document at the WTO opposing any negotiations on e-commerce. Commerce and Industry minister Suresh Prabhu said, “We don’t want any new issues to be brought in because there is a tendency of some countries to keep discussing new things instead of discussing what’s already on the plate. We want to keep it focused.” India has maintained that although e-commerce may be good for development, it may not be prudent to begin talks on proposals supported by developed countries. A sometimes unspoken concern is that these rules provide “unfair” market access to foreign companies, threatening developing countries’ home-grown e-commerce platforms.

China has a somewhat different view, and has expressed openness to engage in discussions on new rules to liberalize cross-border e-commerce. Back in November 2016, China had also circulated a joint e-commerce paper with Pakistan, and has since called for informal talks to “ignite” discussions on new rules, with a focus on the promotion and facilitation of cross-border trade in goods sold online, taking into account the specific needs of developing countries.

A number of other developing nations have their own proposals for what the WTO’s future digital trade agenda might include. In March 2017, Brazil  circulated a proposal seeking “shared understandings” among member states on transparency in the remuneration of copyright, balancing the interests of rights holders and users of protected works, and territoriality of copyright. In December 2016, another document prepared by Argentina, Brazil, and Paraguay focused on the electronic signatures and authentication aspect of the work programme. And in February 2017, an informal paper co-sponsored by 14 developing countries identified issues such as online security, access to online payments, and infrastructure gaps in developing countries as important areas for discussion.

Expectations From the Ministerial Meeting

With so many different proposals in play, the progress on digital trade made at the Ministerial Conference is likely to be modest, reflecting the diverging interests of WTO Members on this topic. Reports suggest that India has built strong support amongst a large number of nations, including some industrialized countries, for its core demands to reaffirm the principles of multilateralism, inclusiveness and development based on the Doha work program. Given India’s proactive stance opposing the expansion of the current work program on e-commerce, this suggests an underwhelming outcome for proponents of an expanded WTO digital trade agenda.

However, India’s draft ministerial decision on e-commerce also instructs the General Council of the WTO to hold periodic reviews in its sessions of July and December 2018 and July 2019, based on reports that may be submitted by the four WTO bodies entrusted with the implementation of its e-commerce work program, and to report to the next session of the Ministerial Conference. If enough members agree with India and relevant changes are made to suit all members, India’s draft could become an actual ministerial declaration.

In other words, even if, as seems likely, no new rules on digital trade issues come out of the 2017 WTO Ministerial Meeting, that won’t be the end of the WTO’s ambitions in this field. It seems just as likely that whatever protests take place in the streets of Buenos Aires, by activists who were excluded from the venue, will be insufficient to dissuade delegates from this course. But what we believe is achievable is to make further progress towards changing the norms around public participation in trade policy development, with the objective of improving the conditions for civil society stakeholders not only at the WTO, but also in other trade bodies and negotiations going forward.

This is one of the topics that EFF will be focusing on at this month’s Internet Governance Forum (IGF), where we will be hosting the inaugural meeting of a new IGF Dynamic Coalition on Trade and the Internet, and hopefully announcing a new multi-stakeholder resolution on the urgent need to improve transparency and public participation in trade negotiations. The closed and exclusive 2017 WTO Ministerial Meeting is an embarrassment to the organization. If and when the WTO does finally expand its work program on digital trade issues, it is essential that public interest representatives be seated around the table—not locked outside the building.

Cybersecurity’s Dirty Little Secret

“Who got breached today?” It seems that rarely does a news cycle go by without a revelation of some company, government entity, or web service experiencing a major breach with implications for vast numbers of people. The thinking has shifted from a mindset of “how can I prevent a breach?” to “I know it’s going to happen, how can I minimize the impact?” And what are those impacts? They range from embarrassment and brand degradation to significant financial loss, careers in shambles, and even companies going out of business.

The most severe breaches inevitably stem from powerful credentials (typically those logins used for administration) falling into the wrong hands. No one in their right mind would hand over the keys to their kingdom to a bad actor. But these bad actors are sneaky. They’ll get their hands on a relatively harmless user credential through social engineering, phishing, or brute force and use escalation techniques and lateral movements to gain super user access – and then all bets are off.

One of the foundational pillars of identity and access management (IAM) is the practice of privileged access management (PAM). IAM is concerned with ensuring that the right people have the right access to the right systems, in the right ways, at the right times, and that all those people with skin in the game agree that all that access is right. And PAM is simply applying those principles and practices to “superuser” accounts and administrative credentials. Examples of these credentials are the root account in Unix and Linux systems, the Admin account in Active Directory (AD), the DBA account associated with business-critical databases, and the myriad service accounts that are necessary for IT to operate.

PAM is widely viewed as perhaps the top practice that can alleviate the risk of a breach and minimize the impact if one were to occur. Key PAM principles include eliminating the sharing of privileged credentials, assigning individual accountability to their use, implementing a least-privilege access model for day-to-day administration, and implementing an audit capability on activities performed with these credentials. Unfortunately, we now have clear indicators that most organizations have not kept their PAM program on par with ever-evolving threats.

One Identity recently conducted research that revealed some alarming statistics when it comes to this most important protective practice. The study of more than 900 IT security professionals found that too many organizations are using primitive tools and practices to secure and manage privileged accounts and administrator access, in particular:

  • 18 percent of those surveyed admit to using paper-based logs for managing privileged credentials
  • 36 percent manage them with spreadsheets
  • 67 percent rely on two or more tools (including paper-based and spreadsheets) to support their PAM program

Although many organizations are attempting to manage privileged accounts (even if that attempt is made with inadequate tools), fewer are actually monitoring the activity performed with this “superuser” access:

  • 57 percent admit to only monitoring some or none of their privileged accounts
  • 21 percent admit that they do not have any ability to monitor privileged account activity
  • 31 percent report that they cannot identify the individuals who perform activities with administrative credentials. In other words, nearly one in three cannot assign the mandatory individual accountability that is so critical to protection and risk mitigation.

And if those statistics weren’t scary enough, data indicates that way too many organizations (commercial, government, and worldwide) fail to do even the basic practices that common sense demands:

  • 88 percent admit that they face challenges when it comes to managing privileged passwords
  • 86 percent do not change admin passwords after they are used – leaving the door open for the aforementioned escalation and lateral movement activities
  • 40 percent leave the default admin password intact on systems, servers, and infrastructure, functionally eliminating the need for a bad actor to even try hard to get the access they covet.

The bottom line: simple, common-sense activities such as changing the admin password after each use and not leaving the default in place will solve many of the problems. Upgrading practices and technologies to eliminate the possibility of human error, or of lags due to cumbersome password administration practices, will add a further layer of assurance and individual accountability. And finally, expanding a PAM program to include all vulnerabilities – not just the ones that are easiest to secure – will yield exponential gains in security.
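To make that rotation principle concrete, here is a minimal sketch, in Python, of the step a PAM tool automates: generate a strong random password once a privileged session ends and push it to the credential store. The vault_client object and its set_password method are hypothetical placeholders for whatever vault or directory API an organization actually uses.

```python
import secrets
import string

def generate_admin_password(length: int = 24) -> str:
    """Generate a strong random password for a privileged account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate_after_use(account: str, vault_client) -> None:
    """Rotate a privileged credential as soon as a checked-out session ends.

    vault_client is a hypothetical client for a password vault or directory
    service; real PAM products expose an equivalent API.
    """
    new_password = generate_admin_password()
    vault_client.set_password(account, new_password)  # assumed method name
    print(f"Rotated credential for {account}")
```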

About the Author: Jackson Shaw is Vice President, Product Management at One Identity. He has been involved with directory, meta-directory and security initiatives for 25 years.


Four Ways to Protect Your Backups from Ransomware Attacks

Backups are a last line of defense and a control against having to pay ransom for encrypted data, but they need protection too. This year ransomware has been rampant, targeting every industry. Two high-profile attacks, WannaCry and NotPetya, have caused hundreds of millions of dollars in losses. Naturally, cybercriminals continue to rapidly increase ransomware attacks because they are effective.

Good Backups and Effective Recovery

Proactive, not reactive, organizations have choices when it comes to ransomware. The most reliable defense against ransomware continues to be good backups and well-tested restore processes. Companies that regularly back up their data and are able to quickly detect a ransomware attack have the opportunity to restore and minimize disruption.

In some less common cases, we see wiper malware like NotPetya imitating Petya ransomware and delivering a similar ransom message. In these cases, the victims are not able to recover their data even by paying the ransom, which makes the ability to restore from good backups even more critical.

Clever Attackers Target Backups

Because good backups are so effective, the attackers behind ransomware, including nation-state agents, are now targeting the backup processes and tools themselves. Several forms of ransomware, such as WannaCry and the newer variant of CryptoLocker, delete the shadow volume copies created by Microsoft’s Windows OS. Shadow copies are a convenient recovery mechanism that Windows offers. On Macs, attackers targeted backups from the outset: researchers discovered deficient functions in the first Mac ransomware, back in 2015, that targeted the disks used by Mac OS X’s automated backup process, Time Machine.

The scheme is straightforward: encrypt the backups to cut off the organization’s ability to recover on its own, and it becomes far more likely to pay the ransom. Cybercriminals are increasing their efforts and aim to destroy the backups as well. Here are four recommendations to help organizations safeguard their backups against ransomware attempts.

One: Develop visibility into your backup process

The more quickly an organization can discover a ransomware attack, the better its chances of avoiding significant data corruption. Data from the backup process can serve as an early warning of ransomware infections. Your backup log will show signs of a program that rapidly encrypts data: incremental backups will abruptly “blow up” as every file is effectively changed, and the encrypted files cannot be compressed or deduplicated.

Monitoring essential metrics like capacity utilization from the backups every day will help organizations detect when ransomware has infiltrated an internal system and minimize the damage from the attack.
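As an illustration of that early-warning idea, the sketch below compares the latest incremental backup’s size and compression ratio against a rolling baseline and flags the abrupt “blow up” described above. The thresholds and the input format are assumptions made for the example, not features of any particular backup product.

```python
from statistics import mean

def looks_like_ransomware(history, latest, size_factor=3.0, min_compression=1.1):
    """Flag a backup run whose size and compressibility deviate sharply.

    history: recent incremental runs, each a dict with 'size_bytes' and
             'compression_ratio' (the baseline).
    latest:  a dict with the same keys for the newest run.
    """
    baseline_size = mean(run["size_bytes"] for run in history)
    size_spike = latest["size_bytes"] > size_factor * baseline_size
    # Encrypted files are effectively random data, so they barely compress.
    poor_compression = latest["compression_ratio"] < min_compression
    return size_spike and poor_compression

# Example: a 5x size spike that no longer compresses triggers the alert.
history = [{"size_bytes": 2_000_000_000, "compression_ratio": 2.4}] * 7
latest = {"size_bytes": 11_000_000_000, "compression_ratio": 1.02}
print(looks_like_ransomware(history, latest))  # True
```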

Two: Be wary of network file servers and online sharing services

Network file servers are easy to use and always available, two characteristics that make network-accessible “home” directories a popular way to centralize data and simplify backup. Yet, when faced with ransomware, this data architecture has several critical security weaknesses. Many ransomware programs encrypt connected drives, so the target’s home directory would also be encrypted. Any server that runs on a commonly targeted and vulnerable operating system like Windows could also be infected; in that case, every user’s data would be encrypted.

Any organization with a network file server must continuously back up the data to a separate system or service, and must test that system’s restore functionality against ransomware scenarios specifically.

Cloud file services are also vulnerable to ransomware. A prominent example is the 2015 Children in Film ransomware attack. Children in Film, a business providing information for child actors and their parents, used the cloud extensively, including a shared cloud drive. According to KrebsOnSecurity, less than 30 minutes after an employee clicked on a malicious email link, over four thousand files in the cloud were encrypted. Thankfully, the business’s backup provider was able to restore all of the files, but it took upwards of a week to do so.

Depending on whether the cloud service provides incremental backups or easily managed file histories, recovering data in the cloud can prove more difficult than recovering from an on-premises server.

Three: Test your recovery processes frequently

Backups are worthless unless you can recover both reliably and quickly. Organizations can have backups and still be forced to pay the ransom, because the backup schedule lacked sufficient granularity or because the intended data was never actually backed up. For example, Montgomery County, Alabama, was forced to pay a ransom to retrieve $5 million worth of data as a result of difficulties with its backup files that were unrelated to the ransomware.

Part of testing the recovery process is determining the window of data loss. An organization that does a full backup only once a week can potentially lose up to a week of data if it needs to recover from its last backup. Performing daily or hourly backups significantly increases the level of protection. More granular backups and detecting ransomware events as early as possible are both key to preventing loss.
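One way to make restore testing routine is to automate an integrity check: restore a sample of files to a scratch location and compare checksums against the protected originals. A minimal sketch follows; the directory paths are placeholders, and the point is that the comparison, not the backup job itself, is what proves recoverability.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list:
    """Return relative paths of files that are missing or differ after restore."""
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            restored = Path(restored_dir) / rel
            if not restored.exists() or sha256(src) != sha256(restored):
                mismatches.append(str(rel))
    return mismatches

# e.g. an empty list from verify_restore("/data/finance", "/restore-test/finance")
# is evidence the restore actually works, not just that the backup job ran.
```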

Four: Understand your solution options

If ransomware can access backup images directly, it will be almost impossible to prevent the attack from encrypting corporate backups. For that reason, a backup system engineered to abstract the backup data will stop ransomware in its tracks, preventing it from encrypting historical data.

Separating backups from your standard operating environment and ensuring the backup process doesn’t run on a general-purpose server and operating system can harden backups against attack. Backup systems running on the most targeted operating system, Microsoft Windows, are prone to attack and are much more difficult to protect from ransomware.

Ultimately, organizations must seek to detect ransomware attacks early through monitoring or anti-malware measures, use purpose-built systems that separate backup data from a potentially compromised system, and continuously test backup and restore processes to ensure data is effectively protected. This approach will preserve backups from ransomware attacks and reduce the risk of losing data in the event of an infection.

About the author: Rod Mathews is the SVP & GM, Data Protection Business for Barracuda. He sets strategic product direction and oversees development for all data protection offerings, including Barracuda's backup and archiving products, and is also responsible for Barracuda’s cloud operations team and infrastructure.


Shadow IT: The Invisible Network

The term “shadow IT” is used in information security circles to describe the “invisible network” that user applications create within your network infrastructure. Some of these applications are helpful and breed more efficiency while others are an unwanted workplace distraction. However, all bypass your local IT security, governance and compliance mechanisms.

The development of application policies and monitoring technology has lagged far behind the adoption of cloud-based business services, as researchers note in SkyHigh’s Cloud Adoption and Risk Report. It states, “The primary platform for software applications today is not a hard drive; it’s a web browser. Software delivered over the Internet, referred to as the cloud, is not just changing how people listen to music, rent movies, and share photos. It’s also transforming how business is conducted.” Recent studies show that businesses that follow this trend of migrating operations to the cloud increased productivity by nearly 20 percent over those that did not.

Shifting to a new security model before we determine the rules  

Traditional security thinking and products have focused solely on keeping the network and those within it safe from outside threats, and on auditing information from users, devices and alerts. The application revolution is now pushing security teams beyond the traditional network boundaries and into the cloud before acceptable-use policies and new auditing and compliance parameters have been established. It would be far more efficient to lay the auditing and policy groundwork first and then allow security operations to adapt to this new element of application awareness.

Why does application awareness change security operations so drastically? Because it:

  • Emphasizes outgoing (as opposed to incoming) communication
  • Requires relating users and devices to the applications (which older tools can’t perform)
  • Shifts the focus away from signature detection and into analytics and policy
  • Requires creating network and device use policy and implementing a means to track and measure it
  • Requires pulling logs from cloud services

Beyond the security implications, there are important governance challenges in developing new application policies. While the discussion of implementing application awareness is mostly technical, the way employees use applications can also be deeply personal. Deciding to allow or block Facebook, Twitter, Dropbox, BitTorrent, Tor and personal Gmail accounts touches a human factor that goes beyond merely stopping viruses and preventing breaches. Yet allowing such applications (especially Tor) can increase the level of risk exponentially – even beyond the threats posed by many viruses.

Changing direction to a different point of view – the insider threat

Security follows business, and business is rapidly putting its information in the cloud. Most newer security products have evolved to focus both on what is entering the network and what is leaving the network. However, the shadow IT system often circumvents corporate monitoring and security measures, and allows corporate data to flow outside the organization into the public cloud without proper oversight or control.

Replacing the threadbare notion that threats can only come into our systems from the outside is an ever-growing (and different) point of view, complemented by products and devices that also monitor outgoing communications. Until recently, this capability has been limited to security interests in data loss prevention, policy filtering and compromised system detection.

Cloud Access Security Brokers (CASBs) are one type of outgoing protection for the network, and they do provide more visibility into network flows. They also add the burden of analysts having to sort through vast quantities of data. One Gartner analyst commented that the competitive environment amongst CASB market providers “is a consequence of newness that limits the consistency and richness of the service they can provide.” He continued, “Data without action is kind of useless. Data has to be automatable so your team can solve the problem and move on to bigger projects.”

At this point, the perspective must pivot to take in both the external threat and the internal, or insider, threat. The focus here is on your employees and their careless, and sometimes malicious, behavior on network-connected devices. While some workers feel entitled to check social media or personal email applications at work, it is crucial that an organization develop smart and enforceable “acceptable-use” policies, along with regular, relevant training for all workers. This area of governance has lagged far behind the technological solutions; however, it is no less important a piece of the visibility puzzle.

What about solid, consistent governance?

Governance is all about identifying risk and deciding what is acceptable. What is the risk of non-approved applications in a current enterprise environment? SkyHigh wrote a solid white paper on what they see as the risk in their Q4 2016 Cloud Adoption Risk Report (PDF). It should be noted that this report is biased in terms of the threat, but it does, at a minimum, provide a high-level explanation of the risk.

The above report prominently noted that email/phishing is the number one vector of attack, while web-based malware downloads are rarer by comparison. Buried deep in the SkyHigh study was the reason that we need to effectively capture application usage: while greater than 60 percent of organizations surveyed had a cloud use policy, almost all of that particular group lacked the needed enforcement capability. Roughly two-thirds of services that employees attempt to access are allowed based on policy settings, but most enterprises are still struggling to enforce blocking policies for the one-third in the remaining category that were deemed inappropriate for corporate use due to their high risk.

The ideal standard of control through enforcement is complicated, even with a CASB in place, by security “silos” and the struggle to consistently enforce policies across multiple cloud-based systems. Major violations still occur despite policies: authorized users misusing cloud-based data, accessing data they shouldn’t, synching data with uncontrolled PCs, and leaving data in “open shares,” in addition to authorized users retaining access despite termination or expiration. In short, even before deploying a CASB, you can build knowledge of application use passively with other tools.

Implementing a means to passively detect applications and to track that activity to the user and device is an essential aspect of governance and risk management. Shadow IT is the risk that application awareness addresses most directly, and detecting it is far less arduous than drafting and implementing policies that could prove controversial with fellow staff members.
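A minimal sketch of that passive approach, assuming an exported proxy or firewall log in CSV form with user and destination-domain columns (real products vary): build a per-user inventory of the cloud services reached and flag anything outside an approved list.

```python
import csv
from collections import defaultdict

APPROVED = {"office365.com", "salesforce.com", "box.com"}  # example policy only

def shadow_it_report(proxy_log_csv):
    """Map each user to the unapproved services they accessed."""
    findings = defaultdict(set)
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: user, dest_domain
            domain = row["dest_domain"].lower()
            if not any(domain.endswith(ok) for ok in APPROVED):
                findings[row["user"]].add(domain)
    return findings

for user, services in shadow_it_report("outbound_proxy.csv").items():
    print(user, sorted(services))
```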

About the Author: Chris Jordan is CEO of College Park, Maryland-based Fluency, a pioneer in Security Automation and Orchestration.


4 Questions Businesses Must Ask Before Moving Identity into the Cloud

The cloud has transformed the way we work, and it will continue to do so for the foreseeable future. While the cloud provides a lot of convenience for employees and benefits for companies in terms of cost savings, speed to value and simplicity, it also brings new challenges for businesses. Coupled with Gartner’s prediction that 90 percent of enterprises will be managing hybrid IT infrastructures encompassing both cloud and on-premises solutions by 2020, the challenge becomes increasingly complex.

As is the case with any significant technology initiative, moving infrastructure to the cloud requires forethought and preparation to be successful. For many enterprises, a cloud-first IT strategy means a chance to focus on the core drivers of the business versus managing technology solutions. As these enterprises consider a cloud-first approach, they will undoubtedly be moving their IT infrastructure and security to the cloud. And identity will not be left behind.

The big question for many IT and security operations departments is: can you move your identity governance solution to the cloud? And then, perhaps more importantly, should you? The answers to these questions will vary from company to company and are dependent on the needs of the business and the current structure of the identity program.

As such, here are 4 questions every organization must ask to determine if moving identity into the cloud is the right move for their business:

  • Have you already moved any infrastructure to the cloud?

While many business applications are relatively easy to use as a service, transferring a complex identity management program into the cloud can be more challenging to implement. If your organization is already using infrastructure-as-a-service (e.g. Amazon Web Services or Microsoft Azure) then you’re likely ready to move forward with implementing a cloud-based identity governance program. However, if you haven’t experimented with moving mission-critical apps into the cloud, you should carefully consider whether your organization is prepared before making the leap. 

  • How flexible is your organization?

Regardless of how it is deployed, an effective identity governance solution must provide complete visibility across all of your on-premises and cloud applications. This visibility provides the foundation required to build policies and controls essential for compliance and security. For organizations that don’t have the time or expertise to create custom identity policies or compliant processes from scratch, cloud-based solutions can make successful deployments more attainable. However, if your organization has rigid requirements about how identity management must be configured and deployed, it may be more of a challenge to move to a cloud-based solution.

  • Do you have limited resources?

Deploying an identity governance solution can be both time- and resource-intensive, and effective identity programs require a blend of people, processes and technology to be successful. The cloud is a great option for businesses with limited resources because it doesn’t involve hardware or infrastructure upgrades, making it faster and more cost-effective than on-premise solutions. Cloud-based identity is also great for organizations with smaller IT teams or those without as much specific expertise in the space.

  • How well do you understand your governance needs?

Identity governance is more than just modifying who has access to what. Effective identity governance must also answer the questions of whether a user should have access, what kind of access they are entitled to, and what they can do with that access. And while identity governance can be simple to use, what happens behind the scenes can be very complex. This is important to understand because SaaS-based identity governance is not as customizable as an on-premise solution. So, if your identity needs are fairly straightforward, the cloud might be for you, but if your organization requires more complexity and customization, on-premise might still be the best solution.

Whether you’re moving from an on-premise identity governance solution to the cloud or implementing a cloud-based identity governance solution for the first time, it’s important to take a close look at your organization and its needs before taking the next step. With these best practices in mind, you can properly manage identities and limit the risk of inappropriate access to your sensitive business data.

About the author: Dave Hendrix oversees the engineering, product management, development, operations and client services functions in his role as senior vice president of IdentityNow.


Artificial Intelligence: A New Hope to Stop Multi-Stage Spear-Phishing Attacks

Cybercriminals are notorious for conducting attacks that are widespread, hitting as many people as possible and taking advantage of the unsuspecting. Practically everyone has received emails from a Nigerian prince, foreign banker, or dying widow offering a ridiculous amount of money in return for something from you. There are countless creative examples of phishing, from health drugs promising the fountain of youth to offers to skyrocket your love life in return for your credit card.

In more recent times, cybercriminals have taken an “enterprise approach” to attacks. Just like business-to-business sales functions, they focus on a smaller number of targets, with the objective of obtaining an exponentially greater payoff through extremely personalized and sophisticated techniques. These pointed attacks, labeled spear phishing, leverage impersonation of an employee, a colleague, your bank, or a popular web service to exploit their victims. Spear phishing has steadily been on the rise, and according to the FBI, this means of social engineering has proven to be extremely lucrative for cybercriminals. Even more concerning, spear phishing is incredibly elusive and difficult to prevent with traditional security solutions.

The most recent evolution in social engineering involves multiple premeditated steps. Rather than targeting company executives with a fake wire request out of the blue, cybercriminals hunt their victims: they first infiltrate the target organization through an administrative mail account or a low-level employee, then use reconnaissance and wait for the most opportune time to fool an executive by launching an attack from the compromised mail account. Here are the abbreviated steps commonly taken in these spear phishing attacks, and solutions to stop these attackers in their tracks.

Step 1: Infiltration

Most phishing attempts are glaringly obvious to people who receive cyber security training (executives, IT teams). These emails contain strange addresses, bold requests, and grammar mistakes that often prompt immediate deletion. However, there is a stark increase in personalized attacks that are extremely hard to sniff out, especially for people who aren’t trained. Often the only blemish in such an attack is a malicious email link that will be spotted only if you hover over it with your mouse. Highly trained individuals might spot this flaw; most employees will not.

This is why cybercriminals go after easier targets at first. Mid-level sales, marketing, support and operations folks are the most common. This initial attack aims to steal a username and password. Once the attacker has this mid-level employee’s username and password, and if multi-factor authentication hasn’t been enabled (and in many organizations it is not), they can log into the account.

Step 2: Reconnaissance

At this stage, cybercriminals will normally monitor the compromised account and study email traffic to learn about the organization. Often, attackers will set up forwarding rules on the account so they do not have to log in frequently. Analysis of the victim’s email traffic allows the attacker to understand more about the target and organization: who makes the decisions, who handles or influences financial transactions, who has access to HR information, and so on. It also opens the door for the attacker to spy on communications with partners, customers, and vendors.

This information is then leveraged for the final step of this spear phishing attack.

Step 3: Extract Value

Cybercriminals leverage this learned information to launch a targeted spear phishing attack. They often send customers fake bank account information precisely when they are planning to make a payment. They can trick other employees into sending HR information or wiring money, or easily sway them to click on links that collect additional credentials and passwords. Since the email comes from a legitimate (albeit compromised) account such as a colleague’s, it appears totally normal. The reconnaissance allows the attacker to precisely mimic the sender’s signature, tone and text style. So, how do you stop this attacker in their tracks? Thankfully, there is new hope: a multi-layer strategy of well-known methods that organizations can implement to thwart these cybercriminals.

End of the Line for Spear Phishing

There are three things that organizations should be employing now to combat spear phishing. The two obvious ones are user training and awareness, and multi-factor authentication. The last, and newest, technology to stop these attacks is real-time analytics and artificial intelligence. Artificial intelligence offers some of the strongest hope of shutting down spear phishing in the market today.

AI Protection

Artificial intelligence to stop spear phishing sounds futuristic and out of reach, but it’s in the market today and attainable for businesses of all sizes, because every business is a potential target. AI has the ability to learn and analyze an organization’s unique communication patterns and flag inconsistencies. By its nature, AI becomes stronger, smarter and more effective over time, quarantining attacks in real time while identifying high-risk individuals within an organization. For example, AI would have been able to automatically classify the email in the first stage of the attack as spear phishing, and would even detect anomalous activity in the compromised account, subsequently stopping stages two and three. It can also stop domain spoofing and unauthorized activity, preventing impersonation aimed at customers, partners and vendors to steal credentials and gain access to their accounts.
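The underlying idea can be illustrated with a far simpler heuristic than a trained model: compare the display name on an inbound message against a directory of executives and flag it when the sending address is not the one on record. This is only a toy stand-in for what AI-based products learn automatically, and the directory below is invented for the example.

```python
# Hypothetical directory of executives and their known addresses.
KNOWN_SENDERS = {
    "Jane Doe": "jane.doe@example.com",
}

def flag_impersonation(display_name, from_address):
    """Return True if a message claims a known identity from the wrong address."""
    expected = KNOWN_SENDERS.get(display_name)
    return expected is not None and from_address.lower() != expected

# "Jane Doe" <jane.doe@freemail.example> would be held for review.
print(flag_impersonation("Jane Doe", "jane.doe@freemail.example"))  # True
print(flag_impersonation("Jane Doe", "jane.doe@example.com"))       # False
```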

Authentication

It is absolutely essential for organizations to implement multi-factor authentication (MFA). In the attack described above, if multi-factor authentication had been enabled, the criminal would not have been able to gain entry to the account. There are many effective methods of multi-factor authentication, including SMS codes or mobile phone calls, key fobs, biometric thumbprints, retina scans and even face recognition.
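For instance, verifying a time-based one-time password (TOTP), the mechanism behind most authenticator apps, takes only a few lines with the pyotp library. This sketch assumes the shared secret was generated and stored server-side when the user enrolled.

```python
import pyotp  # pip install pyotp

# Shared secret provisioned at enrollment and stored server-side per user.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())

def second_factor_ok(user_supplied_code):
    """Accept the login only if the submitted code matches the current time window."""
    return totp.verify(user_supplied_code)
```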

Targeted User Training

Employees should be trained regularly and tested to increase their security awareness of the latest and most common attacks. Staging simulated attacks for training purposes is the most effective activity for prevention and promoting an employee mindset of staying on alert. For employees who handle financial transactions or are higher-risk, it’s worth giving them fraud simulation testing to assess their awareness. Most importantly, training should be companywide and not only focused on executives.  

About the author: Asaf Cidon is Vice President, Content Security Services at Barracuda Networks. In this role, he is one of the leaders for Barracuda Sentinel, the company's AI solution for real-time spear phishing and cyber fraud defense.

 


Category #1 Cyberattacks: Are Critical Infrastructures Exposed?

Critical national infrastructures are the vital systems and assets pertaining to a nation’s security, economy and welfare. They provide light for our homes; the water in our taps; a means of transportation to and from work; and the communication systems to power our modern lives. The loss or incapacity of such necessary assets upon which our daily lives depend would have a truly debilitating impact on a nation’s health and wealth. One might assume then that the security of such assets, whether virtual or physical, would be a key consideration. Or to put that another way, failing to address security vulnerabilities of such important systems would surely be an inconceivable idea.

However, the worrying truth is that the security measures of many of our nation’s critical systems are not, by and large, what they should be. Perhaps this shouldn’t be a surprise. The rapid progression of technology has enabled critical systems to become increasingly connected and intelligent, but with little experience of the problems this connectivity could create, few thought about the systems’ security.

Although this new-found connectivity has helped industries to realise great productivity and efficiency benefits, the attack on Ukraine’s power grid in 2015 opened the eyes of many in charge of such industries. After widespread power outages struck, it became clear that if security is not prioritised, the worst-case scenario could wreak havoc across our nations. Prevention is a must; a short-term fix will only delay the inevitable…

Critical infrastructures: an imminent attack

Not a case of if. But when.

It has been two years since news of Ukraine’s power grid cyberattack made headlines across the globe. And once again, critical infrastructure security has been propelled into the spotlight following a number of recent reports suggesting that a devastating attack is imminent.

The UK’s National Cyber Security Centre (NCSC) revealed in its first annual review that it received 1,131 incident reports, with 590 of these classed as ‘significant’. These included the WannaCry ransomware that disrupted the NHS. While none were identified as category one incidents, i.e. attacks interfering with democratic systems or crippling critical infrastructures such as power, the head of the NCSC, Ciaran Martin, warned there could be damaging attacks in the not-too-distant future.

Furthermore, US-CERT recently issued an alert warning critical national infrastructure firms, including nuclear, energy and water providers, that they are now at an increased risk of ‘highly targeted’ attacks by the Dragonfly APT group. This follows a report by security researchers Symantec, who recently found that during a two-year period the group has been increasing its attempts to compromise energy industry infrastructure, most notably in the UK, Turkey and Switzerland.

Although no damage has yet been done, the group has been trying to determine how power supply systems work and what could be compromised and controlled as a result. If the group now has the potential ability to sabotage or gain control of these systems should it decide to do so, the urgency around the preventative measures needed to defend against a future attack only increases. It is therefore hardly surprising that, to combat the rise of such threats, the first piece of EU-wide cybersecurity legislation has been developed to boost the overall level of cybersecurity in the EU. This is called the NIS Directive.

Addressing security from the outset

The potential consequences are disturbing, so infrastructure owners need to consider working in closer collaboration with security experts to ensure the lights remain on. While most in the security industry recognise that there is no silver bullet to ensure total security, we recommend all of those in charge of critical infrastructures ensure they have enough barriers in place to safeguard industrial and critical assets. Proactive regimes that balance defensive and offensive countermeasures, as well as include regular retraining and security techniques such as penetration testing and “red teaming”, are vital to keep defences sharpened.

One of the greatest lessons that should be heeded is that the issue of security must be addressed from the outset of infrastructure development and deployment. It has become abundantly clear that cyberattacks against critical infrastructures are only going to increase in the coming months and years. Those in charge of securing such environments must deploy a new preventative mindset, ensuring strong barriers are in place to avert the hijacking of any critical infrastructures before there is a need to clean up its devastating result.

About the author: Jalal Bouhdada is the Founder and Principal ICS Security Consultant at Applied Risk. He has over 15 years’ experience in Industrial Control Systems (ICS) security assessment, design and deployment with a focus on Process Control Domain and Industrial IT Security.


The Evolution from Waterfall to DevOps to DevSecOps and Continuous Security

Software development started with the Waterfall model, proposed in 1956, in which the process was pre-planned, set in stone, with a phase for every step. Everything was predictably…sluggish. Every organization involved in developing web applications was siloed and had its own priorities and processes. A common situation: development teams had their own timelines, quality assurance teams had to test another app first, and operations hadn’t been notified in time to build out the infrastructure needed. Not to mention, security felt that it wasn’t taken seriously. Fixing a bug introduced early in the application lifecycle was painful, because testing came much later in the process. Time and again, the end product did not address the business’s needs because the requirements had changed, or the need for the product itself was long gone.

The Agile Manifesto

After give or take 45 years of this inadequacy, the Agile Manifesto emerged in 2001. This revolutionary model advocated adaptive planning, evolutionary development, early delivery and continuous improvement, and encouraged rapid and flexible response to change. Agile adoption increased and sped up the software development process by embracing smaller release cycles and cross-functional teams. This meant that stakeholders could navigate and course-correct projects earlier in the cycle. Applications began to be released on time, which translated into addressing immediate business needs.

The DevOps Culture

With this increased agile adoption by development and testing teams, operations now became the holdup. The remedy was to bring agility to operations and infrastructure, resulting in DevOps. The DevOps culture brought together all participants involved, resulting in faster builds and deployments. Operations began building automated infrastructure, enabling developers to move significantly faster. DevOps led to the evolution of Continuous Integration/Continuous Delivery (CI/CD), basing the application development process around an automation toolchain. To convey the scale of this shift: organizations advanced from deploying a production application once annually to deploying production changes hundreds of times daily.

Security as a DevOps Afterthought

Although many processes had been automated with DevOps thus far, some functions had been ignored. A substantial piece that is not automated, but is increasingly critical to an organization’s very survival, is security. Security is one of the most challenging parts of application development. Standard testing doesn’t always catch vulnerabilities, and many times someone has to wake up at three in the morning to fix that critical SQL Injection vulnerability. Security is often perceived as being behind the times – and more commonly blamed for stalling the pace of development. Teams feel that security is a barrier to continuous deployment because of the manual testing and configuration halting automated deployments.  

As the Puppet State of DevOps report aptly states:

“All too often, we tack on security testing at the end of the delivery process. This typically means we discover significant problems, that are very expensive and painful to fix once development is complete, which could have been avoided altogether if security experts had worked with delivery teams throughout the delivery process.”

Birth of DevSecOps

The next iteration in this evolution of DevOps was integrating security into the process – with DevSecOps. DevSecOps essentially incorporates security into the CI/CD process, removing manual testing and configuration and enabling continuous deployments. As organizations move toward DevSecOps, there are substantial modifications they are encouraged to undergo to be successful. Instilling security into DevOps demands cultural and technical changes. Security teams must be included in the development lifecycle starting on day one. Security stakeholders should be involved from planning onward, at every step, working closely with development, testing, and quality assurance teams to discover security risks and software vulnerabilities and to mitigate them. Culturally, security should become accustomed to rapid change and adapt to new methods that enable continuous deployment. There needs to be a happy medium that results in rapid and secure application deployments.

Security Automation is the Key

A critical step in moving toward DevSecOps is removing manual testing and configuration. Security should be automated and driven by testing. Security teams should automate their tests and integrate them into the overall CI/CD chain. Depending on the individual application, it’s not uncommon for some tests to remain manual, but the bulk can and should be automated, especially tests that ensure applications satisfy defined baseline security needs. Security should be a priority from development through pre-production and should be automated, repeatable and consistent. When done correctly, responding to security vulnerabilities becomes far simpler at each step of the way, which inherently reduces the time taken to fix and mitigate flaws.
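As one hedged example of such a gate, the script below reads a JSON findings file produced by whatever scanner a team already runs in its pipeline and exits non-zero when anything at or above a chosen severity appears, failing the build. The findings format is an assumption for illustration; most scanners can emit something similar.

```python
import json
import sys

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings_path, fail_at="high"):
    """Return a non-zero exit code if any finding meets the severity threshold."""
    with open(findings_path) as f:
        findings = json.load(f)  # assumed: a list of {"id": ..., "severity": ...}
    threshold = SEVERITY_ORDER[fail_at]
    blockers = [finding for finding in findings
                if SEVERITY_ORDER.get(finding.get("severity", "low"), 0) >= threshold]
    for item in blockers:
        print(f"BLOCKING: {item['id']} ({item['severity']})")
    return 1 if blockers else 0

if __name__ == "__main__":
    # e.g. python security_gate.py scan-results.json  (run as a CI pipeline step)
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-results.json"))
```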

Continuous Security Beyond Deployment

Continuous security does not stop once an application is deployed. Continuous monitoring and incident response processes should be incorporated as well. The automation of monitoring and the ability to respond quickly to events are fundamental to achieving DevSecOps. Security is more important today than ever before. History shows that any security breach can be catastrophic for customers, end users and organizations alike. With more services going online and being hosted in the cloud or elsewhere, the threat landscape is growing at an exponential rate. More software inherently means more security flaws and more attack surface. Incorporating security into the daily workflow of engineering teams, and ensuring that vulnerabilities are fixed or mitigated well ahead of production, is critical to the success of any product and business today.

About the author: Jonathan Bregman is a Product Marketing Manager with Barracuda Networks focused on web application firewalls and DDoS prevention for customers. Prior to Barracuda, Jonathan was a research and development engineer with Google.


From the Medicine Cabinet to the Data Center – Snooping Is Still Snooping

We’ve all done it in one form or another. You go to a friend’s house for a party and you have to use the restroom. While you are there, you look behind the mirror or open the cabinet in hopes of finding out some detail — something juicy — about your friend. What exactly are you looking for? And why? Are you feeding into some insecurity? You don’t really know, you just know you are compelled to look.

Turns out that same human reaction carries forward to your place of employment. 

At One Identity we recently conducted a global survey that revealed a lot of eye-opening facts about people’s snooping habits on their company’s network. At a high level, the survey revealed that when given the opportunity to look through sensitive company data that they may not be permitted to access, employees’ instinct is to snoop. Before we get into specific results, here are the demographics:

  • We surveyed over 900 people from around the world.
  • Countries include the U.S., U.K., Germany, France, Australia, Singapore and Hong Kong.
  • Eighty-seven percent have privileged access to something within their place of employment.
  • They all have some level of security responsibility with varied titles ranging from executive to front-line security pros.
  • Twenty-eight percent are from large enterprises (>5,000 employees); 28 percent are from mid-sized enterprises (2,000 to 5,000 employees); the remainder are from organizations with fewer than 2,000 employees.

Key Finding Number One: 92 percent of respondents stated that employees at their company attempt to access information that they do not need. 

Think about that. Ninety-two percent of us are trying to access information we don’t need to get our jobs done. Imagine if any employee at your company could access sensitive data like salary information. Now imagine employees obtained access to financial data, customer data or merger information — and then shared it. The result could be catastrophic to your business.

Key Finding Number Two: 66 percent of the security professionals surveyed have tried to access the information they didn’t need.

Worse yet, these are security people that probably have some form of elevated privileges. This means not only are they attempting to access that information but in many cases, they are actually obtaining access and ultimately abusing that privilege.

Key Finding Number Three: Executives are more likely to snoop than managers or front-line workers.

Interestingly, IT security executives are more likely than any other job level to look for sensitive data not relevant to their job. This is worrisome, since they tend to have greater access rights and permissions — once again indicating abuse of power.

The bottom line here is that organizations should be alarmed by these findings. A common myth among many is that data is safe when it’s on a company network and in the hands of its trusted employees — it’s the outsiders and hackers you have to look out for. While the latter is certainly true, the data shows that the majority of all employees — even those within the ranks of IT security groups — are nosy when given the opportunity to be. Implementing best practices around identity and access management — like role-based access rights and permissions and applying identity analytics to spot any signs of unusual access behavior — can help organizations safeguard themselves from letting sensitive data fall into the wrong hands before it’s too late.
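A stripped-down illustration of that identity-analytics idea: given a role-to-resource policy and an access log, list every access that falls outside the user’s assigned role. Real identity analytics products model behavior statistically; this sketch, with invented data, only shows the shape of the check.

```python
ROLE_RESOURCES = {                       # hypothetical policy
    "hr": {"payroll", "benefits"},
    "engineering": {"source_code", "build_system"},
}

USER_ROLES = {"alice": "engineering", "bob": "hr"}   # hypothetical assignments

def out_of_role_accesses(access_log):
    """Yield (user, resource) pairs not permitted by the user's role."""
    for user, resource in access_log:
        allowed = ROLE_RESOURCES.get(USER_ROLES.get(user, ""), set())
        if resource not in allowed:
            yield user, resource

log = [("alice", "build_system"), ("alice", "payroll"), ("bob", "benefits")]
print(list(out_of_role_accesses(log)))  # [('alice', 'payroll')]
```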

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.

Copyright 2010 Respective Author at Infosec Island

Healthcare Orgs in the Crosshairs: Ransomware Takes Aim

Criminals are using ransomware to extort big money from organizations of all sizes in all industries, but healthcare organizations are especially attractive targets. Healthcare organizations are entrusted with the most personal, intimate information that people have – not just their financial data, but their very private health and treatment histories. Attackers perceive healthcare IT security as outdated and less effective than that of other industries. They also know that healthcare organizations tend to have significant cash on hand and a high cost of downtime, and are therefore more likely to pay the ransom for encrypted data. If you fail to take the necessary steps to combat ransomware and other advanced malware and that trust is betrayed, the cost to your business could extend far beyond paying a ransom or a noncompliance fine. If your reputation for safeguarding patient data is damaged, you will be put under the microscope – and in some cases, companies never recover and leadership is forced to resign.

Healthcare is making strides but isn’t there yet

There is good news. Healthcare organizations have made significant security improvements over the last year. The HIMSS 2017 Cybersecurity Survey makes it clear that IT security is now an urgent business challenge for leadership, rather than solely an IT problem. There is a marked increase in the employment of CIOs and Chief Information Security Officers (CISOs) among healthcare organizations, and security shortcomings are being addressed.

Nonetheless, there is still room for improvement, and ransomware attacks continue to be a serious and growing challenge. Those who continue to commit vital resources to implementing effective security measures will emerge as winners — and you will never hear about them in the media. Effectively combating ransomware requires a well-thought-out combination of technical and cultural measures.

Detection: discovering the weaknesses

Keeping your network free of ransomware and other advanced malware requires a combination of effective perimeter filtering, strategically designed network architecture, and the capability to detect and eliminate resident malware that may already be inside your network. It’s an exercise in cleaning house, as your infrastructure likely contains a number of latent threats: email inboxes are full of malicious attachments and links just waiting to be clicked on. Similarly, all applications, whether locally hosted or cloud-based, must be regularly scanned and patched for vulnerabilities. There should be a regular vulnerability management schedule for scanning and patching all network assets — a basic box to check, but one that is critical for thwarting threats. Building a solid foundation such as this is a fantastic start for effective ransomware detection and prevention.

Prevention: A non-negotiable requirement

There are some very effective security technologies that are a requirement in today’s threat landscape in order to prevent ransomware and other attacks. Preventing threats from entering the network requires a modern firewall or email gateway solution to filter out the majority of threats. An effective solution should scan incoming traffic using signature matching, advanced heuristics, behavioral analysis, sandboxing, and the ability to correlate findings with real-time global threat intelligence. This ultimately relieves employees of the burden of being perfectly trained to spot these sophisticated threats. It’s also recommended to control and segment network access to minimize the spread of threats that do get in: ensure that patients and visitors can only spread malware within their own, limited domain, while also segmenting, for example, administration, caregivers, and technical staff, each with limited, specific access to online resources. Even with the most sophisticated methods like spear phishing, where attackers impersonate a coworker, there are now machine learning and artificial intelligence solutions that can spot and quarantine these threats before they ever get to an employee. The risk for healthcare organizations is immensely reduced when solutions such as these are deployed as part of an overall security posture. However, when data is encrypted and held for ransom, the fight isn’t over yet.

Backup—Your Last, Best Defense Against Ransomware

When a ransomware attack succeeds, your critical files—HR, payroll, electronic health records, patient financial and insurance info, strategic planning documents, email records, etc.—are encrypted, and the only way to obtain the decryption key is to pay a ransom. But if you’ve been diligent about using an effective backup system, you can simply refuse to pay and restore your files from your most recent backup—your attackers will have to find someone else to rob. Automated, cloud-based backup services can provide the greatest security. Reputable vendors offer a variety of very simple and secure backup service options, priced for organizations of any size, and requiring minimal staff time. Advanced solutions can even allow you to spin up a virtual copy of your servers in the cloud, restoring access to your critical files and applications within minutes of an attack or other disaster.
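
The specifics of a backup pipeline will vary by vendor, but the core idea (automated, dated copies whose integrity can be verified before a restore) can be sketched in a few lines. The paths below are assumptions for illustration only; a real deployment would push archives off-site or to a cloud backup service.

```python
# Minimal sketch: archive a directory of critical records and store a
# SHA-256 digest alongside it so a restore can be verified after an
# incident. Paths are hypothetical placeholders.
import hashlib
import tarfile
from datetime import date
from pathlib import Path

SOURCE = Path("/var/records")   # assumed location of critical files
DEST = Path("/backups")         # assumed backup target (ideally off-host)

def backup_with_digest():
    DEST.mkdir(parents=True, exist_ok=True)
    archive = DEST / f"records-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_name(archive.name + ".sha256").write_text(digest + "\n")
    return archive

if __name__ == "__main__":
    print(f"Backup written: {backup_with_digest()}")
```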

When all of these things are working simultaneously, healthcare organizations are well equipped to stop ransomware attacks effectively. Ransomware and other threats are not going away anytime soon and healthcare will continue to be a target for attackers. The hope is that healthcare professionals continue to keep IT security top of mind. 

About the author: Sanjay is a 20 year veteran in technology and has a passion for cutting edge technology and a desire to innovate at the intersection of technology trends. He currently leads product management, marketing and strategy for Barracuda’s security business worldwide

Copyright 2010 Respective Author at Infosec Island

Thinking Outside the Suite: Adding Anti-Evasive Strategies to Endpoint Security

Despite ever-increasing investments in information security, endpoints are still the most vulnerable part of an organization’s technology infrastructure. In a 2016 report with Rapid7, IDC estimates that 70% of attacks start from the endpoint. Sophisticated ransomware exploded into a global epidemic this year, and other forms of malware, including mobile malware and malvertising, are also on the rise.

The only logical conclusion is that existing approaches to endpoint security are not working. As a result, security teams are exposed to mounting, multifaceted challenges due to the ineffectiveness of their current anti-malware solutions, large numbers of security incidents requiring costly and intensive response, and added pressure from the board to undergo risky and expensive “rip and replace” endpoint security procedures.


Current endpoint security solutions employ varying approaches. Some restrict the actions that legitimate applications can take on a system, others aim to prevent malicious software from running, and some monitor activity for incident investigations. The challenge for most IT department heads is finding the right balance of solutions that will work for their particular business.


Endpoint Protection Platforms (EPP), usually offered by established endpoint security vendors, promote the benefits of packaging endpoint control, anti-malware, and detection and response all in one agent, managed from one console. While EPP suites can be useful and practical, it’s important to understand their limitations. For starters, a “suite” does not always mean the products are integrated — you may end up with one vendor but multiple agents and management consoles. Second, no single vendor offers the best-of-breed or best-for-your-business options for all the component solutions. If you adopt the EPP approach, be aware that you will be making trade-offs of some sort. Finally, it is likely that even after going through the painful process of deploying a full endpoint protection suite, it will still fail to prevent many attacks.


All these solutions, whether installed separately or as a suite, produce alerts. Many work by finding attacks that have already “landed” to some degree. This means your team will be busy (if not overwhelmed) sorting through the alerts for priority threats, investigating incidents, and remediating any intrusions. This can lead to inefficiencies and escalating staffing requirements, which will quickly wipe out any cost savings you hoped would come from installing bundled solutions.


In the end, it is imperative to understand the strengths and weaknesses within each suite and evaluate whether a best-of-breed or “suite-plus” approach offers better protection for your investment — this is often the case. EPP implementation can help companies consolidate vendors in order to reduce administrative overhead and licensing costs. It may also help minimize complexity and reduce the impact on operations, end-users, and business agility. But none of this matters much if the shortcomings of the platform end up introducing unacceptable levels of risk, draining staff resources, or constraining productivity and agility.


For example, it’s important to recognize that accepting the low detection rates of your conventional antivirus solution also means accepting the high likelihood of a breach. That’s because there is one critical factor most platforms don’t adequately address: unknown malware that has been designed specifically to evade existing defenses. Innovative endpoint defense strategies have emerged that allow you to block evasive malware, regardless of whether there is a known signature, behavior pattern, or machine learning model. This is achieved through the creative use of deceptive tricks that control how the malware perceives its environment.

Endpoint defense solutions that can neutralize evasive malware use three primary strategies: creating a hostile environment, preventing injection through deception, and restricting document executable capabilities. All three strategies contain and disarm the malware before it ever unpacks or puts down roots. 

To create a hostile environment, the malicious program is tricked into believing the environment is not safe for execution, resulting in the malware suspending or terminating its execution. To prevent malicious software from hiding in legitimate processes, the malware is deceived into registering that memory space is unavailable, so it never establishes a foothold on the device. To block malicious actions initiated by document files (via macros, PowerShell, and other scripts), the malware is tricked into registering that system resources like shell commands are not accessible.
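
To make the first strategy concrete, the heavily simplified sketch below plants decoy “analysis environment” artifacts of the kind evasive malware commonly probes for before running, so that the malware concludes the host is unsafe and aborts. The marker names are illustrative guesses, not any vendor’s actual list or technique.

```python
# Minimal sketch of the "hostile environment" idea: plant decoy markers
# that sandbox-evasive malware checks for, so it suspends or terminates
# itself. Marker names below are hypothetical examples.
import tempfile
from pathlib import Path

DECOY_MARKERS = [
    "vm_analysis_agent.cfg",      # hints at a virtualized analysis host
    "sample_under_analysis.log",  # hints that a sandbox is recording
    "debugger_attached.flag",     # hints that a debugger is present
]

def plant_decoys(base_dir=None):
    """Create decoy artifacts and return the paths that were planted."""
    base = Path(base_dir or tempfile.gettempdir())
    planted = []
    for name in DECOY_MARKERS:
        marker = base / name
        marker.write_text("decoy artifact - do not remove\n")
        planted.append(marker)
    return planted

if __name__ == "__main__":
    for path in plant_decoys():
        print(f"Planted decoy: {path}")
```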

These new strategies reduce risk without requiring increased overhead (nothing malicious installed, so nothing to investigate) or replacement of existing solutions. Anti-evasion solutions work alongside installed AV solutions to provide an added layer of protection against sophisticated malware and ransomware. The threat intelligence they produce (identifying previously unknown malware exploits) enhances your overall security program. In addition, because incident responders have fewer alerts and incidents to sort through, they can focus their expertise on high-priority threats and investigating attacks where the intruder has already gained access to the network.

Working smarter is key to managing the growing and ever-shifting challenges and responsibilities faced by security teams. Reducing workload and manual processes while reducing risk is a tough balancing act. Ongoing cyber security talent shortages combined with multiplying threat vectors make effective automated defenses a critical priority. Getting the most value out of your security budget and skilled experts requires neutralizing threats upfront, preventing as many attacks as possible, and developing automated threat management processes. It’s essential to cover gaps and shortcomings, augmenting existing endpoint security by layering on innovative, focused solutions. Given the recent surge of virulent, global malware and ransomware, anti-evasion defenses are a smart place to start.

About the author: Eddy Boritsky is the CEO and Co-Founder of Minerva Labs, an endpoint security and anti-evasion technology solution provider. He is a cyber and information security domain expert. Before founding Minerva, Eddy was a senior cyber security consultant for the defense and financial sectors.

Copyright 2010 Respective Author at Infosec Island

Managing Cyber Security in Today’s Ever-Changing World

When it comes to victims of recent cyber-attacks, their misfortune raises a few critical questions:

  • Is anything really safe? 
  • Do the security recommendations of experts actually matter? 
  • Or do we wait for our turn to be victimized, possibly by an attack so enormous that it shuts down the entire data-driven infrastructure at the heart of our lives today?

As the Executive Director of the Information Security Forum (ISF), an organization dedicated to cyber security, my own response is that major disruptive attacks are indeed possible; however, they are not inevitable. A future in which we can enjoy the benefits of cyber technology in relative safety is within our reach.

Nevertheless, unless we recognize and apply the same dynamics which have constructively channeled other disruptive technologies, the rate and severity of cyber attacks could easily grow. 

Technical Advances

It may seem surprising, particularly in light of the tremendous technological achievement represented by the Internet and digital technology generally, that further advances in technology – which are both desirable and inevitable – may be the least important of the forces taming cybercrime. Progress in the fields of encryption and related security measures will inevitably continue. And they will just as inevitably be followed by progress in developing countermeasures. Some of those countermeasures will be the creations of technically savvy individuals – even teenage whiz kids, born in the digital age, to whom every security regimen is simply another challenge to their hacking skills. 

Over time, the contours of cybercriminal enterprise have grown to become specialized, like that of mainstream business, operating out of conventional office spaces, providing a combination of customer support, marketing programs, product development, and other trappings of the traditional business world. Some organizations develop and sell malware to would-be hackers, often including adolescents and those with relatively little computer skill of their own. Others acquire and use those tools to break into corporate networks, harvesting their information for sale or ransoming it back to its owners. Still others wholesale those stolen data files to smaller operators who either resell them or try using them to siphon money from their owners’ accounts.

Artificial intelligence using advanced analytics could offer a significant, if temporary advance in thwarting potential attackers. IBM, for example, is teaching its Watson system the argot of cyber security, which could, at least in principle, help it to recognize and block threats before they cause significant harm. But technological advances tend to be a cat and mouse game, with hackers in close pursuit of security workers. And security workers themselves can be compromised to bring their best tools over to the dark side.

Still, having even modest security technology in place can slow the pace of malicious hacking. When hacking into a digital device is more time-consuming, an attacker is less likely to try. Yet many Internet-enabled consumer devices – elements of the so-called Internet of Things, or IoT – are largely unprotected, exposing them, among other risks, to becoming unwilling robots in a vast network of slave devices engaged in denial of service attacks.

That’s not inevitable; it’s a manufacturer’s choice, driven by economics. The fact is that security can be expensive, and these devices were never designed with security in mind. They were created from the outset to provide and process information at the lowest possible cost. But because such a device maintains an open connection to the individual’s home computer – a device which may, in turn, be connected to an employer’s network – it offers intruders a portal for inflicting damage that goes well beyond the owner’s home thermostat or voice-driven speaker. Securing these devices may become an appropriate topic for government regulation.

Cyber Culture

Although no one is feeling nostalgic about it, there was a time, not terribly long ago, when conducting cyber mischief was a personal enterprise, often the work of a lonely teen operating out of a home basement or bedroom. But today, in the eyes of institutions eager to secure sensitive digital files, the solitary teenage hacker is less a problem than a nuisance.

What has largely taken his place – and the overwhelming majority of hackers are male – are well organized, highly resourced criminal enterprises, many of which are based overseas, with the ability to monetize stolen data on a scale rarely if ever achieved by the bedroom-based hacker. The most persistent of them – and the hardest to defend against – are state-sponsored. But it is among young people that cyber-culture, including its more malevolent forms, is spread and nourished. And they don’t need to be thugs to participate.

Last year alone, the value of cyber theft was estimated to have reached into the hundreds of billions of dollars, and it’s growing. But unlike bank robberies of years past, cyber-theft bypasses the need to confront victims with threats of harm to coerce them to hand over money. In fact, at the end of 2013, the British Bankers Association reported that “traditional” strong-arm bank robberies had dropped by 90 percent since 2003.  

Instead, with just a few keystrokes – often entered from thousands of miles away – the larcenous acts themselves, which produce neither injury nor fear, seem almost harmless. And, at least in the eyes of adolescent perpetrators – eyes which are frequently hidden behind a mantle of anonymity and under the influence of lawless virtual worlds that populate immersive online games – the slope leading from cyber mischief into cyber crime is very gradual and hard to discern. 

Other hackers have different motives – some feel challenged to probe and test the security of an institution’s firewalls; others to shame, expose, or seek revenge on an acquaintance, and a few posturing as highly principled whistleblowers unmasking an organization’s most sensitive secrets. But even the most traditional notions of privacy and secrecy have themselves undergone something of a metamorphosis lately. 

Examples are legion:

  • Earlier this year, as I was flying from Chicago to New York, I couldn’t help but overhear the gentleman on the opposite side of the aisle telling his seatmate – a complete stranger – all about his recent prostate surgery. 
  • Attractive and aspiring celebrities regularly leak – actually, a better term for it might be that they release – videos of the most intimate moments they’ve had with recent lovers.
  • Daytime TV shows in which a host gleefully exploits the private family dysfunctions of his guests have become a programming staple.
  • People working for extremely sensitive government organizations self-righteously hand over the nation’s most confidential data files to be posted online, purportedly to serve the public interest.

A Seismic Shift

There’s a common thread running through each of these examples.  It’s that conventional notions of privacy and appropriate information sharing have changed dramatically. It is a shift which is particularly apparent in the way younger people use the Internet in their private lives, which frequently includes the exchange of highly personal information and images. 

However, for their employers, whose electronic files typically contain sensitive personnel, financial and trade information, that behavior is not only a security concern, it is a journey into treacherous legal territory. And it is a journey which knows no jurisdictional lines. Different national cultures exert a powerful influence on their citizens’ online behavior. What are considered harmless pranks and cyber horseplay among young people in Iraq would be seen as hostile cyber attacks in the U.S.

What we find perplexing is not so much a rapid advance in technology as a profound cultural shift – a sea change that needs to be recognized, shaped and ultimately accommodated to support appropriate and lawful use of these powerful cyber tools. That shift has a direct impact on the workplace. While an employee’s online behavior can certainly damage the organization, those acts are rarely deliberate. In fact, the greater risk comes with behaving too trustfully – opening suspicious emails, clicking on links and uploading files which inadvertently create access to the organization’s network. From there, a malicious attack can move in any direction, creating massive damage.

A New Sheriff?

The heady combination of cyber whiz kids, seismic cultural change, anomic virtual realities, sophisticated criminal gangs, state-sponsored attacks and a vigorous, web-enabled marketplace for all sorts of contraband has produced a kind of Wild West on steroids – something like the early days of automobiles, only this time on a global scale with major incidents reported almost daily. 

At the same time, however, even the Wild West brought on by the motor car was eventually tamed, or at least absorbed into the mainstream of commerce and culture. That transformation was achieved through a trifecta of improved technology for both vehicles and infrastructure, more comprehensive laws coupled with better law enforcement, and a gradual shift in driving culture affecting the perceptions and behavior of motorists. 

In the cyber world, much the same dynamic applies. Improvements in technology will continue making private data more secure. A more encompassing regimen of laws and treaties affecting users and suppliers of equipment as well as service providers will help codify the public’s requirements for security. The European Union’s recently adopted General Data Protection Regulation (GDPR), which gives citizens back control of their personal data while unifying regulation within the EU, is an encouraging example. And more imaginative forms of cyber education to strengthen the culture by supporting appropriate uses of the technology – some of which are already underway in elementary and high school classrooms – will help to crystallize public expectations and inform behavior for the next generation of cyber citizens.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island

Calming the Complexity: Bringing Order to Your Network 

In thinking about today’s network environments – with multiple vendors and platforms in play, a growing number of devices connecting to the network and the need to manage it all – it’s easy to see why organizations can feel overwhelmed, unsure of the first step to take towards network management and security. But what if there was a way to distill that network complexity into an easily-managed, secure, and continuously compliant environment?

Exponential Growth

Enterprise networks are constantly growing. Between physical networks, cloud networks, the hybrid network, and the fluctuations that mobile devices introduce to the network, the number of connection points that need to be recognized and protected is daunting. Not to mention that in order to keep your organization running at optimal efficiency – and to keep it secure from potential intrusions – you must operate at the pace that the business dictates. New applications need to be deployed and ensuring connectivity is an absolute requisite, but old, now overly permissive rules also need to be removed and servers decommissioned – it’s a lot, but teams can trudge through it.

But getting through it isn’t all that you have to worry about – the potential for human error on a simple network misconfiguration needs to be factored in as well. As any IT manager knows, even slight changes to the network environment – intended or not – can have a knock-on effect across the entire network.  

What’s in Your Network?

Adding up all the moving parts that make up the network, the likely risk of introducing error through manual processes and the resulting consequences of such errors puts your network in a persistent state of jeopardy. This can take the form of lack of visibility, increased time for network changes, disrupted business continuity, or an increased attack surface that cybercriminals could find and exploit.

Considering how large enterprise networks are and the number of changes required to keep the business growing, an organization’s security team can face hundreds of change requests each and every week. These changes are too numerous, redundant, and difficult to manage manually; in fact, one manual rule change error could inadvertently introduce new access points to your network zones that may be exposed to nefarious individuals. In a large organization, small problems can quickly escalate.

The network has also fundamentally changed. Long gone are the days of sole reliance on the physical data center, as organizations incorporate the public cloud and hybrid networks into their IT infrastructure. Understanding your network topology is substantially more difficult when it’s no longer on premises. Hybrid networks are not always fully visible to IT and security teams, which complicates the ability to maintain application connectivity and ensure security.

Network Segmentation & Complexity: A Balancing Act

Network segmentation limits the exposure that an attacker would have in the event that the network is breached. By segmenting the network into zones, any attacker that enters a specific zone would be able to access only that zone – nothing else. By dividing their enterprise networks into different zones, IT managers minimize access privileges, ensuring that only those who are permitted have access to the data, information, and applications they need.

However, by segmenting the network you’re inherently adding more complexity to be managed. The more segments you have, the more opportunity there is for changes to be made in the rules that govern access among these zones.

How can an IT manager turn an intricate, hybrid network into something manageable, secure, and compliant?

The Answer: Automation and Orchestration

As we have seen, the enterprise network changes all the time – so it’s imperative to make the correct decisions and ensure that changes do not put the company at risk. The easiest way to do this is to set a network security policy and use that policy as the guide for all changes that are made in the network. With a policy-based approach, any change within the network infrastructure can be confirmed to be secure and compliant. With a centralized policy in place, you now have control.
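
As a minimal illustration of such a policy-based check, the sketch below validates proposed firewall rule changes against a central zone-to-zone policy before they are approved. The zone names and allowed flows are hypothetical examples, not any product’s configuration.

```python
# Minimal sketch: approve a proposed rule change only if the zone-to-zone
# flow it opens is allowed by the central security policy.
# Zones, ports, and the policy matrix are hypothetical examples.
ALLOWED_FLOWS = {
    ("web_dmz", "app_zone"): {443},
    ("app_zone", "db_zone"): {5432},
    # any flow not listed here is denied by policy
}

def change_is_compliant(src_zone, dst_zone, port):
    """Return True only if the proposed rule matches the central policy."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

proposed_changes = [
    ("web_dmz", "app_zone", 443),   # compliant with policy
    ("web_dmz", "db_zone", 5432),   # violates segmentation policy
]

for src, dst, port in proposed_changes:
    verdict = "approve" if change_is_compliant(src, dst, port) else "reject"
    print(f"{src} -> {dst}:{port}: {verdict}")
```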

The next step to managing complexity is removing the risks of manual errors. This is where automation and orchestration built on a policy-based approach is required.

Now you’re able to analyze the network, design network security rules, and develop and automate the rule approval process. This approach streamlines the change process and eradicates unintended errors.

Using the right automation and orchestration tools can add order and visibility to the network, manage policy violations and exceptions, and streamline operations with continuous compliance and risk management.

Together, automation and orchestration of network security policies ensures that you have a process in place that will enable you to make secure, compliant changes across the entire network – without compromising agility, risking network downtime, or investing valuable time on tedious, manual tasks.

Complexity is the reality of today’s enterprise networks. Rather than risk letting one small event cause a big ripple across your entire organization, with an automated and orchestrated approach to network security management, your network can become better-controlled – helping you improve visibility, compliance, and security.

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.

Copyright 2010 Respective Author at Infosec Island

#NCSAM: Third-Party Risk Management is Everyone’s Business

One of the weekly themes for National Cyber Security Awareness Month is “Cybersecurity in the Workplace is Everyone’s Business.”

And we couldn’t agree more. Cybersecurity is a shared responsibility that extends not just to a company’s employees, but even to the vendors, partners and suppliers that make up a company’s ecosystem. The average Fortune 500 company works with as many as 20,000 different vendors, most of whom have access to critical data and systems. As these digital ecosystems become larger and increasingly interdependent, the exposure to third-party cyber risk has emerged as one of the biggest threats resulting from these close relationships.

Third-party risk is only going to get more difficult to manage, but collaboration – the pooling of information, resources and knowledge – represents the industry’s best chance to effectively mitigate this growing threat. The PwC Global State of Information Security Survey 2016 found that 65 percent of organizations are formally collaborating with partners to improve security and reduce risks.

Overall, organizations need to put more emphasis on understanding the cyber risks their third parties pose. What risks does each third party bring to your company? Do they have access to your network? What would the impact be if they were to be breached? One of the key ways to do this is by engaging with your third parties, assessing them based on the level of risk they pose, and collaborating with them on a prioritized mitigation strategy.

It’s unlikely that the pressure facing businesses to become more efficient will lessen, which means larger digital ecosystems and more cyber risks to businesses. The only way to protect your organization from suffering a data breach as a result of a third party is to put more emphasis on understanding the cyber risks your third parties pose and working together to mitigate them.

Learn more about NCSAM at: https://www.dhs.gov/national-cyber-security-awareness-month.

Help spread the word by joining in the online conversation using the #NCSAM hashtag!

About the author: As Head of Business Development, Scott is responsible for implementing CyberGRX’s go-to-market and growth strategy. Previous to CyberGRX, he led sales & marketing at SecurityScorecard, Lookingglass, iSIGHT Partners and iDefense, now a unit of VeriSign.

Copyright 2010 Respective Author at Infosec Island

Oracle CPU Preview: What to Expect in the October 2017 Critical Patch Update

The recent media attention focused on patching software could get a shot of rocket fuel on Tuesday with the release of the next Oracle Critical Patch Update (CPU). In a pre-release statement, Oracle has revealed that the October CPU is likely to see nearly two dozen fixes to Java SE, the most common language used for web applications. New security fixes for the widely used Oracle Database Server are also expected along with patches related to hundreds of other Oracle products.

Most of the Java related flaws can be exploited without needing user credentials, with the highest vulnerability score expected to be 9.6 on a 10.0 scale. The CPU could also include the first patches related to the latest version of Java – Java 9 – which was released in September.

Oracle is also expected to backport the advanced encryption capabilities included in Java 9 (JCE Unlimited Strength Policy Files) to earlier Java versions 6 through 8.

The October CPU comes on the heels of a September out-of-cycle Security Alert from Oracle addressing flaws exploited in the Equifax attack. The Alert followed the announcement of vulnerabilities in the Struts 2 framework by Apache that were deemed too critical to wait for distribution in the quarterly patch update.

IBM also issued an out-of-cycle patch to address flaws in IBM’s Java related products in the wake of the Equifax breach.

The Equifax attack has put a spotlight on the vital importance of rapidly applying security patches, as well as the continuing struggle of security teams to keep up with the growing pace and size of patches. So far in 2017, NIST’s National Vulnerability Database has catalogued 11,525 new software flaws and has tracked more than 95,000 known vulnerabilities.

Oracle will release the final version of the CPU mid-afternoon Pacific Daylight Time on Tuesday, 17 October.   

About the author: James E. Lee is the Executive Vice President and Chief Marketing Officer at Waratek Inc., a pioneer in the next generation of application security solutions.

Copyright 2010 Respective Author at Infosec Island

Surviving Fileless Malware: What You Need to Know about Understanding Threat Diversification

Businesses and organizations that have adopted digitalization have not only become more agile, but they’ve also significantly optimized budgets while boosting competitiveness. Despite these advances in performance, the adoption of these new technologies has also increased the attack surface that cybercriminals can leverage to deploy threats and compromise the overall security posture of organizations.

The traditional threat landscape used to involve threats designed to either covertly run as independent applications on the victim’s machine, or compromise the integrity of existing applications and alter their behavior. These threats are commonly referred to as file-based malware, and traditional endpoint protection solutions have incorporated technologies designed to scan files written to disk before they execute.

File-based vs. Fileless

Some of the most common attack techniques involve either tricking victims into downloading a malicious application whose purpose is to silently run in the background and track the user’s behavior, or exploiting a vulnerability in a commonly installed piece of software so that it can covertly download additional components and execute them without the victim’s knowledge.

Traditional threats must make it onto the victim’s disk before executing the malicious code. Signature-based detection exists specifically for this reason, as it can uniquely identify a file that’s known to be malicious and prevent it from being written or executed on the machine. However, new mechanisms such as encryption, obfuscation, and polymorphism have rendered traditional detection technologies far less effective, as cybercriminals can not only manipulate the way the file looks for each individual victim, but also make it difficult for security scanning engines to analyze the code within it.

Traditional file-based malware is usually designed to gain unauthorized access to the operating system and its binaries, normally creating or unpacking additional files and dependencies, such as .dll, .sys or .exe files, that serve different functions. It can also install itself as a driver or rootkit to take full control of the operating system, sometimes using a valid digital certificate to avoid triggering traditional file-based endpoint security technologies. One such piece of file-based malware was the highly advanced Stuxnet, designed to infiltrate a specific target while remaining persistent. It was digitally signed and had various modules that enabled it to covertly spread from one victim to another until it reached its intended target.

Fileless malware is completely different from file-based malware in terms of how the malicious code is executed and how it dodges traditional file-scanning technologies. As the term implies, fileless malware does not require any file to be written to disk in order to execute. The malicious code may be executed directly within the memory of the victim’s computer, meaning that it will not persist after a system reboot. However, cybercriminals have adopted various techniques that combine fileless abilities with persistence. For example, malicious code placed within registry entries and executed each time Windows boots allows for both stealth and persistence.

The use of scripts, shellcode and even encoded binaries is not uncommon for fileless malware leveraging registry entries, as traditional endpoint security mechanisms usually lack the ability to scrutinize scripts. Because traditional endpoint security scanning tools and technologies mostly focus on static analysis of files to separate known from unknown malware samples, fileless attacks can go unnoticed for a very long time.
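
One simplified way defenders hunt for this kind of registry-resident persistence is to enumerate autorun keys and flag entries that invoke script hosts with encoded or hidden commands. The sketch below (Windows only) uses a short, illustrative keyword heuristic; it is not a complete detection.

```python
# Minimal sketch (Windows only): list HKCU Run entries and flag values
# that invoke PowerShell or other script hosts with encoded/hidden
# commands, a common fileless-persistence pattern. The marker list is
# an illustrative heuristic, not a full detection rule.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
SUSPICIOUS_MARKERS = (
    "powershell", "-encodedcommand", "-enc ", "-windowstyle hidden",
    "mshta", "wscript", "frombase64string",
)

def scan_run_key():
    findings = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            index += 1
            lowered = str(value).lower()
            if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
                findings.append((name, value))
    return findings

if __name__ == "__main__":
    for name, value in scan_run_key():
        print(f"Suspicious autorun entry '{name}': {value}")
```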

The main difference between file-based and fileless malware is where and how its components are stored and executed. The latter is becoming increasingly popular as cybercriminals have managed to dodge file-scanning technologies while maintaining persistence and stealth.

Delivery mechanisms

While both types of attacks rely on the same delivery mechanisms, such as infected email attachments or drive-by downloads exploiting vulnerabilities in browsers or commonly used software, fileless malware is usually script-based and can leverage existing legitimate applications to execute commands. For example, a PowerShell script embedded in a booby-trapped Word document can be executed automatically by PowerShell – a native Windows tool. The resulting commands could either send detailed information about the victim’s system to the attacker or download an obfuscated payload that the local traditional security solution can’t detect.

Other possible examples involve a malicious URL that, once clicked, redirects the user to websites that exploit a Java vulnerability to execute a PowerShell Script. Because the script itself is just a series of legitimate commands that may download and run a binary directly within memory, traditional file-scanning endpoint security mechanisms will not detect the threat.

These elusive threats are usually targeted at specific organizations and companies with the purpose of covert infiltration and data exfiltration.

Next-gen endpoint protection platforms

These next-gen endpoint protection platforms are usually the type of security solutions that combine layered security – which is to say file-based scanning and behavior monitoring – with machine learning technologies and threat detection sandboxing. Some technologies rely on machine learning algorithms alone as a single layer of defense, whereas other endpoint protection platforms use detection technologies that involve several security layers augmented by machine learning. In these cases, the algorithms are focused on detecting advanced and sophisticated threats at pre-execution, during execution, and post-execution.

A common mistake today is to treat machine learning as a standalone security layer capable of detecting any type of threat. Relying on an endpoint protection platform that uses only machine learning will not harden the overall security posture of an organization.

Machine learning algorithms are designed to augment security layers, not replace them. For example, spam filtering can be augmented through the use of machine learning models, and detection of file-based malware can also use machine learning to assess whether unknown files could be malicious.
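
A toy illustration of that augmentation: known-bad hashes are still handled by a signature check, and only files the signature layer has never seen are scored by a classifier trained on simple static features. The hash, features, and training data below are synthetic placeholders.

```python
# Toy sketch: signature layer first, machine learning layer second.
# The known-bad hash and the training data (entropy, size in KB,
# packer flag) are synthetic placeholders for illustration only.
from sklearn.ensemble import RandomForestClassifier

KNOWN_BAD_HASHES = {"9f86d081884c7d659a2feaa0c55ad015"}  # illustrative digest

X_train = [[7.9, 320, 1], [7.6, 150, 1], [4.1, 900, 0], [3.8, 60, 0]]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = benign
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def verdict(file_hash, features):
    """Signature check first; fall back to the ML layer for unknown files."""
    if file_hash in KNOWN_BAD_HASHES:
        return "block (signature match)"
    score = model.predict_proba([features])[0][1]
    return f"{'block' if score > 0.5 else 'allow'} (ML score {score:.2f})"

print(verdict("unseen-hash", [7.8, 200, 1]))
```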

Signature-less security layers are designed to offer protection, visibility, and control when it comes to preventing, detecting, and blocking any type of threat. Considering these new attack methods, it’s highly recommended that next-gen endpoint security platforms protect against attack tools and techniques that exploit unpatched known vulnerabilities – and of course, unknown vulnerabilities – in applications. 

It’s important to note that traditional signature-based technologies are not dead and should not be discarded. They’re an important security layer, as they’re accurate and quick at validating whether a file is known to be malicious. The merging of signature-based, behavior-based, and machine learning security layers creates a security solution that’s not only able to deal with known malware, but also to tackle unknown threats, which boosts the overall security posture of an organization. This comprehensive mix of security technologies is designed not only to increase the overall cost of attack for cybercriminals, but also to offer security teams deep insight into what types of threats are usually targeting their organization and how to accurately mitigate them.

About the author: Bogdan Botezatu is living his second childhood at Bitdefender as senior e-threat analyst. When he is not documenting sophisticated strains of malware or writing removal tools, he teaches extreme sports such as surfing the Web without protection or how to rodeo with wild Trojan horses.

Copyright 2010 Respective Author at Infosec Island

Why Cloud Security Is a Shared Responsibility

Security professionals protect on-premises data centers with wisdom gained through years of hard-fought experience. They deploy firewalls, configure networks and enlist infrastructure solutions to protect racks of physical servers and disks.

With all this knowledge, transitioning to the cloud should be easy. Right?

Wrong. Two common misconceptions will derail your move to the cloud:

  1. The cloud provider will take care of security
  2. On-premises security tools work just fine in the cloud

So, if you’re about to join the cloud revolution, start by answering these questions: how are security responsibilities shared between clients and cloud vendors? And why do on-premises security solutions fail in the cloud?

Cloud Models and Shared Security

A cloud model defines the services provided by the provider. It also defines how the provider splits security responsibilities with customers. Sometimes the split is obvious: cloud providers are, of course, tasked with physical security for their facilities. Cloud customers, obviously, control which users can access their apps and services. After that the picture can get a little murky.

The following three cloud models don’t comprehensively account for every cloud variation, but they help clarify who is responsible for what:

Software-as-a-Service (SaaS): SaaS providers are responsible for the hardware, servers, databases, data, and the application itself. Customers subscribe to the service and end users interact directly with the application(s) provided by the SaaS vendor. Salesforce and Office 365 are two well-known SaaS offerings.

Platform as a Service (PaaS): PaaS vendors offer a turnkey environment for higher-level programming. The vendor manages the hardware, servers, and databases while the PaaS customer writes the code needed to deliver custom applications. Engine Yard and Google App Engine are examples of PaaS solutions.

Infrastructure as a Service (IaaS): An IaaS environment lets customers create and operate an end-to-end virtualized infrastructure. The IaaS vendor manages all physical aspects of the service as well as the virtualization services needed to build solutions. Customers are responsible for everything else – the applications, workloads, or containers deployed in the cloud. Amazon Web Services (AWS) and Microsoft Azure are popular IaaS solutions.

The key to understanding shared security lies in understanding who makes the decisions about a specific aspect of the cloud solution. For example, Microsoft calls the shots on Excel development for their Office 365 SaaS solution. Vulnerabilities in Excel are, therefore, Microsoft’s responsibility. In the same spirit, security vulnerabilities in an app you create on a PaaS service are your responsibility – but operating system vulnerabilities are not.

This all seems like common sense – but it means you’ll need to understand your cloud model to understand your security responsibilities. If you’re securing an IaaS solution you’ll need to take a broad perspective. Everything from server configurations to container provenance can impact your security posture – and they are your responsibility.
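
For an IaaS example of that responsibility, a publicly readable storage bucket is the customer’s misconfiguration to find and fix, not the provider’s. Below is a minimal sketch using boto3 (AWS credentials and the relevant S3 permissions are assumed) that flags buckets whose ACL grants access to all users.

```python
# Minimal sketch: flag S3 buckets whose ACL grants access to AllUsers.
# Assumes boto3 is installed and credentials with s3:ListAllMyBuckets
# and s3:GetBucketAcl are configured. Illustrative only -- bucket
# policies and public-access-block settings also matter.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") == ALL_USERS_URI:
                public.append((bucket["Name"], grant["Permission"]))
    return public

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"Bucket '{name}' grants {permission} to AllUsers")
```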

Security “Lift and Shift”

An IaaS solution can virtually replicate on-premises infrastructure in the cloud. So lifting and shifting your on-premises security to the cloud may seem like the best way to get up and running. But that approach has led many cloud transitions to ruin. Why? The cloud needs different security approaches for three important reasons:

Change Velocity

Hardware limits how fast a traditional data center can change. The cloud eliminates physical constraints and changes how we think about servers and storage. Cloud solutions, for example, scale by instantly and automatically bringing new servers online. But for traditional security tools, this cloud velocity is chaos. Metered usage costs rapidly spin out of control. Configuration and policy management becomes an overwhelming task. Interdependent security processes become brittle and unreliable.

Network Limitations

On-premises data centers take advantage of stable networks to establish boundaries. In the cloud, networks are temporary resources. Virtual entities join and leave instantaneously and across geographical boundaries. Network identifiers (like IP addresses) no longer provide the same stable control points as they once did and encryption makes it harder to observe application behavior from the network. Network-centric security tools leave cloud solutions vulnerable to lateral movement by attackers.

Cloud Complexity

When the cloud removes barriers to velocity, the number of machines, servers, containers, and networks explodes. As complex as on-premises data centers can be, cloud solutions are far worse: the number of cloud entities, configuration files, event logs, locations, networks, and connections is too much for even expert human analysis. Analyzing security incidents, assessing the impact of a breach, or even simply tracing an administrator’s activities isn’t possible with traditional data center security tools.

Cloud Security Needs New Solutions

Moving to the cloud is more than a simple lift-and-shift of existing servers and apps to a different set of servers. Granted, offloading infrastructure responsibilities to your provider is a huge win. Without capital expenses and the inertia of hardware, IT organizations do more with less, faster.

Fortunately, new cloud-centric security solutions make your move to the cloud easier. Three key capabilities can keep you out of trouble as you transition: automation, an expanded focus on apps and operations (in addition to networks), and behavioral baselining.

Automation makes it possible to keep up with cloud changes (and DevOps teams) during deployment, operations, and incident investigations. Moving the security focus up the stack reduces the impact of network impermanence in the cloud and delivers better visibility into high-level application and service operations. And behavioral baselining makes short work of otherwise tedious rule and policy development.
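
Behavioral baselining can be illustrated with a very small sketch: record which (entity, action) pairs occur during a learning window, then alert on anything never seen before. Real platforms do far more; the entities and actions below are hypothetical placeholders.

```python
# Toy sketch of behavioral baselining: anything outside the learned
# set of (entity, action) pairs is flagged. Names are placeholders.
baseline_window = [
    ("web-01", "listen:443"), ("web-01", "exec:nginx"),
    ("admin-jane", "login:bastion"), ("web-01", "listen:443"),
]
new_activity = [
    ("web-01", "listen:443"),          # matches the baseline
    ("web-01", "exec:curl"),           # never seen before -> alert
    ("admin-jane", "login:db-prod"),   # never seen before -> alert
]

baseline = set(baseline_window)

for entity, action in new_activity:
    if (entity, action) not in baseline:
        print(f"ALERT: {entity} performed unbaselined action '{action}'")
```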

With the right technologies, and an understanding of differences, security pros can easily make the move to the cloud.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.

Copyright 2010 Respective Author at Infosec Island

Crypto is For Everyone—and American History Proves It

Over the last year, law enforcement officials around the world have been pressing hard on the notion that without a magical “backdoor” to access the content of any and all encrypted communications by ordinary people, they’ll be totally incapable of fulfilling their duties to investigate crime and protect the public. EFF and many others have pushed back—including launching a petition with our friends to SaveCrypto, which this week reached 100,000 signatures, forcing a response from President Obama.

Read more…



How to Protect Yourself from the NSA If You Use 1024-bit DH Encryption

In a post on Wednesday, researchers Alex Halderman and Nadia Heninger presented compelling research suggesting that the NSA has developed the capability to decrypt a large number of HTTPS, SSH, and VPN connections using an attack on common implementations of the Diffie-Hellman key exchange algorithm with 1024-bit primes. Earlier in the year, they were part of a research group that published a study of the Logjam attack, which leveraged overlooked and outdated code to enforce “export-grade” (downgraded, 512-bit) parameters for Diffie-Hellman. By performing a cost analysis of the algorithm with stronger 1024-bit parameters and comparing that with what we know of the NSA “black budget” (and reading between the lines of several leaked documents about NSA interception capabilities), they concluded that it’s likely NSA has been breaking 1024-bit Diffie-Hellman for some time now.

Read more…



SETI: Snowden Should Stick to Human Affairs and Let Us Figure out How to Find Aliens

Edward Snowden may know a thing or two about encryption, but his remarks on encrypted alien signals aren’t sitting quite right with SETI. According to those in the business of searching for extraterrestrials, Snowden should probably keep his security advice limited to human affairs.

Read more…



Edward Snowden: Advanced Encryption May Stop Us Communicating With Aliens

On Friday, Neil deGrasse Tyson welcomed Edward Snowden to his StarTalk podcast. Along with the usual conversations about privacy and government, Snowden had another important warning to provide: encryption may hurt our abilities to see, or be seen by, extraterrestrials.

Read more…



Encrypted cloud storage service Wuala announced today they’re shutting down, going read-only on September 30th

Encrypted cloud storage service Wuala announced today they’re shutting down, going read-only on September 30th and purging data on November 15th. Wuala was one of our favorite secure cloud storage services, but they recommend another of our faves, Tresorit. You can read more here: http://lifehacker.com/the-best-cloud…

Read more…


