Argentinian Government Bans Civil Society Organizations From Attending Upcoming WTO Ministerial Meeting

The World Trade Organization (WTO), the multilateral global trade body that has almost all countries as members, has been eyeing an expansion of its work on digital trade for some time. Its current inability to address such issues is becoming an existential problem for the organization, as its relevance is challenged by the rise of smaller regional trade agreements such as the Trans-Pacific Partnership (TPP), North American Free Trade Agreement (NAFTA), and Regional Comprehensive Economic Partnership (RCEP) that do contain digital trade rules.

That’s one reason why some experts are now arguing that the WTO ought to retake leadership over digital trade rulemaking. Their reasoning is that a global compact could be more effective than a regional one at combatting digital protectionism, such as laws that restrict Internet data flows or require platforms to install local servers in each country where they offer service.

Civil Society Barred from WTO Ministerial Meeting

It’s true that some countries do have protectionist rules that affect Internet freedom, and that global agreements could help address these rules. But the problem with casting your lot with the WTO is that, as closed and opaque as deals like the TPP, NAFTA, and RCEP are, the WTO is in most respects no better. That was underscored last week, when in a surprise move the Argentinian government blocked representatives from civil society organizations (CSOs) from attending the WTO’s upcoming biennial Ministerial Meeting of its 164 member states, scheduled for 10-13 December in Buenos Aires.

Last week the WTO reached out to more than 64 representatives from CSOs, including digital rights organizations Access Now and Derechos Digitales, to inform them that “for unspecified reasons, the Argentine security authorities have decided to deny your accreditation.” The Argentine government later issued a press release claiming that activists had been banned as “they had made explicit calls to manifestations of violence through social networks”—a remarkable claim for which no evidence was presented, and which the groups in question have challenged.

Most of the banned organizations belong to the Our World Is Not For Sale network (OWINFS), a global social-justice network that has been engaging in WTO activities, including organizing panels and sessions, for over two decades. In a strongly-worded letter, Deborah James, OWINFS Network Coordinator, condemned Argentina’s actions and noted that the lack of explanation behind the decision “attacked the conference’s integrity” and violated “a key principle of international diplomacy”.

Even before these delegates were barred from the meeting, their ability to participate in the WTO Ministerial was tightly constrained. Unlike other international negotiation bodies such as WIPO, the WTO does not permit non-state actors to attend meetings even as observers, nor to obtain copies of documents under negotiation. Their admission into the meeting venue would only authorize them to meet with delegates in corridors and private side-meetings, and Argentina’s action has taken away even that. Instead, public interest groups will essentially be limited to meeting and protesting outside the Ministerial venue, out of sight and out of mind of the WTO delegates inside.

Multilateral v. Multistakeholder Approaches to Digital Trade

Thus the problem with the suggestion that the WTO should take on the negotiation of new Internet-related issues is that any such expansion of the WTO mandate would require an overhaul of its existing standards and procedures for negotiations. International trade negotiations are government-led, and allow for very limited public oversight or participation in the process. By contrast, the gold standard for Internet-related policy development is for a global community of experts and practitioners to participate in an open, multistakeholder setting.

Transparent consultative practices are critical in developing rules on complex digital issues, because prescriptions nominally about commerce and trade can affect citizens’ free speech and other fundamental individual rights. In this respect and others, digital issues are different from conventional trade issues such as quotas and tariffs, and it is important to involve users in discussion of such issues from the outset. Through documents such as our Brussels Declaration on Trade and the Internet, EFF has been calling upon governments to make trade policymaking on Internet issues more transparent and accountable, whether it is conducted at a multilateral or a smaller plurilateral level.

The WTO’s lack of any institutional mechanisms to gather inputs from the public and its inability to assure participation for CSOs is a big blow to the WTO’s credibility as a leader on global digital trade policy. Argentina’s unprecedented ban on CSOs is especially worrying, as e-commerce is expected to be a key topic of discussion at the Ministerial.

E-commerce Agenda Up In The Air

Last week, WTO director general Roberto Azevedo announced that he would appoint “minister facilitators” to work with sectoral chairs, and identified e-commerce as an area for special focus. That doesn’t mean that it’s an entirely new issue for the WTO. E-commerce (now sometimes also called “digital trade”) entered the WTO in 1998, when member countries agreed not to impose customs duties on electronic transmissions; that moratorium has been extended periodically, though no new substantive issues have been taken on.

This is changing. Since last year, developed and developing countries have been locked in a battle over whether the WTO’s digital trade work program should expand to include new digital trade issues such as cross-border data flows and localization, technology transfer, disclosure of source code of imported products, consumer protection, and platform safe harbors.

This push has come most strongly from developed countries including the United States, Japan, Canada, Australia, and Norway. During an informal meeting at the WTO in October, the EU, Canada, Australia, Chile, Korea, Norway and Paraguay, among other countries, circulated a restricted draft ministerial decision that would establish “a working party” at the upcoming WTO ministerial meeting in Buenos Aires and authorize it to “conduct preparations for and carry out negotiations on trade-related aspects of electronic commerce on the basis of proposal by Members”.

Among these is a May 2017 proposal presented by the European Union in which the co-sponsors mapped out possible digital trade policy issues to be covered, including rules on spam, electronic contracts, and electronic signatures. The co-sponsors noted that the list they provided was not exhaustive, and they invited members to give their views on what additional elements should be added.

But many developing nations have opposed the introduction of new issues, instead favoring the conclusion of pending issues from the Doha Round of WTO negotiations, which are on more traditional trade topics such as agriculture. In particular, India this week submitted a formal document at the WTO opposing any negotiations on e-commerce. Commerce and Industry minister Suresh Prabhu said, “We don’t want any new issues to be brought in because there is a tendency of some countries to keep discussing new things instead of discussing what’s already on the plate. We want to keep it focused.” India has maintained that although e-commerce may be good for development, it may not be prudent to begin talks on proposals supported by developed countries. A sometimes unspoken concern is that these rules provide “unfair” market access to foreign companies, threatening developing countries’ home-grown e-commerce platforms.

China has a somewhat different view, and has expressed openness to engage in discussions on new rules to liberalize cross-border e-commerce. Back in November 2016, China had also circulated a joint e-commerce paper with Pakistan, and has since called for informal talks to “ignite” discussions on new rules, with a focus on the promotion and facilitation of cross-border trade in goods sold online, taking into account the specific needs of developing countries.

A number of other developing nations have their own proposals for what the WTO’s future digital trade agenda might include. In March 2017, Brazil circulated a proposal seeking “shared understandings” among member states on transparency in the remuneration of copyright, balancing the interests of rights holders and users of protected works, and territoriality of copyright. In December 2016, another document prepared by Argentina, Brazil, and Paraguay focused on the electronic signatures and authentication aspect of the work programme. And in February 2017, an informal paper co-sponsored by 14 developing countries identified issues such as online security, access to online payments, and infrastructure gaps in developing countries as important areas for discussion.

Expectations From the Ministerial Meeting

With so many different proposals in play, the progress on digital trade made at the Ministerial Conference is likely to be modest, reflecting the diverging interests of WTO Members on this topic. Reports suggest that India has built strong support amongst a large number of nations, including some industrialized countries, for its core demands of reaffirming the principles of multilateralism, inclusiveness, and development based on the Doha work program. Given India’s proactive stance opposing the expansion of the current work program on e-commerce, this suggests an underwhelming outcome for proponents of an expanded WTO digital trade agenda.

However, India’s draft ministerial decision on e-commerce also instructs the General Council of the WTO to hold periodic reviews in its sessions in July and December 2018 and July 2019, based on the reports that may be submitted by the four WTO bodies entrusted with the implementation of its e-commerce work program, and to report to the next session of the Ministerial Conference. If enough members agree with India, and relevant changes are made to suit all members, India’s draft could become an actual declaration.

In other words, even if, as seems likely, no new rules on digital trade issues come out of the 2017 WTO Ministerial Meeting, that won’t be the end of the WTO’s ambitions in this field. It seems just as likely that whatever protests take place in the streets of Buenos Aires, from activists who were excluded from the venue, will be insufficient to dissuade delegates from this course. But what we believe is achievable is to make further progress towards changing the norms around public participation in trade policy development, with the objective of improving the conditions for civil society stakeholders not only at the WTO, but also in other trade bodies and negotiations going forward.

This is one of the topics that EFF will be focusing on at this month’s Internet Governance Forum (IGF), where we will be hosting the inaugural meeting of a new IGF Dynamic Coalition on Trade and the Internet, and hopefully announcing a new multi-stakeholder resolution on the urgent need to improve transparency and public participation in trade negotiations. The closed and exclusive 2017 WTO Ministerial Meeting is an embarrassment to the organization. If and when the WTO does finally expand its work program on digital trade issues, it is essential that public interest representatives be seated around the table—not locked outside the building.

Cybersecurity’s Dirty Little Secret

“Who got breached today?” It seems that rarely does a news cycle go by without a revelation of some company, government entity, or web service experiencing a major breach with implications for vast numbers of people. The thinking has shifted from a mindset of “how can I prevent a breach?” to “I know it’s going to happen, how can I minimize the impact?” And what are those impacts? They range from embarrassment and brand degradation to significant financial loss, careers in shambles, and even companies going out of business.

The most severe breaches inevitably stem from powerful credentials (typically the logins used for administration) falling into the wrong hands. No one in their right mind would hand over the keys to their kingdom to a bad actor. But bad actors are sneaky. They’ll get their hands on a relatively harmless user credential through social engineering, phishing, or brute force, then use privilege escalation techniques and lateral movement to gain superuser access – and then all bets are off.

One of the foundational pillars of identity and access management (IAM) is the practice of privileged access management (PAM). IAM is concerned with ensuring that the right people have the right access, to the right systems, in the right ways, at the right times – and that all those people with skin in the game agree that all that access is right. PAM is simply applying those principles and practices to “superuser” accounts and administrative credentials. Examples of these credentials are the root account in Unix and Linux systems, the Admin account in Active Directory (AD), the DBA account associated with business-critical databases, and the myriad service accounts that are necessary for IT to operate.

PAM is widely viewed as perhaps the top practice that can alleviate the risk of a breach and minimize the impact if one were to occur. Key PAM principles include eliminating the sharing of privileged credentials, assigning individual accountability to their use, implementing a least-privilege access model for day-to-day administration, and implementing an audit capability on activities performed with these credentials. Unfortunately, we now have clear indicators that most organizations have not kept their PAM program on par with ever-evolving threats.

One Identity recently conducted research that revealed some alarming statistics when it comes to this most important protective practice. The study of more than 900 IT security professionals found that too many organizations are using primitive tools and practices to secure and manage privileged accounts and administrator access, in particular:

  • 18 percent of those surveyed admit to using paper-based logs for managing privileged credentials
  • 36 percent manage them with spreadsheets
  • 67 percent rely on two or more tools (including paper-based and spreadsheets) to support their PAM program

Although many organizations are attempting to manage privileged accounts (even if that attempt is with inadequate tools), fewer are actually monitoring the activity performed with this “superuser” access:

  • 57 percent admit to only monitoring some or none of their privileged accounts
  • 21 percent admit that they do not have any ability to monitor privileged account activity
  • 31 percent report that they cannot identify the individuals that perform activities with administrative credentials. In other words, nearly one in three cannot assign the mandatory individual accountability that is so critical to protection and risk mitigation.

And if those statistics weren’t scary enough, data indicates that way too many organizations (commercial, government, and worldwide) fail to do even the basic practices that common sense demands:

  • 88 percent admit that they face challenges when it comes to managing privileged passwords
  • 86 percent do not change admin passwords after they are used – leaving the door open for the aforementioned escalation and lateral movement activities
  • 40 percent leave the default admin password intact on systems, servers, and infrastructure, functionally eliminating the need for a bad actor to even try hard to get the access they covet.

The bottom line is simple: common-sense activities such as changing the admin password after each use and not leaving the default in place will solve many of the problems. Upgrading practices and technologies to eliminate the possibility of human error, or lags due to cumbersome password administration practices, will add an additional layer of assurance and individual accountability. And finally, expanding a PAM program to include all vulnerabilities – not just the ones that are easiest to secure – will yield exponential gains in security.
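As a minimal sketch of the rotate-after-use principle described above, the following Python fragment generates a fresh credential on every check-in; the `vault` dictionary and account name are hypothetical stand-ins for whatever secrets store and check-in API a real PAM product would expose:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_admin_password(length: int = 24) -> str:
    """Generate a high-entropy, one-time administrative password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate_after_use(vault: dict, account: str) -> str:
    """Replace the stored credential the moment a privileged session ends.

    `vault` is a stand-in for the organization's secrets store; a real PAM
    product would call its check-in API here instead of mutating a dict.
    """
    new_password = generate_admin_password()
    vault[account] = new_password
    return new_password

# The credential changes on every checkout/check-in cycle, so a password
# captured during one session is useless in the next.
vault = {"root@db01": "initial-password"}
first = rotate_after_use(vault, "root@db01")
second = rotate_after_use(vault, "root@db01")
```

The same one-time-credential pattern also closes the default-password hole: rotate the credential at provisioning time, before the system ever goes live.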

About the Author: Jackson Shaw is Vice President, Product Management at One Identity. He has been involved with directory, meta-directory and security initiatives for 25 years.

Copyright 2010 Respective Author at Infosec Island

The NSA Agent Who Inexplicably Exposed Critical Secrets, Featuring David Kennedy

A series of leaks has rocked the National Security Agency over the past few years, resulting in digital spy tools strewn across the web that have caused real damage both inside and outside the agency. Many of the breaches have been relatively simple to carry out, often by contractors like the whistleblower Edward Snowden, who employed just a USB drive and some chutzpah. But the most recently revealed breach, which resulted in state secrets reportedly being stolen by Russian spies, was caused by an NSA employee who pleaded guilty Friday to bringing classified information to his home, exposing it in the process. And all, reportedly, to update his resume.

Read the Article: The NSA Agent Who Inexplicably Exposed Critical Secrets.

The post The NSA Agent Who Inexplicably Exposed Critical Secrets, Featuring David Kennedy appeared first on TrustedSec.

Four Ways to Protect Your Backups from Ransomware Attacks

Backups are a last line of defense and control against having to pay ransom for encrypted data, but they need protection too. This year ransomware has been rampant, targeting every industry. Two high-profile attacks, WannaCry and NotPetya, have caused hundreds of millions of dollars in losses. Naturally, cybercriminals continue to rapidly increase ransomware attacks because they are effective.

Good Backups and Effective Recovery

Proactive, not reactive, organizations have choices when it comes to ransomware. The most reliable defense against ransomware continues to be good backups and well-tested restore processes. Companies that regularly back up their data and are able to quickly detect a ransomware attack have the opportunity to restore and minimize disruption.

In some less common cases, we see wiper malware like NotPetya imitating Petya ransomware and delivering a similar ransom message. In such cases, victims are not able to recover their data even by paying the ransom, which makes the ability to restore from good backups even more critical.

Clever Attackers Target Backups

Because good backups are so effective, the attackers behind ransomware – including nation-state agents – are now targeting the backup processes and tools themselves. Several forms of ransomware, such as WannaCry and the newer variant of CryptoLocker, delete the shadow volume copies created by Microsoft’s Windows OS. Shadow copies are a simple recovery method that Windows offers. On Macs, attackers targeted backups from the outset: researchers discovered functions – albeit deficient ones – in the first Mac ransomware back in 2015 that targeted disks used by Mac OS X’s automated backup process, Time Machine.
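One common tell is visible at the command line: families that delete shadow copies typically shell out to built-in Windows utilities. A crude, illustrative detection sketch in Python follows; the fragment list is an assumption for the example, not a complete signature set:

```python
# Command-line fragments commonly seen when malware tears down Windows
# shadow copies. Illustrative only -- not a complete signature list.
SUSPICIOUS_FRAGMENTS = (
    "delete shadows",        # e.g. vssadmin.exe delete shadows /all /quiet
    "shadowcopy delete",     # e.g. wmic shadowcopy delete
    "resize shadowstorage",  # vssadmin trick used to silently purge copies
)

def flags_shadow_copy_tampering(command_line: str) -> bool:
    """Return True if a process command line matches a known fragment."""
    lowered = command_line.lower()
    return any(fragment in lowered for fragment in SUSPICIOUS_FRAGMENTS)
```

In practice a check like this would run as an EDR or SIEM rule over process-creation events rather than as a standalone script.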

The scheme is straightforward: encrypt the backups to cut off the organization’s means of recovery, and it is far more likely to pay the ransom. Cybercriminals are increasing their efforts and aim to destroy the backups as well. Here are four recommendations to help organizations safeguard their backups against ransomware attempts.

One: Develop visibility into your backup process

The more quickly an organization can discover a ransomware attack, the better its chances of avoiding significant corruption of data. Data from the backup process can serve as an early warning of ransomware infections. Your backup log will show signs of a program that is mass-encrypting data: incremental backups will abruptly “blow up” as every file is effectively changed, and the encrypted files cannot be compressed or deduplicated.

Monitoring essential metrics like capacity utilization from the backups every day will help organizations detect when ransomware has infiltrated an internal system and minimize the damage from the attack.
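As an illustration of that early-warning idea, the sketch below flags an incremental backup whose size deviates sharply from the recent baseline; the sample sizes and the three-sigma threshold are assumptions for the example, and real backup products surface these metrics through their own reporting:

```python
import statistics

def backup_looks_infected(history_gb, latest_gb, threshold=3.0):
    """Flag an incremental backup whose size deviates sharply from history.

    Mass encryption touches nearly every file at once, so the incremental
    "blows up" against the recent baseline, and the encrypted output no
    longer compresses or deduplicates.
    """
    mean = statistics.mean(history_gb)
    stdev = statistics.pstdev(history_gb) or 1e-9  # guard against zero spread
    return (latest_gb - mean) / stdev > threshold

# Nightly incrementals normally hover around 5 GB...
history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
quiet_night = backup_looks_infected(history, 5.3)
# ...until ransomware rewrites the whole file set overnight.
loud_night = backup_looks_infected(history, 180.0)
```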

Two: Be wary of using network file servers and online sharing services

Network file servers are easy to use and always available – two reasons why network-accessible “home” directories are a popular way to centralize data and simplify backup. Yet, when confronted with ransomware, this data architecture has several critical security weaknesses. Many ransomware programs encrypt connected drives, so the target’s home directory would also be encrypted. Any server that runs on a commonly targeted and vulnerable operating system like Windows could also be infected; thus, every user’s data would be encrypted.

Any organization with a network file server must continuously back up the data to a separate system or service, and test that system’s restore functionality against ransomware scenarios specifically.

Cloud file services are also vulnerable to ransomware. A notable example is the 2015 Children in Film ransomware attack. Children in Film, a business providing information for child actors and their parents, used the cloud extensively, including a shared cloud drive. According to KrebsOnSecurity, less than 30 minutes after an employee clicked on a malicious email link, over four thousand files in the cloud were encrypted. Thankfully, the business’s backup provider was able to restore all of the files, but it took upwards of a week to do so.

Depending on whether the cloud service delivers incremental backups or easily managed file histories, recovering data in the cloud can prove more difficult than recovering from an on-premises server.

Three: Test your recovery processes frequently

Backups are worthless unless you have the ability to recover both reliably and quickly. Organizations can have backups but still be forced to pay the ransom, because the backup schedule failed to perform backups with sufficient granularity, or they were not backing up the intended data. For example, Montgomery County, Alabama was forced to pay a ransom to retrieve $5 million worth of data as a result of difficulties with its backup files unrelated to the ransomware.

Part of testing the recovery process is determining the window of data loss. An organization that does a full backup every week can potentially lose up to a week of data if it needs to recover from its last backup. Performing daily or hourly backups significantly increases the level of protection. More granular backups and detecting ransomware events as early as possible are both key to preventing loss.
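The arithmetic of that data-loss window is simple enough to sketch; the schedules and the two-hour detection lag below are illustrative assumptions:

```python
# Hours between successful backups for a few common schedules.
SCHEDULES = {"weekly": 7 * 24, "daily": 24, "hourly": 1}

def worst_case_loss_hours(interval_hours, detection_lag_hours=0.0):
    """Worst case: the attack lands just before the next scheduled run,
    so everything since the last good backup is lost, plus however long
    the infection goes undetected."""
    return interval_hours + detection_lag_hours

for name, interval in SCHEDULES.items():
    hours = worst_case_loss_hours(interval, detection_lag_hours=2)
    print(f"{name}: up to {hours} hours of work at risk")
```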

Four: Understand your solution options

If ransomware can access backup images directly, it will be almost impossible to prevent the attack from encrypting corporate backups. For that reason, a backup system engineered to abstract the backup data will stop ransomware from encrypting historical data.

Separating backups from your standard operating environment, and ensuring the backup process doesn’t run on a general-purpose server and operating system, can harden backups against attack. Backup systems running on the most targeted operating system, Microsoft Windows, are prone to attack and are much more difficult to protect from ransomware.

Ultimately, organizations must seek to detect ransomware attacks early with monitoring or anti-malware measures, use purpose-built systems that separate backup data from a potentially compromised system, and continuously test backup and restore processes to ensure data is effectively protected. This approach will preserve backups from ransomware attacks and reduce the risk of losing data in the event of an infection.

About the author: Rod Mathews is the SVP & GM, Data Protection Business for Barracuda. He directs strategic product direction and development for all data protection offerings, including Barracuda's backup and archiving products and is also responsible for Barracuda’s cloud operations team and infrastructure.


Shadow IT: The Invisible Network

The term “shadow IT” is used in information security circles to describe the “invisible network” that user applications create within your network infrastructure. Some of these applications are helpful and breed more efficiency while others are an unwanted workplace distraction. However, all bypass your local IT security, governance and compliance mechanisms.

The development of application policies and monitoring technology has lagged far behind the use of cloud-based business services, as researchers note in SkyHigh’s Cloud Adoption and Risk Report. It states, “The primary platform for software applications today is not a hard drive; it’s a web browser. Software delivered over the Internet, referred to as the cloud, is not just changing how people listen to music, rent movies, and share photos. It’s also transforming how business is conducted.” Recent studies show that businesses that follow this trend of migrating operations to the cloud actually increased productivity by nearly 20 percent over those that did not.

Shifting to a new security model before we determine the rules  

Traditional security thinking and products have focused solely on keeping the network and those within it safe from outside threats, and on auditing information from users, devices and alerts. The application revolution is now pushing security teams beyond the traditional network boundaries and into the cloud before acceptable-use policies and new auditing and compliance parameters have been established. It is much more efficient to lay the auditing and policy groundwork first and then allow security operations to adapt to this new element of application awareness.

Why does application awareness change security operations so drastically? Because it:

  • Emphasizes outgoing (as opposed to incoming) communication
  • Requires relating users and devices to the applications (which older tools can’t perform)
  • Shifts the focus away from signature detection and into analytics and policy
  • Requires creating network and device use policy and implementing a means to track and measure it
  • Requires pulling logs from cloud services

Beyond the security implications, there are important governance challenges in developing new application policies. While the discussion of implementing application awareness is mostly technical, the way employees use applications can also be deeply personal. Deciding to allow or block Facebook, Twitter, Dropbox, BitTorrent, Tor and personal Gmail accounts touches a human factor that goes beyond merely stopping viruses and preventing breaches. Yet allowing such applications (especially Tor) can increase the level of risk exponentially – even beyond the threats posed by many viruses.

Changing direction to a different point of view – the insider threat

Security follows business, and business is rapidly putting its information in the cloud. Most newer security products have evolved to focus both on what is entering the network and what is leaving the network. However, the shadow IT system often circumvents corporate monitoring and security measures, and allows corporate data to flow outside the organization into the public cloud without proper oversight or control.

Replacing the threadbare notion that threats could only come into our systems from the outside is an ever-growing (and different) point of view, complemented by products and devices that also monitor outgoing communications. Until recently, this capability has been limited to security interests in data loss prevention, policy filtering and compromised-system detection.

Cloud Access Security Brokers (CASBs) are one type of outgoing protection for the network, and they do provide more visibility into network flows. They also add the burden of analysts having to sort through vast quantities of data. One Gartner analyst commented that the competition currently among CASB market providers “is a consequence of newness that limits the consistency and richness of the service they can provide.” He continued, “Data without action is kind of useless. Data has to be automatable so your team can solve the problem and move on to bigger projects.”

At this point, the perspective must pivot to gain vision into both the external threat and the internal, or insider, threat. The focus here is on your employees and their careless – and sometimes malicious – behavior on network-connected devices. While some workers feel entitled to check social media or personal email applications at work, it is crucial that an organization develop smart and enforceable “acceptable-use” policies, along with regular, relevant training for all workers. This area of governance has lagged far behind the technological solutions; however, it is no less important a piece of the visibility puzzle.

What about solid, consistent governance?

Governance is all about identifying risk and deciding what is acceptable. What is the risk of non-approved applications in a current enterprise environment? SkyHigh wrote a solid white paper on what they see as the risk in their Q4 2016 Cloud Adoption Risk Report (PDF). It should be noted that this report is biased in terms of the threat, but it does, at a minimum, provide a high-level explanation of the risk.

The above report prominently noted that email/phishing is the number one vector of attack, while web-based malware downloads are rarer by comparison. Buried deep in the SkyHigh study was the reason that we need to effectively capture application usage: while greater than 60 percent of organizations surveyed had a cloud use policy, almost all of that particular group lacked the needed enforcement capability. Roughly two-thirds of services that employees attempt to access are allowed based on policy settings, but most enterprises are still struggling to enforce blocking policies for the one-third in the remaining category that were deemed inappropriate for corporate use due to their high risk.

The ideal standard of control through enforcement is complicated, even with a CASB in place, by security “silos” and a struggle to consistently enforce policies across multiple cloud-based systems. Major violations still occur despite policies: authorized users misusing cloud-based data, accessing data they shouldn’t, synching data with uncontrolled PCs, and leaving data in “open shares,” in addition to authorized users retaining access despite termination or expiration. In short, even before deploying a CASB, you can build up usage knowledge passively with other tools.

Implementing a means to passively detect applications, and tying that activity to the user and device, is an essential aspect of governance and risk management. Shadow IT is the term most closely tied to the risk that application awareness addresses, and gaining that visibility is a far less arduous task than drafting and implementing policies that could be controversial with fellow staff members.
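As a sketch of what passive application detection can look like, the fragment below tallies cloud-app domains per user from proxy log lines; the log format and the sanctioned-app list are invented for the example, and a real deployment would parse its proxy’s actual log format and pull the allow-list from policy:

```python
from collections import Counter

# Hypothetical allow-list of sanctioned cloud services.
SANCTIONED = {"office365.com", "salesforce.com"}

def inventory_cloud_apps(proxy_log_lines):
    """Tally how often each (user, destination domain) pair appears.

    Each line is assumed to be a whitespace-separated 'user domain' pair.
    """
    usage = Counter()
    for line in proxy_log_lines:
        user, domain = line.split()
        usage[(user, domain)] += 1
    return usage

def shadow_it_report(usage):
    """Domains observed on the wire that no policy has approved."""
    return sorted({domain for (_, domain) in usage if domain not in SANCTIONED})

log = [
    "alice office365.com",
    "alice dropbox.com",
    "bob personal-gmail.com",
    "bob dropbox.com",
]
unsanctioned = shadow_it_report(inventory_cloud_apps(log))
```

Feeding such a report back to application owners turns the invisible network into an inventory that policy can act on.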

About the Author: Chris Jordan is CEO of College Park, Maryland-based Fluency, a pioneer in Security Automation and Orchestration.


4 Questions Businesses Must Ask Before Moving Identity into the Cloud

The cloud has transformed the way we work and it will continue to do so for the foreseeable future. While the cloud provides a lot of convenience for employees and benefits for companies in terms of cost savings, speed to value and simplicity, it also brings new challenges for businesses. When coupled with the fact that Gartner predicts 90 percent of enterprises will be managing hybrid IT infrastructures encompassing both cloud and on-premises solutions by 2020, the challenge becomes increasingly complex.

As is the case with any significant technology initiative, moving infrastructure to the cloud requires forethought and preparation to be successful. For many enterprises, a cloud-first IT strategy means a chance to focus on the core drivers of the business versus managing technology solutions. As these enterprises consider a cloud-first approach, they will undoubtedly be moving their IT infrastructure and security to the cloud. And identity will not be left behind.

The big question for many IT and security operations departments is: can you move your identity governance solution to the cloud? And then, perhaps more importantly, should you? The answers to these questions will vary from company to company and are dependent on the needs of the business and the current structure of the identity program.

As such, here are four questions every organization must ask to determine whether moving identity into the cloud is the right move for their business:

  • Have you already moved any infrastructure to the cloud?

While many business applications are relatively easy to use as a service, transferring a complex identity management program into the cloud can be more challenging to implement. If your organization is already using infrastructure-as-a-service (e.g. Amazon Web Services or Microsoft Azure) then you’re likely ready to move forward with implementing a cloud-based identity governance program. However, if you haven’t experimented with moving mission-critical apps into the cloud, you should carefully consider whether your organization is prepared before making the leap. 

  • How flexible is your organization?

Regardless of how it is deployed, an effective identity governance solution must provide complete visibility across all of your on-premises and cloud applications. This visibility provides the foundation required to build policies and controls essential for compliance and security. For organizations that don’t have the time or expertise to create custom identity policies or compliant processes from scratch, cloud-based solutions can make successful deployments more attainable. However, if your organization has rigid requirements about how identity management must be configured and deployed, it may be more of a challenge to move to a cloud-based solution.

  • Do you have limited resources?

Deploying an identity governance solution can be both time- and resource-intensive, and effective identity programs require a blend of people, processes and technology to be successful. The cloud is a great option for businesses with limited resources because it doesn’t involve hardware or infrastructure upgrades, making it faster and more cost-effective than on-premise solutions. Cloud-based identity is also great for organizations with smaller IT teams or those without as much specific expertise in the space.

  • How well do you understand your governance needs?

Identity governance is more than just modifying who has access to what. Effective identity governance must also answer the questions of whether a user should have access, what kind of access they are entitled to, and what they can do with that access. And while identity governance can be simple to use, what happens behind the scenes can be very complex. This is important to understand because SaaS-based identity governance is not as customizable as an on-premise solution. So, if your identity needs are fairly straightforward, the cloud might be for you, but if your organization requires more complexity and customization, on-premise might still be the best solution.
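The governance questions above (should this user have access, and to what level?) can be sketched as a simple entitlement check; the role model, access levels, and function below are hypothetical illustrations, not any vendor's API:

```python
# Hypothetical role model: each role maps resources to the highest
# access level the role is entitled to.
ROLE_ENTITLEMENTS = {
    "engineer": {"source-code": "read-write"},
    "hr":       {"payroll": "read-write", "employee-records": "read"},
}
LEVELS = {"none": 0, "read": 1, "read-write": 2}

def access_violations(role, granted):
    """Return resources where granted access exceeds the role's entitlement."""
    allowed = ROLE_ENTITLEMENTS.get(role, {})
    return [res for res, lvl in granted.items()
            if LEVELS[lvl] > LEVELS[allowed.get(res, "none")]]
```

A real governance product layers workflow, certification campaigns, and audit trails on top of checks like this; the complexity lives behind the scenes, exactly as the paragraph above notes.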

Whether you’re moving from an on-premise identity governance solution to the cloud or implementing a cloud-based identity governance solution for the first time, it’s important to take a close look at your organization and its needs before taking the next step. With these best practices in mind, you can properly manage identities and limit the risk of inappropriate access to your sensitive business data.

About the author: Dave Hendrix oversees the engineering, product management, development, operations and client services functions in his role as senior vice president of IdentityNow.


Artificial Intelligence: A New Hope to Stop Multi-Stage Spear-Phishing Attacks

Cybercriminals are notorious for conducting widespread attacks, hitting as many people as possible and taking advantage of the unsuspecting. Practically everyone has received emails from a Nigerian prince, foreign banker, or dying widow offering a ridiculous amount of money in return for something from you. There are countless creative examples of phishing, from health drugs promising the fountain of youth to products guaranteed to skyrocket your love life, all in return for your credit card number.

In more recent times, cybercriminals are taking an “enterprise approach” to attacks. Just like business to business sales functions, they focus on a smaller number of targets, with an objective of obtaining an exponentially greater payload with extremely personalized and sophisticated techniques. These pointed attacks, labeled spear phishing, leverage impersonation of an employee, a colleague, your bank, or popular web service to exploit their victims. Spear phishing has steadily been on the rise, and according to the FBI, this means of social engineering has proven to be extremely lucrative for cybercriminals. Even more concerning, spear phishing is incredibly elusive and difficult to prevent with traditional security solutions. 

The most recent evolution in social engineering involves multiple premeditated steps. Rather than targeting company executives with a fake wire-fraud request out of the blue, cybercriminals hunt their victims. They first infiltrate the target organization through an administrative mail account or a low-level employee, then conduct reconnaissance and wait for the most opportune time to fool the executive by launching an attack from the compromised mail account. Here are the abbreviated steps commonly taken in these spear phishing attacks, and the solutions that stop these attackers in their tracks.

Step 1: Infiltration

Most phishing attempts are glaringly obvious to people who have received cybersecurity training (executives, IT teams). These emails contain strange addresses, bold requests, and grammar mistakes that quickly prompt deletion. However, there is a stark increase in personalized attacks that are extremely hard to sniff out, especially for people who aren’t trained. Often, the only blemish in such an attack is a malicious email link that can be spotted only by hovering over it with the mouse. Highly trained individuals might spot this flaw, but ordinary employees rarely do.

This is why cybercriminals find easier targets at first, most commonly mid-level sales, marketing, support and operations staff. This initial attack aims to steal a username and password. Once the attacker controls this mid-level person’s account, and if multi-factor authentication hasn’t been enabled (many organizations do not enable it), they can log into the account.

Step 2: Reconnaissance

At this stage, cybercriminals normally monitor the compromised account and study its email traffic to learn about the organization. Attackers will often set up forwarding rules on the account so they don’t need to log in frequently. Analyzing the victim’s email traffic allows the attacker to understand more about the target and the organization: who makes the decisions, who handles or influences financial transactions, who has access to HR information, and so on. It also opens the door for the attacker to spy on communications with partners, customers, and vendors.
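One concrete, if simplified, countermeasure for this reconnaissance stage is to audit mailbox rules for external auto-forwarding. The sketch below assumes a hypothetical rule format and corporate domain:

```python
# Hypothetical sketch: flag mailbox rules that auto-forward mail to
# addresses outside the corporate domain, a common sign of a
# compromised account during reconnaissance.
CORPORATE_DOMAIN = "example.com"   # assumed internal domain

def suspicious_forwarding(rules):
    """Return forwarding rules whose target is outside the corporate domain."""
    flagged = []
    for rule in rules:
        target = rule.get("forward_to", "")
        if target and not target.endswith("@" + CORPORATE_DOMAIN):
            flagged.append(rule)
    return flagged

rules = [
    {"name": "archive",    "forward_to": "archive@example.com"},
    {"name": "inbox-copy", "forward_to": "dropbox123@attacker.net"},
]
```

In practice this audit would pull rules from the mail platform's admin API and run on a schedule, alerting on any newly created external forward.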

This information is then leveraged for the final step of this spear phishing attack.

Step 3: Extract Value

Cybercriminals leverage this learned information to launch a targeted spear phishing attack. They often send customers fake bank account information precisely when a payment is being planned. They can trick other employees into sending HR information or wiring money, or easily sway them to click on links that harvest additional credentials and passwords. Since the email comes from a legitimate (albeit compromised) account, such as a colleague’s, it appears totally normal. The reconnaissance allows the attacker to precisely mimic the sender’s signature, tone and text style. So, how do you stop this attacker? Thankfully, there is new hope: a multi-layer strategy of well-known methods that organizations can implement to thwart these cybercriminals.

End of the Line for Spear Phishing

There are three things organizations should be employing now to combat spear phishing. The two obvious ones are user training and awareness, and multi-factor authentication. The last, and newest, technology to stop these attacks is real-time analytics and artificial intelligence, which offers some of the strongest hope on the market today for shutting down spear phishing.

AI Protection

Artificial intelligence to stop spear phishing sounds futuristic and out of reach, but it is on the market today and attainable for businesses of all sizes, because every business is a potential target. AI can learn and analyze an organization’s unique communication patterns and flag inconsistencies. By its nature, AI becomes stronger, smarter and more effective over time, quarantining attacks in real time while identifying high-risk individuals within an organization. For example, AI would have been able to automatically classify the email in the first stage of the attack as spear phishing, and would even detect the anomalous activity in the compromised account, stopping stages two and three. It can also stop domain spoofing and unauthorized activity, preventing attackers from impersonating the organization to customers, partners and vendors in order to steal credentials and gain access to their accounts.
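The idea of learning a communication baseline and flagging deviations can be illustrated with a deliberately tiny example; real AI-based products use far richer signals than the single sending-hour model assumed here:

```python
# Toy illustration of baseline-and-deviation detection: model a sender's
# usual sending hour, then flag messages sent far outside that pattern.
from statistics import mean, pstdev

def build_baseline(send_hours):
    """Summarize a sender's historical sending hours as (mean, std dev)."""
    return mean(send_hours), pstdev(send_hours)

def is_anomalous(hour, baseline, tolerance=2.0):
    """Flag an hour more than `tolerance` deviations from the baseline."""
    mu, sigma = baseline
    return abs(hour - mu) > tolerance * max(sigma, 1.0)

history = [9, 10, 10, 11, 9, 10, 11, 10]  # sender usually emails mid-morning
baseline = build_baseline(history)
```

A 3 a.m. message from this sender would be flagged, while a 10 a.m. one would not; production systems combine many such features (recipients, tone, link targets) rather than one.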


Multi-Factor Authentication

It is absolutely essential for organizations to implement multi-factor authentication (MFA). In the above attack, if multi-factor authentication had been enabled, the criminal would not have been able to gain entry to the account. There are many effective methods for multi-factor authentication, including SMS codes, mobile phone calls, key fobs, biometric thumbprints, retina scans and even face recognition.

Targeted User Training

Employees should be trained regularly and tested to increase their security awareness of the latest and most common attacks. Staging simulated attacks for training purposes is the most effective activity for prevention and promoting an employee mindset of staying on alert. For employees who handle financial transactions or are higher-risk, it’s worth giving them fraud simulation testing to assess their awareness. Most importantly, training should be companywide and not only focused on executives.  

About the author: Asaf Cidon is Vice President, Content Security Services at Barracuda Networks. In this role, he is one of the leaders for Barracuda Sentinel, the company's AI solution for real-time spear phishing and cyber fraud defense.



Category #1 Cyberattacks: Are Critical Infrastructures Exposed?

Critical national infrastructures are the vital systems and assets pertaining to a nation’s security, economy and welfare. They provide light for our homes; the water in our taps; a means of transportation to and from work; and the communication systems to power our modern lives. The loss or incapacity of such necessary assets upon which our daily lives depend would have a truly debilitating impact on a nation’s health and wealth. One might assume then that the security of such assets, whether virtual or physical, would be a key consideration. Or to put that another way, failing to address security vulnerabilities of such important systems would surely be an inconceivable idea.

However, the worrying truth is that the security measures of many of our nations’ critical systems are not, by and large, what they should be. Perhaps this shouldn’t be a surprise. The rapid progression of technology has enabled critical systems to become increasingly connected and intelligent, but with little experience of the problems this connectivity could create, few thought about the systems’ security.

Although this newfound connectivity has helped industries realise great productivity and efficiency benefits, the attack on Ukraine’s power grid in 2015 opened the eyes of many in charge of such industries. After nationwide power outages struck, it became clear that if security is not prioritised, the worst-case scenario could wreak havoc across our nations. Prevention is a must; a short-term fix will only delay the inevitable…

Critical infrastructures: an imminent attack

Not a case of if. But when.

It has been two years since news of Ukraine’s power grid cyberattack made headlines across the globe. And once again, critical infrastructure security has been propelled into the spotlight following a number of recent reports suggesting that a devastating attack is imminent.

The UK’s National Cyber Security Centre (NCSC) revealed in its first annual review that it received 1,131 incident reports, with 590 of these classed as ‘significant’. This included the WannaCry ransomware that took down the NHS. While none of these were identified as category one incidents, i.e. interfering with democratic systems or crippling critical infrastructures such as power, the head of the NCSC, Ciaran Martin, warned there could be damaging attacks in the not too distant future.

Furthermore, US-CERT recently issued an alert warning critical national infrastructure firms, including nuclear, energy and water providers, that they are now at an increased risk of ‘highly targeted’ attacks by the Dragonfly APT group. This follows a report by security researchers Symantec, who recently found that during a two-year period the group has been increasing its attempts to compromise energy industry infrastructure, most notably in the UK, Turkey and Switzerland.

Although no damage has yet been done, the group has been trying to determine how power supply systems work and what could be compromised and controlled as a result. Knowing that the group may now have the ability to sabotage or gain control of these systems, should it decide to do so, only increases the urgency of the preventative measures needed to defend against a future attack. It is therefore hardly surprising that, to combat the rise of such threats, the first piece of EU-wide cybersecurity legislation, the NIS Directive, has been developed to boost the overall level of cybersecurity in the EU.

Addressing security from the outset

The potential consequences are disturbing, so infrastructure owners need to consider working in closer collaboration with security experts to ensure the lights remain on. While most in the security industry recognise that there is no silver bullet to ensure total security, we recommend all of those in charge of critical infrastructures ensure they have enough barriers in place to safeguard industrial and critical assets. Proactive regimes that balance defensive and offensive countermeasures, as well as include regular retraining and security techniques such as penetration testing and “red teaming”, are vital to keep defences sharpened.

One of the greatest lessons to be heeded is that security must be addressed from the outset of infrastructure development and deployment. It has become abundantly clear that cyberattacks against critical infrastructures are only going to increase in the coming months and years. Those in charge of securing such environments must adopt a new preventative mindset, ensuring strong barriers are in place to avert the hijacking of any critical infrastructure before there is a need to clean up the devastating results.

About the author: Jalal Bouhdada is the Founder and Principal ICS Security Consultant at Applied Risk. He has over 15 years’ experience in Industrial Control Systems (ICS) security assessment, design and deployment with a focus on Process Control Domain and Industrial IT Security.


The Evolution from Waterfall to DevOps to DevSecOps and Continuous Security

Software development started with the Waterfall model, proposed in 1956, where the process was pre-planned, set in stone, with a phase for every step. Everything was predictably…sluggish. Every organization involved in developing web applications was siloed, each with its own priorities and processes. In a common scenario, development teams had their own timelines, quality assurance teams were busy testing another app, and operations hadn’t been notified in time to build out the needed infrastructure. Not to mention, security felt they weren’t taken seriously. Fixing a bug introduced early in the application lifecycle was painful, because testing came much later in the process. All too often, the end product did not address the business’s needs because the requirements had changed, or the need for the product itself was long gone.

The Agile Manifesto

After give or take 45 years of this inadequacy, the Agile Manifesto emerged in 2001. This revolutionary model advocated adaptive planning, evolutionary development, early delivery and continuous improvement, and encouraged rapid and flexible response to change. As Agile adoption increased, the software development process sped up, embracing smaller release cycles and cross-functional teams. This meant that stakeholders could navigate and course-correct projects earlier in the cycle. Applications began to be released on time, which translated to addressing immediate business needs.

The DevOps Culture

With this increased Agile adoption by development and testing teams, operations became the holdup. The remedy was to bring agility to operations and infrastructure, resulting in DevOps. The DevOps culture brought all participants together, resulting in faster builds and deployments. Operations began building automated infrastructure, enabling developers to move significantly faster. DevOps led to the evolution of Continuous Integration/Continuous Delivery (CI/CD), basing the application development process around an automation toolchain. To illustrate the shift: organizations advanced from deploying a production application once annually to deploying production changes hundreds of times daily.

Security as a DevOps Afterthought

Although many processes had been automated with DevOps thus far, some functions had been ignored. A substantial piece that is not automated, but is increasingly critical to an organization’s very survival, is security. Security is one of the most challenging parts of application development. Standard testing doesn’t always catch vulnerabilities, and many times someone has to wake up at three in the morning to fix that critical SQL Injection vulnerability. Security is often perceived as being behind the times – and more commonly blamed for stalling the pace of development. Teams feel that security is a barrier to continuous deployment because of the manual testing and configuration halting automated deployments.  

As the Puppet State of DevOps report aptly states:

“All too often, we tack on security testing at the end of the delivery process. This typically means we discover significant problems that are very expensive and painful to fix once development is complete, which could have been avoided altogether if security experts had worked with delivery teams throughout the delivery process.”

Birth of DevSecOps

The next iteration in this evolution was integrating security into the process: DevSecOps. DevSecOps essentially incorporates security into the CI/CD process, removing manual testing and configuration and enabling continuous deployments. As organizations move toward DevSecOps, there are substantial changes they must undergo to be successful. Instilling security into DevOps demands both cultural and technical change. Security teams must be included in the development lifecycle from day one, involved at every step from planning onward. They need to work closely with development, testing, and quality assurance teams to discover security risks and software vulnerabilities and mitigate them. Culturally, security should become accustomed to rapid change, adapting to new methods that enable continuous deployment. The goal is a happy medium that results in rapid and secure application deployments.

Security Automation is the Key

A critical step in moving toward DevSecOps is removing manual testing and configuration. Security should be automated and driven by testing. Security teams should automate their tests and integrate them into the overall CI/CD chain. Depending on the application, it’s not uncommon for some tests to remain manual, but the bulk can and should be automated, especially tests that ensure applications satisfy defined baseline security requirements. Security should be a priority from development through pre-production, and should be automated, repeatable and consistent. Done correctly, responding to security vulnerabilities becomes far simpler at each step of the way, which inherently reduces the time taken to fix and mitigate flaws.
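As a sketch of what one automated baseline security test in a CI/CD chain might look like, the check below verifies that an HTTP response carries a few common security headers; the required set and data shapes are illustrative assumptions, not a complete standard:

```python
# Hypothetical baseline security test: fail the build if a staging
# response is missing any of an assumed set of security headers.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers):
    """Return the baseline headers absent from an HTTP response, sorted."""
    present = {h.strip() for h in response_headers}
    return sorted(REQUIRED_HEADERS - present)

# In CI, these headers would come from a request to the staging server;
# here they are a hard-coded example.
staging_response = {"Content-Type": "text/html",
                    "Strict-Transport-Security": "max-age=31536000"}
gaps = missing_security_headers(staging_response.keys())
```

Wired into the pipeline, a non-empty `gaps` list would fail the stage, making the baseline check repeatable and consistent on every deployment.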

Continuous Security Beyond Deployment

Continuous security does not stop once an application is deployed. Continuous monitoring and incident response processes should be incorporated as well. Automating monitoring and being able to respond quickly to events is a fundamental piece of achieving DevSecOps. Security is more important today than ever before. History shows that a security breach can be catastrophic for customers, end users and the organizations themselves. With more services going online and hosted in the cloud or elsewhere, the threat landscape is growing at an exponential rate. More software inherently means more security flaws and a larger attack surface. Incorporating security into the daily workflow of engineering teams, and ensuring that vulnerabilities are fixed or mitigated well ahead of production, is critical to the success of any product and business today.

About the author: Jonathan Bregman is a Product Marketing Manager with Barracuda Networks focused on web application firewalls and DDoS prevention for customers. Prior to Barracuda, Jonathan was a research and development engineer with Google.


From the Medicine Cabinet to the Data Center – Snooping Is Still Snooping

We’ve all done it in one form or another. You go to a friend’s house for a party and you have to use the restroom. While you are there, you look behind the mirror or open the cabinet in hopes of finding out some detail — something juicy — about your friend. What exactly are you looking for? And why? Are you feeding into some insecurity? You don’t really know, you just know you are compelled to look.

Turns out that same human reaction carries forward to your place of employment. 

At One Identity we recently conducted a global survey that revealed a lot of eye-opening facts about people’s snooping habits on their company’s network. At a high level, the survey revealed that when given the opportunity to look through sensitive company data that employees may not be permitted to access, the instinct is to snoop. Before we get into specific results, here are the demographics:

  • We surveyed over 900 people from around the world.
  • Countries include the U.S., U.K., Germany, France, Australia, Singapore and Hong Kong.
  • Eighty-seven percent have privileged access to something within their place of employment.
  • They all have some level of security responsibility with varied titles ranging from executive to front-line security pros.
  • Twenty-eight percent are from large enterprises (>5,000 employees); 28 percent from mid-sized enterprises (2,000 to 5,000 employees); the remainder were from organizations with fewer than 2,000 employees.

Key Finding Number One: 92 percent of respondents stated that employees at their company attempt to access information that they do not need. 

Think about that. Ninety-two percent of us are trying to access information we don’t need to get our jobs done. Imagine if any employee at your company could access sensitive data like salaries. That would be damaging enough. Now imagine employees obtained access to financial data, customer data or merger information, and then shared it. The result could be catastrophic to your business.

Key Finding Number Two: 66 percent of the security professionals surveyed have tried to access the information they didn’t need.

Worse yet, these are security people who likely have some form of elevated privileges. That means they are not only attempting to access the information but, in many cases, actually obtaining access and ultimately abusing that privilege.

Key Finding Number Three: Executives are more likely to snoop than managers or front-line workers.

Interestingly, IT security executives are more likely than any other job level to look for sensitive data not relevant to their job. This is worrisome, since executives tend to have greater access rights and permissions, once again indicating abuse of power.

The bottom line here is that organizations should be alarmed by these findings. A common myth among many is that data is safe when it’s on a company network and in the hands of its trusted employees — it’s the outsiders and hackers you have to look out for. While the latter is certainly true, the data shows that the majority of all employees — even those within the ranks of IT security groups — are nosy when given the opportunity to be. Implementing best practices around identity and access management — like role-based access rights and permissions and applying identity analytics to spot any signs of unusual access behavior — can help organizations safeguard themselves from letting sensitive data fall into the wrong hands before it’s too late.
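The identity-analytics idea mentioned above can be sketched as flagging access events that fall outside a user's historical pattern; the data shapes and example events below are hypothetical:

```python
# Illustrative identity-analytics sketch: build each user's usual
# resource set from history, then flag accesses outside that set.
from collections import defaultdict

def build_profiles(events):
    """events: iterable of (user, resource) pairs from access history."""
    profiles = defaultdict(set)
    for user, resource in events:
        profiles[user].add(resource)
    return profiles

def unusual_accesses(new_events, profiles):
    """Return events touching resources the user has never accessed before."""
    return [(u, r) for u, r in new_events if r not in profiles.get(u, set())]

history = [("alice", "crm"), ("alice", "wiki"), ("bob", "payroll")]
profiles = build_profiles(history)
alerts = unusual_accesses([("alice", "payroll"), ("bob", "payroll")], profiles)
```

Alice touching payroll for the first time generates an alert while Bob's routine access does not; real identity-analytics products weigh many more signals (time, volume, peer groups) before alerting.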

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.


Healthcare Orgs in the Crosshairs: Ransomware Takes Aim

Criminals are using ransomware to extort big money from organizations of all sizes in all industries, but healthcare organizations are especially attractive targets. Healthcare organizations are entrusted with the most personal, intimate information people have: not just their financial data, but their very private health and treatment histories. Attackers perceive healthcare IT security as outdated and less effective than in other industries. They also know that healthcare organizations tend to have significant cash on hand and a high cost of downtime, and are therefore more likely to pay the ransom for encrypted data. If you fail to take the necessary steps to combat ransomware and other advanced malware and that trust is betrayed, the cost to your business could extend far beyond paying a ransom or a noncompliance fine. If your reputation for safeguarding patient data is damaged, not only will you be placed under the microscope; in some cases, companies never recover and leadership is forced to resign.

Healthcare is making strides but isn’t there yet

There is good news. Healthcare organizations have made significant security improvements over the last year. The HIMSS 2017 Cybersecurity Survey makes clear that IT security is now treated as an urgent business challenge for leadership, rather than solely an IT problem. There is a marked increase in the employment of CIOs and Chief Information Security Officers (CISOs) among healthcare organizations, and security shortcomings are being addressed.

Nonetheless, there is still room for improvement and ransomware attacks continue to be a serious and growing challenge. Those who continue to commit vital resources to implementing effective security measures will emerge as winners and you will never hear of them in the media. Effectively combating ransomware requires a well-thought-out combination of technical and cultural measures.

Detection: discovering the weaknesses

Keeping your network free of ransomware and other advanced malware requires a combination of effective perimeter filtering, strategically designed network architecture, and the capability to detect and eliminate resident malware that may already be inside your network. It’s an exercise in cleaning house, as your infrastructure likely contains a number of latent threats. Email inboxes are full of malicious attachments and links just waiting to be clicked. Similarly, all applications, whether locally hosted or cloud-based, must be regularly scanned and patched for vulnerabilities. There should be a regular vulnerability management schedule for scanning and patching all network assets; this is basic box-checking, but it is extremely critical for thwarting threats. Building a solid foundation such as this is a strong start for effective ransomware detection and prevention.

Prevention: A non-negotiable requirement

There are some very effective security technologies that are a requirement in today’s threat landscape to prevent ransomware and other attacks. Preventing threats from entering the network requires a modern firewall or email gateway solution to filter out the majority of threats. An effective solution should scan incoming traffic using signature matching, advanced heuristics, behavioral analysis, sandboxing, and the ability to correlate findings with real-time global threat intelligence. This prevents employees from having to be perfectly trained to spot sophisticated threats. It’s also recommended to control and segment network access to minimize the spread of threats that do get in. Ensure that patients and visitors can only spread malware within their own, limited domain, while also segmenting, for example, administration, caregivers, and technical staff, each with limited, specific access to online resources. Even with the most sophisticated methods like spear phishing, where attackers impersonate your coworker, there are now machine learning and artificial intelligence solutions that can spot and quarantine these threats before they ever reach an employee. The risk for healthcare organizations is immensely reduced when such solutions are deployed as part of an overall security posture. However, when data is encrypted and held for ransom, the fight isn’t over yet.

Backup—Your Last, Best Defense Against Ransomware

When a ransomware attack succeeds, your critical files—HR, payroll, electronic health records, patient financial and insurance info, strategic planning documents, email records, etc.—are encrypted, and the only way to obtain the decryption key is to pay a ransom. But if you’ve been diligent about using an effective backup system, you can simply refuse to pay and restore your files from your most recent backup; your attackers will have to find someone else to rob. Automated, cloud-based backup services can provide the greatest security. Reputable vendors offer a variety of simple and secure backup service options, priced for organizations of any size and requiring minimal staff time. Advanced solutions can even allow you to spin up a virtual copy of your servers in the cloud, restoring access to your critical files and applications within minutes of an attack or other disaster.
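The "restore instead of pay" decision ultimately hinges on backup recency. A minimal sketch of that check, assuming a hypothetical 24-hour recovery point objective (RPO):

```python
# Hypothetical backup-recency check: only rely on restore if the newest
# backup falls within the recovery point objective (RPO).
from datetime import datetime, timedelta

RPO = timedelta(hours=24)   # assumed maximum tolerable data loss

def restore_candidate(backup_times, now):
    """Return the newest backup timestamp if it satisfies the RPO, else None."""
    if not backup_times:
        return None
    latest = max(backup_times)
    return latest if now - latest <= RPO else None

now = datetime(2017, 11, 1, 12, 0)
backups = [datetime(2017, 10, 30, 2, 0), datetime(2017, 11, 1, 2, 0)]
```

A nightly backup taken ten hours ago qualifies for restore; a copy that is days old does not, which is why backup schedules should be tested against the RPO regularly, not just assumed.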

When all of these things are working simultaneously, healthcare organizations are well equipped to stop ransomware attacks effectively. Ransomware and other threats are not going away anytime soon and healthcare will continue to be a target for attackers. The hope is that healthcare professionals continue to keep IT security top of mind. 

About the author: Sanjay is a 20-year technology veteran with a passion for cutting-edge technology and a desire to innovate at the intersection of technology trends. He currently leads product management, marketing, and strategy for Barracuda’s security business worldwide.

Copyright 2010 Respective Author at Infosec Island

Thinking Outside the Suite: Adding Anti-Evasive Strategies to Endpoint Security

Despite ever-increasing investments in information security, endpoints are still the most vulnerable part of an organization’s technology infrastructure. In a 2016 report with Rapid7, IDC estimates that 70% of attacks start from the endpoint. Sophisticated ransomware exploded into a global epidemic this year, and other forms of malware, including mobile malware and malvertising, are also on the rise.


The only logical conclusion is that existing approaches to endpoint security are not working. As a result, security teams are exposed to mounting, multifaceted challenges due to the ineffectiveness of their current anti-malware solutions, large numbers of security incidents requiring costly and intensive response, and added pressure from the board to undergo risky and expensive “rip and replace” endpoint security procedures.


Current endpoint security solutions employ varying approaches. Some restrict the actions that legitimate applications can take on a system, others aim to prevent malicious software from running, and some monitor activity for incident investigations. The challenge for most IT department heads is finding the right balance of solutions that will work for their particular business.


Endpoint Protection Platforms (EPP), usually offered by established endpoint security vendors, promote the benefits of packaging endpoint control, anti-malware, and detection and response all in one agent, managed from one console. While EPP suites can be useful and practical, it’s important to understand their limitations. For starters, a “suite” does not always mean the products are integrated — you may end up with one vendor but multiple agents and management consoles. Second, no single vendor offers the best-of-breed or best-for-your-business option for every component solution. If you adopt the EPP approach, be aware that you will be making trade-offs of some sort. Finally, it is likely that even after going through the painful process of deploying a full endpoint protection suite, it will still fail to prevent many attacks.


All these solutions, whether installed separately or as a suite, produce alerts. Many work by finding attacks that have already “landed” to some degree. This means your team will be busy (if not overwhelmed) sorting through the alerts for priority threats, investigating incidents, and remediating any intrusions. This can lead to inefficiencies and escalating staffing requirements, which will quickly wipe out any cost savings you hoped would come from installing bundled solutions.


In the end, it is imperative to understand the strengths and weaknesses within each suite and evaluate whether a best-of-breed or “suite-plus” approach offers better protection for your investment — this is often the case. EPP implementation can help companies consolidate vendors in order to reduce administrative overhead and licensing costs. It may also help minimize complexity and reduce the impact on operations, end-users, and business agility. But none of this matters much if the shortcomings of the platform end up introducing unacceptable levels of risk, draining staff resources, or constraining productivity and agility.


For example, it’s important to recognize that accepting the low detection rates of your conventional antivirus solution also means accepting the high likelihood of a breach. That’s because there is one critical factor most platforms don’t adequately address: unknown malware that has been designed specifically to evade existing defenses. Innovative endpoint defense strategies have emerged that allow you to block evasive malware, regardless of whether there is a known signature, behavior pattern, or machine learning model. This is achieved through the creative use of deceptive tricks that control how the malware perceives its environment.

Endpoint defense solutions that can neutralize evasive malware use three primary strategies: creating a hostile environment, preventing injection through deception, and restricting document executable capabilities. All three strategies contain and disarm the malware before it ever unpacks or puts down roots. 

To create a hostile environment, the malicious program is tricked into believing the environment is not safe for execution, resulting in the malware suspending or terminating its execution. To prevent malicious software from hiding in legitimate processes, the malware is deceived into registering that memory space is unavailable, so it never establishes a foothold on the device. To block malicious actions initiated by document files (via macros, PowerShell, and other scripts), the malware is tricked into registering that system resources like shell commands are not accessible.
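The first of these strategies can be illustrated with a toy model. Evasive malware often probes its surroundings for analysis tools and stands down when it believes it is being watched; the defensive trick is to make every endpoint look instrumented. The process names below are illustrative stand-ins, and the model is deliberately simplified:

```python
# Names of common analysis artifacts an evasive sample might probe for.
SANDBOX_ARTIFACTS = {"vboxservice.exe", "procmon.exe", "wireshark.exe"}

def malware_would_run(visible_processes: set) -> bool:
    """Model of an evasive check: run only if no analysis tools are visible."""
    return not (visible_processes & SANDBOX_ARTIFACTS)

def deceive(visible_processes: set) -> set:
    """Anti-evasion layer: expose decoy artifact names to every process,
    without running the real analysis tools."""
    return visible_processes | SANDBOX_ARTIFACTS

real = {"explorer.exe", "outlook.exe"}
print(malware_would_run(real))           # True: the malware would execute
print(malware_would_run(deceive(real)))  # False: the malware stands down
```

The same inversion applies to the other two strategies: the defense lies about memory availability or shell access, and the malware’s own caution disarms it.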

These new strategies reduce risk without requiring increased overhead (nothing malicious installed, so nothing to investigate) or replacement of existing solutions. Anti-evasion solutions work alongside installed AV solutions to provide an added layer of protection against sophisticated malware and ransomware. The threat intelligence they produce (identifying previously unknown malware exploits) enhances your overall security program. In addition, because incident responders have fewer alerts and incidents to sort through, they can focus their expertise on high-priority threats and investigating attacks where the intruder has already gained access to the network.

Working smarter is key to managing the growing and ever-shifting challenges and responsibilities faced by security teams. Reducing workload and manual processes while reducing risk is a tough balancing act. Ongoing cyber security talent shortages combined with multiplying threat vectors make effective automated defenses a critical priority. Getting the most value out of your security budget and skilled experts requires neutralizing threats upfront, preventing as many attacks as possible, and developing automated threat management processes. It’s essential to cover gaps and shortcomings, augmenting existing endpoint security by layering on innovative, focused solutions. Given the recent surge of virulent, global malware and ransomware, anti-evasion defenses are a smart place to start.

About the author: Eddy Boritsky is the CEO and Co-Founder of Minerva Labs, an endpoint security and anti-evasion technology solution provider. He is a cyber and information security domain expert. Before founding Minerva, Eddy was a senior cyber security consultant for the defense and financial sectors.


Managing Cyber Security in Today’s Ever-Changing World

When it comes to victims of recent cyber-attacks, their misfortune raises a few critical questions:

  • Is anything really safe? 
  • Do the security recommendations of experts actually matter? 
  • Or do we wait for our turn to be victimized, possibly by an attack so enormous that it shuts down the entire data-driven infrastructure at the heart of our lives today?

As the Executive Director of the Information Security Forum (ISF), an organization dedicated to cyber security, my own response is that major disruptive attacks are indeed possible, however, they are not inevitable. A future in which we can enjoy the benefits of cyber technology in relative safety is within our reach. 

Nevertheless, unless we recognize and apply the same dynamics which have constructively channeled other disruptive technologies, the rate and severity of cyber attacks could easily grow. 

Technical Advances

It may seem surprising, particularly in light of the tremendous technological achievement represented by the Internet and digital technology generally, that further advances in technology – which are both desirable and inevitable – may be the least important of the forces taming cybercrime. Progress in the fields of encryption and related security measures will inevitably continue. And they will just as inevitably be followed by progress in developing countermeasures. Some of those countermeasures will be the creations of technically savvy individuals – even teenage whiz kids, born in the digital age, to whom every security regimen is simply another challenge to their hacking skills. 

Over time, the contours of cybercriminal enterprise have grown to become specialized, like that of mainstream business, operating out of conventional office spaces, providing a combination of customer support, marketing programs, product development, and other trappings of the traditional business world. Some organizations develop and sell malware to would-be hackers, often including adolescents and those with relatively little computer skill of their own. Others acquire and use those tools to break into corporate networks, harvesting their information for sale or ransoming it back to its owners. Still others wholesale those stolen data files to smaller operators who either resell them or try using them to siphon money from their owners’ accounts.

Artificial intelligence using advanced analytics could offer a significant, if temporary advance in thwarting potential attackers. IBM, for example, is teaching its Watson system the argot of cyber security, which could, at least in principle, help it to recognize and block threats before they cause significant harm. But technological advances tend to be a cat and mouse game, with hackers in close pursuit of security workers. And security workers themselves can be compromised to bring their best tools over to the dark side.

Still, having even modest security technology in place can slow the pace of malicious hacking. Making it more time-consuming to break into a digital device makes an attacker less likely to try. Yet many Internet-enabled consumer devices – elements of the so-called Internet of Things, or IoT – are largely unprotected, exposing them, among other risks, to being conscripted into a vast network of slave devices engaged in denial-of-service attacks.

That’s not inevitable; it’s a manufacturer’s choice, driven by economics. The fact is that security can be expensive, and these devices were never designed with security in mind. They were created from the outset to provide and process information at the lowest possible cost. But by maintaining an open connection to the individual’s home computer – a device which may, in turn, be connected to an employer’s network – such a device offers intruders a portal to damage that goes well beyond the owner’s home thermostat or voice-driven speaker. Securing these devices may become an appropriate topic for government regulation.

Cyber Culture

Although no one is feeling nostalgic about it, there was a time, not terribly long ago, when conducting cyber mischief was a personal enterprise, often a lonely teen operating out of their home basement or bedroom. But today, in the eyes of institutions eager to secure sensitive digital files, the solitary teenage hacker is less a problem than a nuisance. 

What has largely taken his place – and the overwhelming majority of hackers are male – are well organized, highly resourced criminal enterprises, many of which are based overseas, with the ability to monetize stolen data on a scale rarely if ever achieved by the bedroom-based hacker. The most persistent of them – and the hardest to defend against – are state-sponsored. But it is among young people that cyber-culture, including its more malevolent forms, is spread and nourished. And they don’t need to be thugs to participate.

Last year alone, the value of cyber theft was estimated to have reached into the hundreds of billions of dollars, and it’s growing. But unlike bank robberies of years past, cyber-theft bypasses the need to confront victims with threats of harm to coerce them to hand over money. In fact, at the end of 2013, the British Bankers Association reported that “traditional” strong-arm bank robberies had dropped by 90 percent since 2003.  

Instead, with just a few keystrokes – often entered from thousands of miles away – the larcenous acts themselves, which produce neither injury nor fear, seem almost harmless. And, at least in the eyes of adolescent perpetrators – eyes which are frequently hidden behind a mantle of anonymity and under the influence of lawless virtual worlds that populate immersive online games – the slope leading from cyber mischief into cyber crime is very gradual and hard to discern. 

Other hackers have different motives – some feel challenged to probe and test the security of an institution’s firewalls; others want to shame, expose, or seek revenge on an acquaintance; and a few posture as highly principled whistleblowers unmasking an organization’s most sensitive secrets. But even the most traditional notions of privacy and secrecy have themselves undergone something of a metamorphosis lately. 

Examples are legion:

  • Earlier this year, as I was flying from Chicago to New York, I couldn’t help but overhear the gentleman on the opposite side of the aisle telling his seatmate – a complete stranger – all about his recent prostate surgery. 
  • Attractive and aspiring celebrities regularly leak – actually, a better term for it might be that they release – videos of the most intimate moments they’ve had with recent lovers.
  • Daytime TV shows in which a host gleefully exploits the private family dysfunctions of his guests have become a programming staple.
  • People working for extremely sensitive government organizations self-righteously hand over the nation’s most confidential data files to be posted online, purportedly to serve the public interest.

A Seismic Shift

There’s a common thread running through each of these examples.  It’s that conventional notions of privacy and appropriate information sharing have changed dramatically. It is a shift which is particularly apparent in the way younger people use the Internet in their private lives, which frequently includes the exchange of highly personal information and images. 

However, for their employers, whose electronic files typically contain sensitive personnel, financial, and trade information, that behavior is not only a security concern, it is a journey into treacherous legal territory. And it is a journey which knows no jurisdictional lines. Different national cultures exert a powerful influence on their citizens’ online behavior. What are considered harmless pranks and cyber horseplay among young people in Iraq would be seen as hostile cyber attacks in the U.S.

What we find perplexing is not so much a rapid advance in technology as a profound cultural shift – a sea change that needs to be recognized, shaped and ultimately accommodated to support appropriate and lawful use of these powerful cyber tools. That shift has a direct impact on the workplace. While an employee’s online behavior can certainly damage the organization, those acts are rarely deliberate. In fact, the greater risk comes with behaving too trustfully – opening suspicious emails, clicking on links and uploading files which inadvertently create access to the organization’s network. From there, a malicious attack can move in any direction, creating massive damage.

A New Sheriff?

The heady combination of cyber whiz kids, seismic cultural change, anomic virtual realities, sophisticated criminal gangs, state-sponsored attacks and a vigorous, web-enabled marketplace for all sorts of contraband has produced a kind of Wild West on steroids – something like the early days of automobiles, only this time on a global scale with major incidents reported almost daily. 

At the same time, however, even the Wild West brought on by the motor car was eventually tamed, or at least absorbed into the mainstream of commerce and culture. That transformation was achieved through a trifecta of improved technology for both vehicles and infrastructure, more comprehensive laws coupled with better law enforcement, and a gradual shift in driving culture affecting the perceptions and behavior of motorists. 

In the cyber world, much the same dynamic applies. Improvements in technology will continue making private data more secure. A more encompassing regimen of laws and treaties affecting users and suppliers of equipment, as well as service providers, will help codify the public’s requirements for security. The European Union’s recently adopted General Data Protection Regulation (GDPR), which gives citizens back control of their personal data while unifying regulation within the EU, is an encouraging example. And more imaginative forms of cyber education to strengthen the culture by supporting appropriate uses of the technology – some of which are already underway in elementary and high school classrooms – will help to crystalize public expectations and inform behavior for the next generation of cyber citizens.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.


Calming the Complexity: Bringing Order to Your Network 

In thinking about today’s network environments – with multiple vendors and platforms in play, a growing number of devices connecting to the network and the need to manage it all – it’s easy to see why organizations can feel overwhelmed, unsure of the first step to take towards network management and security. But what if there was a way to distill that network complexity into an easily-managed, secure, and continuously compliant environment?

Exponential Growth

Enterprise networks are constantly growing. Between physical networks, cloud networks, hybrid networks, and the fluctuations that mobile devices introduce, the number of connection points that need to be recognized and protected is daunting. Not to mention that in order to keep your organization running at optimal efficiency – and to keep it secure from potential intrusions – you must operate at the pace the business dictates. New applications need to be deployed and connectivity ensured, while old, now overly permissive rules need to be removed and servers decommissioned – it’s a lot, but teams can trudge through it.

But getting through it isn’t all you have to worry about – the potential for human error introducing a simple network misconfiguration needs to be factored in as well. As any IT manager knows, even slight changes to the network environment – intended or not – can have a knock-on effect across the entire network.

What’s in Your Network?

Add up all the moving parts that make up the network, the likelihood of introducing error through manual processes, and the consequences of such errors, and your network is in a persistent state of jeopardy. This can take the form of a lack of visibility, increased time for network changes, disrupted business continuity, or an increased attack surface that cybercriminals could find and exploit.

Considering how large enterprise networks are and the number of changes required to keep the business growing, an organization’s security team can face hundreds of change requests each and every week. These changes are too numerous, redundant, and difficult to manage manually; in fact, one manual rule-change error could inadvertently open new access points into network zones that may be exposed to nefarious individuals. In a large organization, small problems can quickly escalate.

The network has also fundamentally changed. Long gone are the days of sole reliance on the physical data center, as organizations incorporate the public cloud and hybrid networks into their IT infrastructure. Understanding your network topology is substantially more difficult when it’s no longer on-premises. Hybrid networks are not always visible to the IT and security teams, which complicates their ability to maintain application connectivity and ensure security.

Network Segmentation & Complexity: A Balancing Act

Network segmentation limits the exposure that an attacker would have in the event that the network is breached. By segmenting the network into zones, any attacker that enters a specific zone would be able to access only that zone – nothing else. By dividing their enterprise networks into different zones, IT managers minimize access privileges, ensuring that only those who are permitted have access to the data, information, and applications they need.

However, by segmenting the network you’re inherently adding more complexity to be managed. The more segments you have, the more opportunity there is for changes to be made in the rules that govern access among these zones.
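The core of segmentation can be pictured as a default-deny access matrix between zones: traffic passes only if the (source, destination) pair is explicitly permitted. The zone names below are invented for illustration:

```python
# Explicitly permitted zone-to-zone paths; everything else is denied.
ALLOWED = {
    ("finance", "erp"),
    ("engineering", "build-servers"),
    ("guest", "internet"),
}

def is_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: anything not whitelisted is blocked."""
    return (src_zone, dst_zone) in ALLOWED

print(is_allowed("guest", "internet"))  # True
print(is_allowed("guest", "erp"))       # False
```

Every pair added to this set is another rule to govern and review, which is exactly the complexity trade-off described above.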

How can an IT manager turn an intricate, hybrid network into something manageable, secure, and compliant?

The Answer: Automation and Orchestration

As we have seen, the enterprise network changes all the time, so it’s imperative to ensure that changes do not put the company at risk. The easiest way to do this is to set a network security policy and use that policy as the guide for all changes made in the network. With a policy-based approach, any change within the network infrastructure is confirmed to be secure and compliant. With a centralized policy in place, you now have control.

The next step to managing complexity is removing the risks of manual errors. This is where automation and orchestration built on a policy-based approach is required.

Now you’re able to analyze the network, design network security rules, and develop and automate the rule approval process. This approach streamlines the change process and eradicates unintended errors.
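As a sketch, policy-based change validation amounts to checking each requested rule against the central policy before anything is pushed to a device. The policy format and rules below are assumptions for illustration, not any vendor’s schema:

```python
# Invented central policy: forbidden zone pairs and banned services.
FORBIDDEN_PATHS = {
    ("internet", "database-zone"),  # never allow direct inbound DB access
    ("guest", "internal"),
}
BANNED_PORTS = {23}                 # e.g. no cleartext telnet anywhere

def validate_change(src: str, dst: str, port: int) -> tuple:
    """Check a requested rule change against policy before automation."""
    if (src, dst) in FORBIDDEN_PATHS:
        return (False, "policy violation: path is forbidden")
    if port in BANNED_PORTS:
        return (False, "policy violation: service is banned")
    return (True, "compliant: safe to automate")

print(validate_change("internal", "database-zone", 5432)[0])  # True
print(validate_change("internet", "database-zone", 443)[0])   # False
```

An orchestration layer would then push only validated changes to devices, removing the manual step where errors creep in.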

Using the right automation and orchestration tools can add order and visibility to the network, manage policy violations and exceptions, and streamline operations with continuous compliance and risk management.

Together, automation and orchestration of network security policies ensure that you have a process in place that enables secure, compliant changes across the entire network – without compromising agility, risking network downtime, or investing valuable time in tedious, manual tasks.

Complexity is the reality of today’s enterprise networks. Rather than risk letting one small event cause a big ripple across your entire organization, with an automated and orchestrated approach to network security management, your network can become better-controlled – helping you improve visibility, compliance, and security.

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.


#NCSAM: Third-Party Risk Management is Everyone’s Business

One of the weekly themes for National Cyber Security Awareness Month is “Cybersecurity in the Workplace is Everyone’s Business.”

And we couldn’t agree more. Cybersecurity is a shared responsibility that extends not just to a company’s employees, but to the vendors, partners, and suppliers that make up a company’s ecosystem. The average Fortune 500 company works with as many as 20,000 different vendors, many of which have access to critical data and systems. As these digital ecosystems become larger and increasingly interdependent, exposure to third-party cyber risk has emerged as one of the biggest threats resulting from these close relationships.

Managing third-party risk is only going to get more difficult, but collaboration – the pooling of information, resources, and knowledge – represents the industry’s best chance to effectively mitigate this growing threat. The PwC Global State of Information Security Survey 2016 found that 65 percent of organizations are formally collaborating with partners to improve security and reduce risks.

Overall, organizations need to put more emphasis on understanding the cyber risks their third parties pose. What risks does each third party bring to your company? Do they have access to your network? What would the impact be if they were breached? One of the key ways to do this is to engage with your third parties, assess them based on the level of risk they pose, and collaborate with them on a prioritized mitigation strategy.
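One hedged way to picture that prioritization: score each third party on the sensitivity of the data it touches, the impact of a breach, and whether it has network access, then assess the highest scores first. The fields and weights below are illustrative assumptions, not a standard scoring model:

```python
def vendor_risk(has_network_access: bool, data_sensitivity: int,
                breach_impact: int) -> int:
    """data_sensitivity and breach_impact on a 1-5 scale."""
    score = data_sensitivity * breach_impact
    if has_network_access:
        score *= 2  # direct network access amplifies exposure
    return score

vendors = {
    "payroll-provider": vendor_risk(True, 5, 5),   # 50
    "office-supplies": vendor_risk(False, 1, 1),   # 1
}
# Engage the highest-scoring vendors first.
priority = sorted(vendors, key=vendors.get, reverse=True)
print(priority)  # ['payroll-provider', 'office-supplies']
```

Even a crude ranking like this makes the assessment workload tractable when a company has thousands of vendors.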

It’s unlikely that the pressure facing businesses to become more efficient will lessen, which means larger digital ecosystems and more cyber risks to businesses. The only way to protect your organization from suffering a data breach as a result of a third party is to put more emphasis on understanding the cyber risks your third parties pose and working together to mitigate them.

Learn more about NCSAM at:

Help spread the word by joining in the online conversation using the #NCSAM hashtag!

About the author: As Head of Business Development, Scott is responsible for implementing CyberGRX’s go-to-market and growth strategy. Previous to CyberGRX, he led sales & marketing at SecurityScorecard, Lookingglass, iSIGHT Partners and iDefense, now a unit of VeriSign.


Oracle CPU Preview: What to Expect in the October 2017 Critical Patch Update

The recent media attention focused on patching software could get a shot of rocket fuel on Tuesday with the release of the next Oracle Critical Patch Update (CPU). In a pre-release statement, Oracle has revealed that the October CPU is likely to see nearly two dozen fixes to Java SE, the most common language used for web applications. New security fixes for the widely used Oracle Database Server are also expected along with patches related to hundreds of other Oracle products.

Most of the Java related flaws can be exploited without needing user credentials, with the highest vulnerability score expected to be 9.6 on a 10.0 scale. The CPU could also include the first patches related to the latest version of Java – Java 9 – which was released in September.

Oracle is also expected to backport the advanced encryption capabilities included in Java 9 (JCE Unlimited Strength Policy Files) to previous Java versions 6 through 8.

The October CPU comes on the heels of a September out-of-cycle Security Alert from Oracle addressing flaws exploited in the Equifax attack. The Alert followed the announcement of vulnerabilities in the Struts 2 framework by Apache that were deemed too critical to wait for distribution in the quarterly patch update.

IBM also issued an out-of-cycle patch to address flaws in IBM’s Java related products in the wake of the Equifax breach.

The Equifax attack has put a spotlight on the vital importance of rapidly applying security patches as well as the continuing struggle of security teams to keep pace with the increasing pace and size of patches. So far in 2017, NIST’s National Vulnerability Database has catalogued 11,525 new software flaws and has tracked more than 95,000 known vulnerabilities.

Oracle will release the final version of the CPU mid-afternoon Pacific Daylight Time on Tuesday, 17 October.   

About the author: James E. Lee is the Executive Vice President and Chief Marketing Officer at Waratek Inc., a pioneer in the next generation of application security solutions.


Surviving Fileless Malware: What You Need to Know about Understanding Threat Diversification

Businesses and organizations that have adopted digitalization have not only become more agile, but they’ve also significantly optimized budgets while boosting competitiveness. Despite these advances in performance, the adoption of these new technologies has also increased the attack surface that cybercriminals can leverage to deploy threats and compromise the overall security posture of organizations.

The traditional threat landscape involved threats designed either to covertly run as independent applications on the victim’s machine, or to compromise the integrity of existing applications and alter their behavior. Because such threats – commonly referred to as file-based malware – must be written to disk, traditional endpoint protection solutions have incorporated technologies designed to scan files on disk before execution.

File-based vs. Fileless

Some of the most common attack techniques involve victims either downloading a malicious application whose purpose is to silently run in the background and track the user’s behavior, or falling victim to an exploit of a vulnerability in a commonly installed piece of software, which then covertly downloads additional components and executes them without the victim’s knowledge.

Traditional threats must make it onto the victim’s disk before executing their malicious code. Signature-based detection exists specifically for this reason: it can uniquely identify a file that’s known to be malicious and prevent it from being written or executed on the machine. However, mechanisms such as encryption, obfuscation, and polymorphism have rendered traditional detection technologies obsolete, as cybercriminals can not only manipulate the way the file looks for each individual victim, but also make it difficult for security scanning engines to analyze the code within.
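Why polymorphism defeats signature matching can be shown in a few lines: if the signature is, in the simplest model, a hash of the file’s bytes, then changing a single byte per victim produces a file that no longer matches. A deliberately simplified sketch:

```python
import hashlib

# "Signature" here is simply a hash of the file's bytes.
KNOWN_BAD = {hashlib.sha256(b"malicious-payload").hexdigest()}

def is_known_malware(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

print(is_known_malware(b"malicious-payload"))         # True
print(is_known_malware(b"malicious-payload" + b"x"))  # False: one byte changed
```

Real engines use richer signatures than whole-file hashes, but the same brittleness applies: per-victim mutation forces defenders toward behavioral and heuristic detection.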

Traditional file-based malware is usually designed to gain unauthorized access to the operating system and its binaries, normally creating or unpacking additional files and dependencies, such as .dll, .sys, or .exe files, each with different functions. Such malware could also install itself as a driver or rootkit to take full control of the operating system, sometimes using a valid digital certificate to avoid triggering traditional file-based endpoint security technologies. One such piece of file-based malware was the highly advanced Stuxnet, designed to infiltrate a specific target while remaining persistent. It was digitally signed and had various modules that enabled it to covertly spread from one victim to another until it reached its intended target.

Fileless malware is completely different from file-based malware in terms of how the malicious code is executed and how it dodges traditional file-scanning technologies. As the term implies, fileless malware does not require any file to be written to disk in order to execute. The malicious code may be executed directly within the memory of the victim’s computer, meaning it will not persist after a system reboot. However, cybercriminals have adopted various techniques that combine fileless abilities with persistence. For example, malicious code placed within registry entries and executed each time Windows boots allows for both stealth and persistence.

The use of scripts, shellcode and even encoded binaries is not uncommon for fileless malware leveraging registry entries, as traditional endpoint security mechanisms usually lack the ability to scrutinize scripts. Because traditional endpoint security scanning tools and technologies mostly focus on static file analysis between known and unknown malware samples, fileless attacks can go unnoticed for a very long time.
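A defender can still inspect autorun entries for command lines that look like fileless persistence (for instance, encoded PowerShell rather than a plain path to an executable). The patterns below are illustrative and far from exhaustive:

```python
import re

# Heuristic patterns for suspicious autorun command lines; illustrative only.
SUSPICIOUS = [
    re.compile(r"powershell.*-enc", re.I),             # encoded PowerShell
    re.compile(r"\b(mshta|wscript|cscript)\b", re.I),  # script hosts
]

def flag_autorun(command_line: str) -> bool:
    """Return True if an autorun entry looks like fileless persistence."""
    return any(p.search(command_line) for p in SUSPICIOUS)

print(flag_autorun(r"C:\Program Files\Vendor\updater.exe"))  # False
print(flag_autorun("powershell -EncodedCommand SQBFAFgA"))   # True
```

This is exactly the kind of script scrutiny the text notes traditional endpoint tools tend to lack.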

The main difference between file-based and fileless malware is where and how its components are stored and executed. The latter is becoming increasingly popular as cybercriminals have managed to dodge file scanning technologies while maintaining persistency and stealth.

Delivery mechanisms

While both types of attacks rely on the same delivery mechanisms, such as infected email attachments or drive-by downloads exploiting vulnerabilities in browsers or commonly used software, fileless malware is usually script-based and can leverage existing legitimate applications to execute commands. For example, scripts embedded in booby-trapped Word documents can be executed automatically by PowerShell, a native Windows tool. The resulting commands could either send detailed information about the victim’s system to the attacker or download an obfuscated payload that the local traditional security solution can’t detect.
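As a rough illustration of what this looks like from a defender’s side, the sketch below (Python; the helper name is hypothetical) flags a command line that uses PowerShell’s real `-EncodedCommand` switch and decodes its base64-encoded, UTF-16LE payload. Production tooling would be considerably more robust than this single regular expression:

```python
import base64
import re
from typing import Optional

def decode_encoded_command(cmdline: str) -> Optional[str]:
    """If a command line uses PowerShell's -EncodedCommand (or -enc, -e),
    decode the base64/UTF-16LE payload it carries; otherwise return None."""
    m = re.search(r"-e(?:nc|ncodedcommand)?\s+([A-Za-z0-9+/=]+)",
                  cmdline, re.IGNORECASE)
    if not m:
        return None
    # PowerShell encodes the command text as UTF-16LE before base64.
    return base64.b64decode(m.group(1)).decode("utf-16-le")

# Build an example the way an attacker's dropper would.
payload = "Write-Output pwned".encode("utf-16-le")
cmd = "powershell.exe -NoProfile -EncodedCommand " + base64.b64encode(payload).decode()

print(decode_encoded_command(cmd))  # Write-Output pwned
print(decode_encoded_command("powershell.exe -Command Get-Process"))  # None
```

Even this crude check is behavioral rather than file-based: it inspects what a legitimate tool is being asked to do, not a file on disk.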

Other possible examples involve a malicious URL that, once clicked, redirects the user to websites that exploit a Java vulnerability to execute a PowerShell script. Because the script itself is just a series of legitimate commands that may download and run a binary directly within memory, traditional file-scanning endpoint security mechanisms will not detect the threat.

These elusive threats are usually targeted at specific organizations and companies with the purpose of covert infiltration and data exfiltration.

Next-gen endpoint protection platforms

Next-gen endpoint protection platforms are security solutions that combine layered security – which is to say file-based scanning and behavior monitoring – with machine learning technologies and threat-detection sandboxing. Some rely on machine learning algorithms alone as a single layer of defense, while others use detection technologies that involve several security layers augmented by machine learning. In the latter case, the algorithms focus on detecting advanced and sophisticated threats at pre-execution, during execution, and post-execution.

A common mistake today is to treat machine learning as a standalone security layer capable of detecting any type of threat. Relying on an endpoint protection platform that uses only machine learning will not harden the overall security posture of an organization.

Machine learning algorithms are designed to augment security layers, not replace them. For example, spam filtering can be augmented through the use of machine learning models, and detection of file-based malware can also use machine learning to assess whether unknown files could be malicious.

Signature-less security layers are designed to offer protection, visibility, and control when it comes to preventing, detecting, and blocking any type of threat. Considering these new attack methods, it’s highly recommended that next-gen endpoint security platforms protect against attack tools and techniques that exploit unpatched known vulnerabilities – and of course, unknown vulnerabilities – in applications. 

It’s important to note that traditional signature-based technologies are not dead and should not be discarded. They’re an important security layer, as they’re accurate and quick at validating whether a file is known to be malicious. Merging signature-based, behavioral, and machine learning security layers creates a security solution that’s not only able to deal with known malware, but can also tackle unknown threats, boosting the overall security posture of an organization. This comprehensive mix of security technologies is designed not only to increase the overall cost of attack for cybercriminals, but also to offer security teams deep insight into what types of threats usually target their organization and how to accurately mitigate them.

About the author: Bogdan Botezatu is living his second childhood at Bitdefender as senior e-threat analyst. When he is not documenting sophisticated strains of malware or writing removal tools, he teaches extreme sports such as surfing the Web without protection or how to rodeo with wild Trojan horses.

Copyright 2010 Respective Author at Infosec Island

Why Cloud Security Is a Shared Responsibility

Security professionals protect on-premises data centers with wisdom gained through years of hard-fought experience. They deploy firewalls, configure networks and enlist infrastructure solutions to protect racks of physical servers and disks.

With all this knowledge, transitioning to the cloud should be easy. Right?

Wrong. Two common misconceptions will derail your move to the cloud:

  1. The cloud provider will take care of security
  2. On-premises security tools work just fine in the cloud

So, if you’re about to join the cloud revolution, start by answering these questions: how are security responsibilities shared between clients and cloud vendors? And why do on-premises security solutions fail in the cloud?

Cloud Models and Shared Security

A cloud model defines the services provided by the provider. It also defines how the provider splits security responsibilities with customers. Sometimes the split is obvious: cloud providers are, of course, tasked with physical security for their facilities. Cloud customers, obviously, control which users can access their apps and services. After that the picture can get a little murky.

The following three cloud models don’t comprehensively account for every cloud variation, but they help clarify who is responsible for what:

Software-as-a-Service (SaaS): SaaS providers are responsible for the hardware, servers, databases, data, and the application itself. Customers subscribe to the service and end users interact directly with the application(s) provided by the SaaS vendor. Salesforce and Office 365 are two well-known SaaS offerings.

Platform as a Service (PaaS): PaaS vendors offer a turnkey environment for higher-level programming. The vendor manages the hardware, servers, and databases while the PaaS customer writes the code needed to deliver custom applications. Engine Yard and Google App Engine are examples of PaaS solutions.

Infrastructure as a Service (IaaS): An IaaS environment lets customers create and operate an end-to-end virtualized infrastructure. The IaaS vendor manages all physical aspects of the service as well as the virtualization services needed to build solutions. Customers are responsible for everything else – the applications, workloads, or containers deployed in the cloud. Amazon Web Services (AWS) and Microsoft Azure are popular IaaS solutions.

The key to understanding shared security lies in understanding who makes the decisions about a specific aspect of the cloud solution. For example, Microsoft calls the shots on Excel development for their Office 365 SaaS solution. Vulnerabilities in Excel are, therefore, Microsoft’s responsibility. In the same spirit, security vulnerabilities in an app you create on a PaaS service are your responsibility – but operating system vulnerabilities are not.
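One way to picture the split is as a simple lookup table. The Python sketch below is a deliberate simplification of the three models described above (real provider agreements are more nuanced, and the layer names here are illustrative), but it captures the PaaS point just made: application flaws are yours, operating system flaws are not:

```python
# Simplified shared-responsibility matrix: model -> layer -> owner.
# Layer names and the exact split are illustrative assumptions.
RESPONSIBILITY = {
    "saas": {"physical": "provider", "virtualization": "provider", "os": "provider",
             "runtime": "provider", "application": "provider", "data": "customer"},
    "paas": {"physical": "provider", "virtualization": "provider", "os": "provider",
             "runtime": "provider", "application": "customer", "data": "customer"},
    "iaas": {"physical": "provider", "virtualization": "provider", "os": "customer",
             "runtime": "customer", "application": "customer", "data": "customer"},
}

def who_secures(model: str, layer: str) -> str:
    """Look up who makes the decisions (and so owns security) for a layer."""
    return RESPONSIBILITY[model][layer]

print(who_secures("paas", "os"))           # provider
print(who_secures("paas", "application"))  # customer
print(who_secures("iaas", "os"))           # customer
```

Note how the customer’s column grows as you move from SaaS toward IaaS, which is exactly why an IaaS deployment demands the broadest security perspective.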

This all seems like common sense – but it means you’ll need to understand your cloud model to understand your security responsibilities. If you’re securing an IaaS solution you’ll need to take a broad perspective. Everything from server configurations to container provenance can impact your security posture – and they are your responsibility.

Security “Lift and Shift”

An IaaS solution can virtually replicate on-premises infrastructure in the cloud. So lifting and shifting your on-premises security to the cloud may seem like the best way to get up and running. But that approach has led many cloud transitions to ruin. Why? The cloud needs different security approaches for three important reasons:

Change Velocity

Hardware limits how fast a traditional data center can change. The cloud eliminates physical constraints and changes how we think about servers and storage. Cloud solutions, for example, scale by instantly and automatically bringing new servers online. But for traditional security tools, this cloud velocity is chaos. Metered usage costs rapidly spin out of control. Configuration and policy management becomes an overwhelming task. Interdependent security processes become brittle and unreliable.

Network Limitations

On-premises data centers take advantage of stable networks to establish boundaries. In the cloud, networks are temporary resources. Virtual entities join and leave instantaneously and across geographical boundaries. Network identifiers (like IP addresses) no longer provide the same stable control points as they once did and encryption makes it harder to observe application behavior from the network. Network-centric security tools leave cloud solutions vulnerable to lateral movement by attackers.

Cloud Complexity

When the cloud removes barriers to velocity, the number of machines, servers, containers, and networks explodes. As complex as on-premises data centers can be, cloud solutions are far worse: the number of cloud entities, configuration files, event logs, locations, networks, and connections are too much for even expert human analysis. Analyzing security incidents, assessing the impact of a breach, or even simply tracing an administrator’s activities isn’t possible with traditional data center security tools.

Cloud Security Needs New Solutions

Moving to the cloud is more than a simple lift-and-shift of existing servers and apps to a different set of servers. Granted, offloading infrastructure responsibilities to your provider is a huge win. Without capital expenses and the inertia of hardware, IT organizations do more with less, faster.

Fortunately, new cloud-centric security solutions make your move to the cloud easier. Three key capabilities can keep you out of trouble as you transition: automation, an expanded focus on apps and operations (in addition to networks), and behavioral baselining.

Automation makes it possible to keep up with cloud changes (and DevOps teams) during deployment, operations, and incident investigations. Moving the security focus up the stack reduces the impact of network impermanence in the cloud and delivers better visibility into high-level application and service operations. And behavioral baselining makes short work of otherwise tedious rule and policy development.

With the right technologies, and an understanding of differences, security pros can easily make the move to the cloud.

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries.


Joining Apple

I’m pleased to announce that I’ve accepted a position with Apple’s Security Engineering and Architecture team, and am very excited to be working with a group of like minded individuals so passionate about protecting the security and privacy of others.

This decision marks the conclusion of what I feel has been a matter of conscience for me over time. Privacy is sacred; our digital lives can reveal so much about us – our interests, our deepest thoughts, and even who we love. I am thrilled to be working with such an exceptional group of people who share a passion to protect that.

Attacking the Phishing Epidemic

As long as people can be tricked, there will always be phishing (or social engineering) on some level or another, but there’s a lot more that we can do with technology to reduce the effectiveness of phishing, and the number of people falling victim to common theft. Making phishing less effective ultimately increases the cost to the criminal, and reduces the total payoff. Few will argue that our existing authentication technologies are stuck in a time warp, with some websites still using standards that date back to the 1990s. Browser design hasn’t changed very much since the Netscape days either, so it’s no wonder many people are so easily fooled by website counterfeits.

You may have heard of a term called the line of death. This is used to describe the separation between the trusted components of a web browser (such as the address bar and toolbars) and the untrusted components, namely the browser window. Phishing is easy because this line is a farce. We allow untrusted elements in the trusted windows (such as a favicon, which can display a fake lock icon), tolerate financial institutions that teach users to accept any variation of their domain, and use a tiny monochrome font that makes URLs easy to mistake, even when users are paying attention to them. Worse still, we’re telling users to conduct the trusted operations of authentication and credit card transactions in the untrusted space – the website portion of the web browser!

Our browsers are so awful today that the very best advice we can offer everyday people is to try and memorize all the domains their bank uses, and get a pair of glasses to look at the address bar. We’re teaching users to perform trusted transactions in a piece of software that has no clear demarcation of trust.

The authentication systems we use these days were designed to be able to conduct secure transactions with anyone online, not knowing who they are, but most users today know exactly who they’re doing business with; they do business with the same organizations over and over; yet to the average user, a URL or an SSL certificate with a slightly different name or fingerprint means nothing. The average user relies on the one thing we have no control over: What the content looks like.

I propose we flip this on its head.

When Apple released Apple Pay on the Web, they did something really unique, but it wasn’t the payment mechanism that was revolutionary to me – it was the authentication mechanism. It’s not perfect, but it does have some really great concepts that I think we can, and should, adopt into browser technology.  Let’s break down the different concepts of Apple’s authentication design.

Trusted Content

When you pay with Apple Pay, a trusted overlay pops up over the content you’re viewing and presents a standardized, trusted interface to authenticate your transaction. Having a trusted overlay is completely foreign to how most browsers operate. Sure, http authentication can pop up a window asking for a username and password, but this is different. Safari uses an entirely separate component with authentication mechanisms that execute locally, not as part of the web content, and that the web browser can’t alter. Some of these components run in a separate execution space than the browser, such as the Secure Element on an iPhone. The overlay itself is code running in Safari and the operating system, instead of being under the control of the web page.

A separate trusted user interface component is unmistakable to the user, but many such components can be spoofed by a cleverly designed phishing site. The goal here is to create a trusted compartment for the authentication mechanism to live that extends beyond the capabilities of what can typically be done in a web browser. Granted, overlays and even separate windows can be spoofed, and so creating a trusted user interface is no easy task.

Trusted Organization

From the user’s perspective, it doesn’t matter what the browser is connecting to, only what the web page looks like. One benefit Apple Pay has over typical authentication is that, because the execution code for it lives outside of the web page (and in code), it has control over what systems it connects to, what certificates it’s pinned to, and how that information gets encrypted. We don’t really have this with web-based authentication mechanisms. The phishing site might have no SSL at all, or might use a spoofed certificate. The responsibility of authenticating the organization is left up to the user, which was simply an awful idea.

Authenticating the User Interface

Usually when you think about an authentication system, you think about the user authenticating with the website, but before that happens with Apple Pay, the Apple Pay system first authenticates with the user to demonstrate that it’s not a fake.

In the case of Apple Pay, the overlay displays your various billing and shipping addresses and credit cards on file; sensitive information that Apple knows, but a phishing site won’t. Some of this is stored locally on your computer so that it’s never transmitted.

We’ve seen less effective versions of this with “SiteKey”, sign-on pictures and so on, but those can easily be proxied by the man-in-the-middle because the user is relying on the malicious website to perform the authentication. In Apple’s model, Apple code performs the authentication completely irrespective of what content is loaded into the browser.

No Passwords Transmitted

The third important component to note of Apple Pay is that passwords aren’t being sent, and in fact aren’t being entered at all. There’s nothing to scam the user out of except for some one-time use cryptograms that aren’t valid for any other use. While TouchID is cool, there are also a number of other forms of password-less authentication mechanisms you can deploy once you’re executing in trusted execution space.

One of the most common forms of password-free authentication is challenge/response. C/R authentication has been around for a long time; it allows legacy systems to continue using passwords while greatly reducing the risk of interception, because the password itself is never sent. As much of a fan as I am of biometrics fused with hardware, that approach isn’t very portable. That is, I can’t just jump on my friend’s computer and pay for something with Apple Pay without reprovisioning it.

Let’s assume that the computer has control over the authentication mechanism, instead of the website. The server knows your password, and so do you. The server can derive a cryptographic challenge based on that password. Your computer can compute the proper response based on the password you enter. Challenge/response can be done many different ways. Even the ancient Kerberos protocol supported cryptographic challenge response. That secure user interface can flat out refuse to send your password anywhere, and so a phishing site would have to convince the user to type it not just into a different site, but into a completely different authentication mechanism that they’ll be able to identify as different. Sure, some people are gullible to this, but a lot fewer than are gullible to a perfect copy of a website. That small percentage of gullible people is a smaller problem to manage.
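The flow just described can be sketched in a few lines with an HMAC. This is a toy (real deployments derive keys from salted password verifiers, or use PAKE protocols such as SRP, rather than keying HMAC with the raw password), but it shows why a captured response is worthless to a phisher:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    # Server picks a fresh random nonce for every login attempt.
    return os.urandom(16)

def response(password: str, challenge: bytes) -> str:
    # Both sides derive the same proof from the shared secret;
    # the password itself never crosses the wire.
    return hmac.new(password.encode(), challenge, hashlib.sha256).hexdigest()

# Server side: knows the password (in practice, a derived verifier).
challenge = make_challenge()
expected = response("correct horse battery staple", challenge)

# Client side: the trusted UI computes the proof locally.
proof = response("correct horse battery staple", challenge)

print(hmac.compare_digest(proof, expected))  # True
# A captured `proof` is only valid for this one challenge, so replaying
# it against a fresh challenge later accomplishes nothing.
```

Because the trusted component computes the response itself, it can flatly refuse to ever transmit the password, exactly as described above.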

Why don’t we use challenge/response on web pages today? For one, because we’re still authenticating in untrusted space (the browser window). The user has no idea (and doesn’t care) what happens to their password when they type it into some web browser window, and it’s just as easy to phish someone no matter what authentication mechanism you’re using in the background. What makes this feasible now is that in our ideal model, we’re doing authentication in trusted execution space – space that’s independent of the web page. This changes the game. Take the Touch Bar for example. TouchID is authenticated on the Touch Bar, but password entry could also be authenticated on it from the web browser.

An Optimal Authentication Mechanism

The ultimate goal is to condition the user to a standardized interface that can both authenticate the validity of the resource as well as authenticate itself to the user before the user is willing to accept its legitimacy and input a password.

Conditioning the User

A user interface element that is very difficult to counterfeit can also be quite difficult to create, but the benefits are considerable: If someone spends enough time around real money, they’ll be able to spot a counterfeit with a much higher success rate. On the other hand, having to look at a dozen different, poorly implemented authentication pages will condition users to accept anything they see as being real.

Our ideal authentication mechanism has an unmistakable and unreproducible user interface element. The user visits a website requiring authentication, and that website includes the necessary tags to invoke the browser’s authentication code, executed separately. Regardless of the website, this standardized authentication component is activated with a standard look, as a trusted component of the browser. Plain Jane, this could easily be an overlay that appears over the portion of the web browser that’s out of reach by the website (e.g. the address bar area). Get a bit fancier, and we’re talking about incorporating the Touch Bar or other “out of band” mechanisms on equipped machines to notify the user that an authentic authorization is taking place.

Get the user used to seeing the same authentication mechanism over and over again, and they’ll be able to spot cheap counterfeits much easier. Needle moved.

Authenticating the User Interface

The user interface itself needs to be authenticated with the user in ways that make cheap knockoffs stand out. Since the browser controls this, and not the website itself, we can do a number of different things here:

  • Display the user’s desktop login icon and full name in the window.
  • Display personal information specified by the user when the browser is first set up; e.g. “show me my first card in Apple Pay” or “show me my mailing address” whenever I am presented with an authentication window.
  • Display information in non-browser areas, such as on devices equipped with a Touch Bar, change the system menu bar to blue or green, or present other visual cues not accessible to a web browser.
  • Provide buttons that interact with the operating system in a way that a browser can’t (one silly example would be to invert the colors of the entire screen when held down).
  • Suspend and dim the entire browser window during authentication.

Authenticating the Resource

Authenticating the resource that the user is connecting to is one of the biggest challenges in phishing. How do you tell the user that they’re connecting to a potentially malicious website without knowing what that website is? We’re off to a good start by executing code locally (rather than remote website code) to perform the authentication. Because of this, we can do a few interesting things that we couldn’t do before:

  • We can validate that the destination resource is using a valid SSL certificate. Granted, this can be spoofed, however it also increases the cost of running a phishing site; not just in dollars, but in the amount of time required to provision new SSL certificates against the amount of time it takes to add one to a browser blacklist.
  • We can automatically pin SSL certificates to specific websites when the user first enrolls their account, and keep track of websites they’ve set up authentication with, so that we can warn them when asked to authenticate on a website that they never enrolled on.
  • Existing black lists and white lists can now be tied to SSL certificate information, allowing us to make better automated determinations on the user’s behalf.
  • We can share all of this information across all of the user’s devices e.g. via iCloud, Firefox’s cloud sync, and so on, to make it portable.
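The pinning and enrollment ideas above can be sketched as a tiny trust-on-first-use store. This is a hypothetical sketch (`PinStore` and its state names are invented for illustration; a real browser would persist this data and sync it across devices, as the last bullet suggests):

```python
class PinStore:
    """Trust-on-first-use pins: hostname -> certificate fingerprint."""

    def __init__(self):
        self.pins = {}

    def enroll(self, host: str, fingerprint: str) -> None:
        # A deliberate, user-visible enrollment step pins the certificate.
        self.pins[host] = fingerprint

    def check(self, host: str, fingerprint: str) -> str:
        pinned = self.pins.get(host)
        if pinned is None:
            return "unknown-site"   # never enrolled here: warn before any login
        if pinned != fingerprint:
            return "pin-mismatch"   # certificate changed: possible MITM
        return "ok"

store = PinStore()
store.enroll("bank.example", "ab12...")
print(store.check("bank.example", "ab12..."))   # ok
print(store.check("bank.example", "ff00..."))   # pin-mismatch
print(store.check("phish.example", "ab12..."))  # unknown-site
```

The useful property is the asymmetry: a lookalike domain can copy a bank’s pixels, but it can never appear in the user’s pin store without a conspicuous enrollment step.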

Other elaborate things we can do with protocol might include storing a cached copy of an icon provided by the website when the site is first provisioned, giving the user a visual cue. In order for a phishing site to copy that visual cue, the user would have to step through a very obvious enrollment process that is designed to look noticeably different from the authentication process. Icons for any previously unknown sites could display a yellow exclamation mark or similar, to warn the user. In other words, that piece of content can only be displayed by websites the user has previously set up, because we’re in control of that content in local code.

We can also do some things that we are doing now, but better. For example, we can display the organization name and website name very clearly in our trusted window, in large text, perhaps with additional visual cues, such as underlining in red any similarities to other websites and highlighting any numbers in the domain. There’s no other content now to distract the user, because this is all happening in a trusted overlay, presumably even dimming the browser window.
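A crude version of the number-highlighting idea might look like the following (the set of lookalike digits is an illustrative assumption; real confusable detection uses the Unicode confusables tables and far more context):

```python
def suspicious_spans(domain):
    """Return (index, character) pairs worth highlighting in red:
    digits that read like letters, plus any non-ASCII character
    (a potential homoglyph)."""
    lookalike_digits = set("0135")  # 0/O, 1/l, 3/E, 5/S (illustrative)
    flagged = []
    for i, ch in enumerate(domain):
        if ch in lookalike_digits or ord(ch) > 127:
            flagged.append((i, ch))
    return flagged

print(suspicious_spans("paypa1.com"))   # [(5, '1')]
print(suspicious_spans("example.com"))  # []
```

The trusted overlay could render exactly these spans in red, forcing the eye to the one character that distinguishes the counterfeit from the real domain.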

The user will still receive warnings when authenticating on someone else’s computer, and this is a good thing. The idea is to draw attention to the fact that you’re conducting a non-standard transaction and could potentially be giving out your credentials.

The goal with all of this is to remove the website content as the authenticating component. This is the #1 visual element the end-user is going to use to determine the legitimacy of a website: what it looks like. What I am suggesting is to dim that content completely and force them to focus on some very specific information and warnings.


Authentication With and Without Passwords

To improve upon our ideal authentication mechanism, we can deploy better authentication protocols. Because the mechanism, not the website, controls the exchange, sending passwords back and forth can be eliminated entirely. Websites adopting this new authentication mechanism present a great opportunity to force better protocol alternatives. Password authentication can be removed completely, using biometrics, when possible.

Two-Factor Authentication can be phished, but requiring it at enrollment (either by SMS, email, or authenticator) can dramatically limit a victim’s exposure to phishing. Requiring a secondary form of authentication for any passworded mechanisms will certainly diminish the success rate of a phish, and also increase the cost, requiring the man in the middle to be present and able to log in at that very moment.
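The “authenticator” option mentioned above is typically TOTP. For illustration, here is a compact implementation of the HMAC-SHA1 variant of RFC 6238 (the printed value is the RFC’s own published test vector):

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret: bytes, for_time: Optional[int] = None,
         step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = int(for_time if for_time is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T = 59.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because each code is bound to a 30-second window, a phished code is only useful to an attacker who relays it immediately, which is precisely the cost increase described above.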

For passworded authentication, challenge/response using cryptographic challenges can be forced, because we are running local code, and not website code. Once you’ve resolved that this standard will not support sending passwords in any way, shape, or form, you can reduce the transit attack surface significantly.


The overall benefit of an authentication mechanism that executes locally as a component of the browser (and potentially the operating system), rather than as a component of the website, is significant. It would mean the standardization of user interface components, protocol and security elements, and resource validation, and it would provide a single point of entry to examine for further anti-phishing efforts that could extend far beyond the URL validation we’re limited to now.

Granted, this won’t address many other forms of social engineering. It’s very easy to send an email telling someone their account is limited, and direct them to some insecure site, but the idea is to condition and familiarize the user with one common set of authentication visuals so that they will question the legitimacy of any alternative visual elements if they appear. At present, the visual elements of a legitimate authentication page and a malicious one are identical. This approach sets out to stop that.

Not only would such a scheme greatly diminish the overall effectiveness of phishing attacks, but it would simultaneously help get rid of all the awful custom code written by organizations doing authentication completely wrong. We see this every day; authentication has become a hodgepodge of developer ineptitude. Placing this responsibility on the browser’s code, rather than the website’s, would help establish what would hopefully become an accepted standard (should a working group address this subject), and at very worst leave a few web browsers “doing it wrong” and needing to be fixed, rather than thousands of websites all needing to be fixed.

As long as people can be tricked, there will always be phishing (or social engineering) on some level or another, but there’s a lot more that we can do with technology to reduce the effectiveness of phishing, and the number of people falling victim to common theft.


Confide: A Quick 20 Minute Look

My inbox has been lighting up with questions about Confide, after it was allegedly found to have been used by staffers at the White House. I wish I had all of the free time that reporters think I have (I’d be so happy, living life as a broke beach bum). I did spend a little bit of time, however, reverse engineering the binary and doing a simple forensic examination of it. Here’s my “literature in a rush” version…

  1. A forensic analysis didn’t yield any messages in the application’s backup files. This is the same information that comes off with a forensic imaging from a vast majority of forensics tools. I did get basic configuration information, such as the number of times the app had been launched, last use, some unique identifiers, and so on. If someone were to get a hold of the device, using normal forensic acquisition techniques, messages don’t appear to be stored anywhere they would normally come off the phone.
  2. What does get stored, and this is obvious through the application’s GUI, are undelivered messages you sent to any of your contacts. This is part of Confide’s retraction feature, and if anyone gets UI access to your device (e.g. compels you for a passcode, or looks over your shoulder), they can read any undelivered messages, the content, who they were sent to, and the time they were sent. Even if you don’t pay for the retraction feature, Confide conveniently leaves the messages there so that you can see their advertising, in hopes that you will one day pay for this feature.
  3. The encryption itself appears to be a fusion of PKI (public key cryptography) with some symmetric encryption components. I can’t really describe it completely because all of the encryption appears to be home brew. That is, the encryption and decryption routines, random key generation, and so on all appear to be custom coded as part of its internal KFCoreCrypto classes. Home grown encryption is nearly, but not quite, almost entirely nothing like tea.
  4. The encryption appears to try and operate like most other e2e apps, where users have a public key and that public key is used to encrypt messages that are later decrypted with a private key. What seems different about this encryption (other than being home brew) is that it appears to regenerate the public key under certain circumstances. It’s unclear why, but unlike Signal and WhatsApp, which consider it something to alert you about if your public key changes, Confide appears to consider this part of its function. Key exchange is always the most difficult part of good encryption routines, and so if the application is desensitized to changing public keys, it’s possible (although not confirmed) that the application could be susceptible to the same types of man-in-the-middle attacks that we’ve seen theorized in WhatsApp (if you leave the alerts off) and iMessage.
  5. Because it has home grown encryption and because I am not a specialized encryption expert, I cannot vouch for the sanity of this encryption, except to say that I don’t think home grown encryption is ever a good thing, especially when talking about a closed source application that isn’t subject to peer review by top cryptographers. I would be glad to provide a top cryptographer (such as Matt Green) any information needed in order to evaluate the sanity of the encryption. I have some reservations about the random number generator and other code that appears to incorporate some semblance of what looks like normal cryptography, but is also mixed in with a lot of weird arithmetic shift operations and other dog food.

In short, the application doesn’t smell fully kosher, and warrants a further review before I could endorse its use in the White House. If I were the White House’s CIO, I would – other than hate my life – not endorse any third party mobile application that didn’t rely on FIPS 140-2 accepted cryptographic routines, such as Apple’s common crypto.

I sure hope they’re not using apps like this (or even the good e2e apps) to send classified material over. Has anyone asked them?

Protecting Your Data at a Border Crossing

With the current US administration pondering the possibility of forcing foreign travelers to give up their social media passwords at the border, a lot of recent and justifiable concern has been raised about data privacy. The first mistake you could make is presuming that such a policy won’t affect US citizens. For decades, JTTFs (Joint Terrorism Task Forces) have engaged in intelligence sharing around the world, allowing foreign governments to spy on you on behalf of your home country and pass that information along through various databases. What few protections citizens have in their home countries end at the border, and when an ally spies on you, that data is usually fair game to share with your home country. Think of it as a backdoor built into your constitutional rights. To underscore the significance of this, consider that the president signed an executive order just today stepping up efforts at fighting international crime, which will likely result in the strengthening of resources to JTTFs to expand this practice of “spying on my brother’s brother for him”.

Once policies that require surrendering passwords (I’ll call them password policies from now on) are adopted, the obvious intelligence benefit will no doubt inspire other countries to establish reciprocity in order to receive better intelligence about their own citizens traveling abroad. It’s likely that the US will inspire many countries, including many oppressive nations, to institute password policies at the border. This ultimately can be used to bypass search and seizure laws by opening up your data to forensic collection. In other words, you won’t need Microsoft to service a warrant, nor will the soil your data sits on matter, because it will be a border agent connecting directly to your account with special software.

I am not a lawyer, and I can’t provide you with legal advice about your rights, or what you can do at a border crossing to protect yourself legally, but I can explain the technical implications of this, as well as provide some steps you can take to protect your data regardless of what country you’re entering.

The implications of a password policy are quite severe. Forensics software is designed to collect, index, organize, and make searchable every artifact possible from an information source. Oftentimes, weak design can allow these tools to even recover deleted data, as was evidenced recently by Elcomsoft’s tool to recover deleted Safari history. Once in an intelligence database, this can be correlated with other data, including your interests, shopping habits, and other big data bought from retailers. All of this can be fed into even basic ML to spit out a confidence score that you are a terrorist based on some N-dimensional computation, or to plot you on a K-nearest-neighbor chart to see how close you sit to others under suspicion. The possibilities really are endless.
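To make the nearest-neighbor idea concrete, here is a toy sketch of that kind of proximity scoring. The feature names and numbers are invented purely for illustration; real systems would use far higher-dimensional data:

```python
# Toy sketch of nearest-neighbor scoring as described above: plot a traveler's
# (hypothetical, invented) feature vector against already-flagged profiles and
# score by proximity in N-dimensional space.
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical normalized features: [travel frequency, VPN use, cash purchases]
flagged_profiles = [(0.9, 0.8, 0.7), (0.8, 0.9, 0.9)]
traveler         = (0.2, 0.3, 0.1)

# Score = distance to the nearest flagged profile (smaller = "more suspicious")
score = min(distance(traveler, p) for p in flagged_profiles)
print(round(score, 3))
```

The point isn't this particular metric; it's that once your data sits in such a database, a handful of lines of arithmetic is all it takes to rank you against everyone else in it.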

You might think that you can simply change your passwords after a border encounter, but what you may not realize is that a forensics tool is capable of potentially imaging your entire life from a single access to your account. Whether it’s old iPhone backups sitting in iCloud that date back years, or your entire Facebook private message history, once an API is wired into a forensics tool, that one moment in time exposes all of your historical data to the border agent, which ultimately exposes all of your historical data to an intelligence database.

With that said, the goal is to avoid exposing your account information at the border so that it can’t be stolen from you in the first place. The key to mastering the art of protecting your data at a border is to plan ahead for continuity of access outside of the constraints of the border crossing, while positioning yourself as if you were the adversary during the encounter. There are a number of different ways to do this, ranging from social engineering to compartmentalization of data. How you choose to do it depends on your data needs while abroad.

All of these suggestions attempt to provide a technical basis to get you to “can’t”; that is, so you “can’t” expose your own data even if you were compelled to. In my experience, “can’t” will often get you better mileage than “won’t”, however depending on the country you’re entering, it’s possible that “can’t” could also get you jailed. It’s your responsibility to decide what information you need to be able to expose if compelled or threatened; this, you can keep at the front of your memory, like passwords. Getting to “can’t”, however, is much harder than getting to “won’t”, and since you probably already know how to do the latter, I’ll focus on the art of “can’t”.


Obviously, you want all of your devices encrypted and powered off at the border. There are plenty of ways to access content on devices (even locked ones) if the encryption is already unlocked in memory. This is kind of a given, but I felt the need to mention it anyway. Encryption only gets you to “won’t”, of course, which is why it’s not a significant part of this post. Encryption alone won’t get you to “can’t”, but it is a good starting point.


Throughout this post, be thinking about the different layers of your data. Your most personal, crucial data is the data that you don’t want to bring with you; your inner-ring data. Around this are other layers, outer rings of data that you consider sacrificial to varying degrees. Learning how to divide this data up before copying it to your devices will help minimize the exposure of your content in the event all of your devices are compromised. Compartmentalizing your data into different layers helps you organize what information you won’t be bringing with you, and what you will be protecting with the various techniques discussed here or otherwise.

Burner Devices

The first, and most common piece of travel advice an information security expert will give is to use burner devices when possible. This is because the best way to avoid having your data stolen is to simply not have the data with you. In our threat model here, that also means that you cannot have any means to access the data remotely. For this reason, a burner device will get you only so far, but can still be an important ingredient.

Any data that you do not need to have with you on your trip should be backed up at home, including accounts and passwords that you won’t need to connect to while abroad. Ideally, use multiple drives and keep copies of the data at multiple sites, encrypted. If your house burns down (or is ransacked) while you’re away, you really want to have an off-site backup somewhere.

Properly wiped burner devices containing minimal data will reduce your exposure; one of the benefits of using a burner is that you’ve got a device that’s never been exposed to your most important data (let’s call this your “inner ring”), but only your outer rings of data. You’ll also want to keep the burner devices isolated from accounts that could sync old data back onto them, such as old call history databases from an iCloud backup. It’s not just the data you’re putting on now that matters, but having a clean system with no forensic trace on it.

Typically, people use burner devices to secure their exit from a hostile country. The rationale is that your device may have been compromised at some point during your trip, resulting in malware or even an implant being installed on the device to provide persistent surveillance capabilities. So not only does a burner device provide a clean room in which to carefully place outer-ring data, it is even more useful when exiting, to ensure you don’t bring any bugs back with you. If you can discard it before getting to the border, then you won’t even need to give it a second thought.

Budget constraints may not make this possible, but keep in mind that your laptop could be seized at any time and kept for months, even by the US government. If you are overly concerned about your device being searched at the border and can’t “burn” it, mail it to yourself, overnight, under a discreet name and to a discreet location. Of course, that has risks too. There are some great physical anti-tamper primers out there that can be used to help ensure security while in transit.

2-Factor Authentication

You will no doubt have some online accounts that you’ll need access to while abroad; if you can’t live without your Twitter or Facebook account, or access to your source code repositories, etc., the next important step is to activate 2FA for these accounts. 2FA requires that you not only have a password, but also a one-time-use code that is either sent to or generated by your device.

2FA in itself isn’t a solution, as many forensics tools can prompt the examiner for a 2FA token, and you can potentially be compelled to provide a token at the border. This is where a bit of ingenuity comes into play, which we’ll discuss next. The takeaway from this section, however, is not to bring any accounts across the border that don’t have 2FA enabled. If you are compelled to give up any password, you’re giving away access to the account.
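For those curious what a generated 2FA token actually is under the hood, here is a sketch of the standard TOTP algorithm (RFC 6238, built on RFC 4226 HOTP) that authenticator apps implement. The border-crossing point is in the first line: without the shared secret (or your backup codes), neither you nor an agent can produce the value.

```python
# Sketch of how a TOTP authenticator code is generated (RFC 6238 / RFC 4226).
# The code is an HMAC over a time-step counter, truncated to a few digits.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP where the counter is the current 30-second time step."""
    return hotp(secret, int(at // step), digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Note that the whole scheme hinges on a symmetric shared secret, which is exactly why the backup codes and provisioning secret must live somewhere you cannot reach at the checkpoint.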

Any accounts that you cannot protect with 2FA are best left to burner accounts containing only outer-ring data, but bear in mind that simply deactivating an account doesn’t protect you. With the same password, a border agent can easily re-activate a dead account. Should they learn of the account through forensic techniques or otherwise, you may still risk exposure.

Locking Down 2FA

There are a few different forms of 2FA, but all generally provide you with backup codes when you activate it. Store these backup codes either at home (if coming back into your home country), or keep them in a safe place in electronic form where you know you can get to them securely from the other side of the border. If you must use snail mail, encipher them using one of many ciphers that can still be done by hand. Other options include use of steganography, secure comms with an affiliate, or hardware token.
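As one example of a cipher simple enough to apply by hand, here is a Vigenère-style shift over an alphanumeric alphabet. To be clear, this is a hedged illustration, not strong cryptography: it only keeps backup codes from being readable at a glance, and anyone compelled out of the key phrase can reverse it. The code and key below are invented for the example.

```python
# A hand-workable cipher for obscuring backup codes in transit (e.g. snail
# mail). This is Vigenère over A-Z0-9: weak by design, usable with pencil and
# paper, and NOT a substitute for real encryption.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out = []
    for i, ch in enumerate(text):
        shift = ALPHABET.index(key[i % len(key)])
        if decrypt:
            shift = -shift
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % len(ALPHABET)])
    return "".join(out)

backup_code = "8XK2PQ94"                    # hypothetical 2FA backup code
ciphered = vigenere(backup_code, "MAILKEY")  # what you'd write down and mail

# Round trip: deciphering with the same key phrase recovers the code.
assert vigenere(ciphered, "MAILKEY", decrypt=True) == backup_code
```

On paper the same operation is just "count forward through the alphabet by the key letter's position," which is the whole appeal of doing it by hand.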

To lock down 2FA at a border crossing, you’ll need to disable your own capability to access the resources you’ll be asked to surrender. For example, if your 2FA sends you an SMS message when you log in, either discard or mail yourself the SIM for that number, and bring a prepaid SIM with you through the border crossing; one with a different number. If you are forced to provide your password, you can do so, however you can’t produce the 2FA token required in order to log in. Purchasing a prepaid SIM in a foreign country is a fairly common behavior.

If you use an authenticator application, such as Google Authenticator or 1Password, delete the application from your devices. Worst-case scenario, the border agent can force you to re-download the application, but you won’t be able to re-provision it without the backup codes you have waiting for you on the other side. There is a social element here, of course, such as “oh, I can only access my account from my home computer, I’m sorry, I don’t have it installed on this phone. I guess I’m locked out too!”

Locking Down Email

Once 2FA fails, preventing you from accessing your own accounts, a border agent may attempt to access your email to reset the accounts. Ensure that your devices have all been signed out of your email, and that no passwords are stored on the device. Ideally, use a completely different email account to provision your accounts – one that is not normally synced with your devices. This is sound security advice too for protection from everyday phishing.

You can go through the same dance to lock yourself out of that email account as well, of course, making those backup codes only available to yourself on the other side of the border. The 2FA for that email account can forward to some dead account that you’ve since closed, or take this as far as you want to go with it. Chances are a border agent is only willing to go so far down the rabbit hole before giving up, but YMMV.

Data Redundancy and the Cloud

While you may wipe your devices of personal data, a traveler often needs at a minimum access to their basic contacts and calendar. This information can be synced in iCloud; before arriving at the border, wiping your device will remove all of your personal information, including iCloud data, from the phone. Once you’ve arrived at your destination, using your 2FA backup code to re-sync your iCloud content will give you back your minimum working data to be functional again.

Your iCloud information is, of course, subject to warrants, however border crossings often go by much looser rules. The probability of obtaining a warrant is generally going to be low at a border crossing, unless you’ve got reason to believe otherwise, but there are also rules involving what soil your data sits on (rules that have been pushed on recently, mind you, in this country). Keeping your data in any online system will no doubt expose it to a warrant, but that’s not what we’re trying to protect ourselves from here.

Pair Locking

I’ve written about Pair Locking extensively in the past. It’s an MDM feature that Apple provides allowing you to provision a device in such a way that it cannot be synced with iTunes. It’s intended for large business enterprises, but because forensics software uses the same interfaces that iTunes does, this also effectively breaks every mainstream forensics acquisition tool on the market as well. While a border agent may gain access to your handset’s GUI, this will prevent them from dumping all of the data – including deleted content – from it.

Without pair locking, giving UI access to a border agent allows them to image much of the raw data on the device, which ultimately can give them a six-month or even twelve-month picture of your activity, rather than just what’s available from the screen.

Now, backup encryption is a great mechanism, and this too will break forensics tools, but you can also be compelled out of a backup password. If you are, all of the social media account passwords and other information can be extracted from your device. This is why I recommend pair locking on top of backup encryption: It completely prevents any such tools from connecting to the phone, even if your device UI and backup password have been compromised.

Of course, this means that you also can’t carry the pairing records around with you on the laptop you’re crossing the border with. These pair records, found in /var/db/lockdown on a Mac, need to go in with the backup codes and other files you have prepared for yourself in advance.

Fingerprint Readers

If any of your devices include fingerprint readers, it’s best to disable them and delete your prints before going through a checkpoint, for obvious reasons. Of course, this really plays into the position of “won’t” versus “can’t”, if you can still be compelled to give up your device passcode. Nonetheless, it raises the bar considerably, even against warrants, which can compel a fingerprint in the US, but in most cases cannot compel a passcode.

Misdirection vs. Lying

I never recommend lying to a border agent, no matter what country you’re in. Misdirection is a far better alternative. If, by design, you’ve set up your security so that you cannot access what they need yourself, that is, in my opinion, far better than simply telling someone you don’t have a social media account. Everything you say and do may end up in a file on you for the next time you pass through the border, and if you’re found to be lying, you’ll be denied entry.

Get your method down before you leave home. “My Twitter account only works from my home computer” is an honest and accurate response, and much better than getting caught in a lie later on about not having a social media account. Remember, many countries have access to open source social intelligence and already know the answers to some of the questions they ask you.

Use Your Brain

Depending on what country you’re trying to protect yourself in, it’s most important to use your brain and know what the country’s laws are. It’s easy to poke your chest out and refuse to give up any information, but that’s not always the path of least resistance. Temporarily divesting yourself of the ability to access your own data may give you better results on a security level, but remember that the beaten-with-a-wrench policy typically overrides a lot of your politics. If you’re going to be serious about protecting your data, then you also need to consider and weigh the consequences of doing so.

DISCLAIMER: You accept all of the liability yourself in taking any of this advice.


Slides: Crafting macOS Root Kits

Here are the slides from my talk at Dartmouth College this week; this was a basic introduction / overview of the macOS kernel and how root kits often have fun with it. There’s not much new here, but the deck might be a good introduction for anyone looking to get into developing security tools or conducting security research in macOS. Note: Root kits aren’t exploits; there’s no exploit code in this deck. Sorry!

Crafting macOS Root Kits

Resolving Kernel Symbols in a Post-ASLR macOS World

There are some 21,000 symbols in the macOS kernel, but all but around 3,500 are opaque even to kernel developers. The reasoning behind this was likely twofold: first, Apple is continually making changes and improvements in the kernel, and they probably don’t want kernel developers mucking around with unstable portions of the code. Secondly, kernel dev used to be the wild wild west, especially before you needed a special code signing cert to load a kext, and there were a lot of bad devs who wrote awful code making macOS completely unstable. Customers running such software probably blamed Apple for it, instead of the developer. Apple now has tighter control over who can write kernel code, but it doesn’t mean developers have gotten any better at it. Looking at some commercial products out there, there’s unsurprisingly still terrible code to do things in the kernel that should never be done.

So most of the kernel is opaque to kernel developers for good reason, and this has reduced the amount of rope they have to hang themselves with. For some doing really advanced work though (especially in security), the kernel can sometimes feel like a Fisher Price steering wheel because of this, and so many have found ways around privatized functions by resolving these symbols and using them anyway. After all, if you’re going to combat root kits, you have to act like a root kit in many ways, and if you’re going to combat ransomware, you have to dig your claws into many of the routines that ransomware would use – some of which are privatized.

Today, there are many awful implementations of both malware and anti-malware code out there that resolve these private kernel symbols. Many of them do idiotic things like open and read the kernel from a file, scan memory looking for magic headers, and other very non-portable techniques that risk destabilizing macOS even more. So I thought I’d take a look at one of the good examples that particularly stood out to me. Some years back, Nemo and Snare wrote some good in-memory symbol resolving code that walked the LC_SYMTAB without having to read the kernel from disk, scan memory, or do any other disgusting things, and did it in a portable way that worked on whatever new versions of macOS came out. 
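The general shape of that in-memory resolver can be sketched outside the kernel. The following is a Python sketch (a real resolver would be C running in a kext against the live kernel image, as in Nemo and Snare's code): it builds a minimal synthetic 64-bit Mach-O in memory so the example is self-contained, then walks its LC_SYMTAB the same way, with no disk reads and no memory scanning. Struct layouts follow <mach-o/loader.h>; the symbol and address are invented for the demo.

```python
# Userspace sketch of an LC_SYMTAB walk. A minimal synthetic Mach-O 64 image
# is built in memory so the example runs anywhere; layouts match
# mach_header_64, struct symtab_command, and struct nlist_64.
import struct

MH_MAGIC_64, LC_SYMTAB = 0xFEEDFACF, 0x2

def build_macho(symbols):
    """Build header + one LC_SYMTAB load command + nlist_64 table + strtab."""
    strtab, nlist = b"\x00", b""
    for name, addr in symbols:
        strx = len(strtab)
        strtab += name.encode() + b"\x00"
        # nlist_64: n_strx, n_type, n_sect, n_desc, n_value
        nlist += struct.pack("<IBBHQ", strx, 0x0F, 1, 0, addr)
    symoff = 32 + 24                      # header (32 bytes) + one command (24)
    stroff = symoff + len(nlist)
    header = struct.pack("<IiiIIIII", MH_MAGIC_64, 0, 0, 2, 1, 24, 0, 0)
    cmd = struct.pack("<IIIIII", LC_SYMTAB, 24, symoff, len(symbols),
                      stroff, len(strtab))
    return header + cmd + nlist + strtab

def resolve(image, wanted):
    """Walk the load commands, find LC_SYMTAB, return the symbol's n_value."""
    magic, _, _, _, ncmds, _, _, _ = struct.unpack_from("<IiiIIIII", image, 0)
    assert magic == MH_MAGIC_64
    off = 32
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", image, off)
        if cmd == LC_SYMTAB:
            _, _, symoff, nsyms, stroff, _ = struct.unpack_from(
                "<IIIIII", image, off)
            for i in range(nsyms):
                strx, _, _, _, value = struct.unpack_from(
                    "<IBBHQ", image, symoff + 16 * i)
                end = image.index(b"\x00", stroff + strx)
                if image[stroff + strx:end].decode() == wanted:
                    return value
        off += cmdsize
    return None

image = build_macho([("_proc_task", 0xFFFFFF8000ABCDEF)])
print(hex(resolve(image, "_proc_task")))  # -> 0xffffff8000abcdef
```

The portability Nemo and Snare achieved comes from exactly this: every load command carries its own cmdsize, so the walk keeps working as Apple adds or reorders commands between macOS releases.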

The __LINKEDIT segment and LC_SYMTAB weren’t loaded into kernel memory until around Snow Leopard, and so prior to that a number of root kits had no choice but to read the symbol table off disk by opening up /mach_kernel, which of course has also been moved around since. Today’s versions of macOS make it much easier for a developer to skirt around the privatized kernel symbols, and this is a positive thing, because developers don’t have to be so dangerous with their resolving code.

Nemo and Snare’s code has gotten a bit old and stale, so I thought I’d freshen it up a bit under the hood. Two things in particular needed some work to get the engine to turn over. There were some pointer offsets in LC_SYMTAB that weren’t being handled correctly, which broke the code on any recent version of macOS, and it also didn’t handle kernel ASLR, which made it unusable. I fixed the symbol table pointers so that we’re reading the right parts of LC_SYMTAB now, and I’ve also come up with a novel way to deduce the kernel base address by using some maths and a call that Apple has exposed to the public KPI to unslide memory, which subtracts vm_kernel_slide out for you.

The function vm_kernel_unslide_or_perm_external was originally added to expose an address to userspace from the kernel or heap. Exposing kernel address space to userspace seems like a really awful idea, but the function can be used for just that; if you feed it the usual kernel load address (0xffffff8000200000), it will subtract vm_kernel_slide for you, which isn’t exposed to the KPI, and give you the base kernel address in memory – really quite simple and elegant. No ugly hacks required. You don’t have to back-read memory to find 0xfeedfacf or anything else. Apple’s code is pretty intentional, so this isn’t a hack either; they’ve provided you with a way to unslide kernel ASLR from within the kernel, which is a lot safer than some of the ways devs were doing it before.
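One way to read the arithmetic described above, sketched in Python purely to show the math (the real call is vm_kernel_unslide_or_perm_external() inside a kext, and the slide value here is invented for the demo):

```python
# Sketch of the KASLR base-address math. Since unslide(x) = x - vm_kernel_slide,
# feeding the static load address through the KPI lets you recover both the
# slide and the live kernel base.
STATIC_BASE = 0xFFFFFF8000200000        # unslid kernel load address

def unslide(addr, vm_kernel_slide):
    """Stand-in for vm_kernel_unslide_or_perm_external(); the real function
    subtracts the (otherwise hidden) vm_kernel_slide for you."""
    return addr - vm_kernel_slide

# Example slide; in reality it is chosen at boot and unknown to the caller.
secret_slide = 0x1A000000

result = unslide(STATIC_BASE, secret_slide)   # what the KPI hands back
slide = STATIC_BASE - result                  # deduce the slide
live_base = STATIC_BASE + slide               # actual kernel base in memory

assert slide == secret_slide
print(hex(live_base))  # -> 0xffffff801a200000
```

Two subtractions and an addition, using only a publicly exported call: that's the whole trick, and why it beats scanning memory for Mach-O magic.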

In addition to these fixes, I’ve also added a simple usage example to demonstrate how to call a function once you’ve actually found the symbol. There are a few different conventions possible; I used a less old-school, more implicit technique to invoke proc_task to obtain the task for launchd in this example.

Click the link below to read the full source of the new and improved version of Snare’s kernel resolver. Special thanks to Snare for making his original code available.

Open Letter to the Law Enforcement Community

To my friends in law enforcement, and many whom I don’t know serving our country:

First, thank you. You do an incredibly difficult job that often goes unseen, and you put your life at risk to make this great country safer. For that, I am deeply grateful.

Many of you have suddenly found yourselves on the wrong side of history. Our country has what, by many appearances, seems to be an illegitimate president who may be the product of the Russian intelligence community, and possibly also of the head of the FBI, both of whom played a key role in manipulating or defrauding our election system. Within one week of taking office, Trump has shown himself a madman who uses racism and personal prejudice to fill in the gaps that his incompetence affords. Within just one week, our country has been transformed from what many had considered a free country struggling to overcome its differences into a place of fear, through the threatening of human rights and the enabling of racists deeply rooted in our own country, igniting hostility against anyone who differs from the majority in skin tone, religion, or sexual orientation.

With the stroke of a pen, livelihoods and families have been discarded, as many who have lived legally in our country for years now risk being violently deported, or banned from re-entering the country they call home – counted among the enemy not for committing a crime, but merely for existing. Science and technology are also being harmed by this racist practice, as many of those affected are scientists, engineers, or other productive human beings working for large technology innovators or defense contractors within this country. All of them went through several layers of vetting far beyond what the president has even submitted to, just to be in this country and get the jobs they have. We’re in very troubling times – times that frighten everyone, except those in power.

The key to abusing power, as has been done throughout history, and as we are no doubt in the midst of in this country, is the ability to control the chain of command from a high position. I’ve worked with many different individuals from various agencies, and I know most of you to be good people at the core who got into law enforcement to make a difference in the world; to make our country safer. I also know that there’s a chain of command and that it is held in high regard. When that chain of command is corrupted from the top down, as has already begun happening, there will come a time when you will have to choose between the brotherhood that holds your agency together, and the brotherhood that is your fellow man (and woman).

Our country has a long history of holding the line, whether it’s the thin blue line, the protest line, or other alliances within federal government. Over the next four years, you will very likely be forced to choose between doing your job and doing what’s moral and humane. If you allow one small compromise, eventually you’ll make more, and, like a frog boiling in a pot, you will find you have lost all that you believed in when you took this job.

Quitting your job isn’t going to fix anything, and that’s not what I’m asking you to do; any government will always find somebody to take your place. What our country needs is for men and women like you – those who still believe in our constitution, in freedom, and in human rights – to continue filling these positions, but to demonstrate firm and public acts of disobedience when given orders that violate our constitution and your conscience, and to hold your superiors accountable for unconstitutional orders that violate human rights. When you’re ordered by your chain of command to violate the freedoms that your relatives died to protect, no matter how small a compromise it may feel at the time, it is not only your decision, but your duty, to refuse such orders.

I understand the camaraderie and the sense of family you have in the many different areas of law enforcement. When I first started in forensics, many of you took me in like a brother, had me to your homes, introduced me to your wonderful families, and told me your stories. I understand that you would literally take a bullet for other agents or officers in the field. But there are other ways to protect your brothers too – namely, by saving them from the corruption that comes from being tainted by those among you that follow after racism, prejudice, and hate. The only way to do that is to hold the bad ones accountable, whether they’re your partner or a superior. You have an internal affairs department for a reason – it’s to protect your agency from corruption and from destroying the lives of both your brothers and their families. If you fail to use that when necessary, you run the risk of something far worse than betraying a brother… you run the risk of allowing many more of them to become corrupt, and eventually destroy the fabric of your agency, and the values that led you to take this job in the first place and put your life in danger daily for.

What I’m asking you to do is simple: I’m asking you to stand up for our constitution, and when you’re asked to do something that violates human rights, to refuse and publicly disclose. Refuse to carry out the orders you’re given, and immediately go to the press, or at least to your internal affairs department, to report what you’ve been ordered to do. As the White House continues to black out the media in order to establish its own state-run propaganda, public disclosure will become more and more vital to the world learning of such atrocities. This may cost you your job, but it may also save your soul. It’s cost many others that came before you far more. I hope you will find it your duty and your privilege to stand among those that fought to defend our country, rather than go quietly along with new marching orders that will no doubt trickle down into your agency over the next four years.

We are in uncertain times, and it’s hard to tell just where the landslide begins, or if it exists at all. I see the writing on the wall: we’re in store for some really bad human rights violations. I urge you to choose not to play a role in them, no matter how small. The slippery-slope argument is one that’s been used in the legal world for a very long time. It’s no cliché; the small decisions you make now will ultimately affect the much bigger decisions you may have to make down the road. I hope you’ll be on the right side of history when whatever plays out finally does.


Technical Analysis: Meitu is Crapware, but not Malicious

Last week, I live tweeted some reverse engineering of the Meitu iOS app, after it got a lot of attention on Android for some awful things, like scraping the IMEI of the phone. To summarize my own findings, the iOS version of Meitu is, in my opinion, one of thousands of types of crapware that you’ll find on any mobile platform, but does not appear to be malicious. In this context, I looked for exfiltration or destruction of personal data to be a key indicator of malicious behavior, as well as performing any kind of unauthorized code execution on the device or performing nefarious tasks… but Meitu does not appear to go beyond basic advertiser tracking. The application comes with several ad trackers and data mining packages compiled into it – which appear to be primarily responsible for the app’s suspicious behavior. While it’s unusually overloaded with tracking software, it also doesn’t seem to be performing any kind of exfiltration of personal data, with some possible exceptions to location tracking. One of the reasons the iOS app is likely less disgusting than the Android app is because it can’t get away with most of that kind of behavior on the iOS platform.

Over the life span of iOS, Apple has tried to harden privacy controls, and much of what Meitu wishes it could do just isn’t possible from within the application sandbox. The IMEI has been protected since very early on, so that it can’t be extracted from within the sandbox. Unique identifiers such as the UDID have been phased out for some years, and some of the older techniques that Meitu’s trackers do try and perform (such as using the WiFi or Bluetooth’s hardware address) have also been hardened in recent years, so that it’s no longer possible.

Tracking Features

Some of the code I’ve examined within Meitu’s trackers includes the following. This does not mean these features are turned on; many features appear to be managed by a configuration that can be loaded remotely. In other words, the features may or may not be active at any given time, and it’s up to the user to trust Meitu.

  1. Checking for a jailbreak. This code exists in several trackers, and so there are a handful of lousy, ineffective ways that Meitu checks to see if your device is jailbroken, such as checking for the presence of Cydia, /etc/apt, and so on. What I didn’t find, however, was any attempt to exploit the device if it was found to be jailbroken. There didn’t appear to be any attempts to spawn new processes, invoke bash, or exfiltrate any additional data that it would likely have access to on a jailbroken device. Apple’s App Review team would have likely noticed this behavior if it existed, also. Apple 1, Meitu 0.
  2. Attempts to extract the MAC address of the NIC (e.g. WiFi). There were a few different trackers that included routines to extract the MAC address of the device. One likely newer tracker realized that this was futile and just returned a bogus address. Another performed the sysctl calls to attempt to obtain it, however the sandbox would similarly return a bogus address. Apple 2, Meitu 0.
  3. Meitu uses a tool called JSPatch, which is a very sketchy way of downloading and executing encrypted JavaScript from the server. This is basically an attempt to skirt iOS’s restrictions on unsigned code execution by downloading, decrypting, then eval’ing it… but isn’t quite evil enough that Apple thinks it’s necessary to forbid it. Nonetheless, it does extend the functionality of the application beyond what is likely visible in an App Store review, and by using Meitu you may be allowing some of its behavior to be changed without an additional review. No points awarded to either side here.
  4. The various trackers collect a number of different bits of information about your hardware and OS version and upload that to tracker servers. This uses your network bandwidth and battery, so using Meitu on a regular basis could consume more resources. There wasn’t any evidence that this collection is done when the application is in the background, however. If the application begins to use a lot of battery, it should gravitate towards the top of the battery usage application list. Apple 3, Meitu 0.
  5. Code did exist to track the user’s location directly; however, it did not appear to be active while I used the app, as I was never prompted to allow access. If it does become active, iOS will prompt the user for permission. Apple 4, Meitu 0.
  6. Code also existed to indirectly track the user’s location by extracting location information from the EXIF data in images in the user’s photo album. Any time you take a photo (with location services enabled for the camera), the GPS position where the photo was taken is written into the picture’s metadata, and other applications can read it if they’re granted access to the photo album or camera. This can be aggregated to determine your home address and potentially your identity, especially if it’s correlated with existing data at a data mining company. This is a very clever way to snoop on the user’s location without them ever being prompted. It was not clear if this feature was active, however the hooks did exist to send this data through at least some trackers compiled into Meitu, which appeared to include the MLAnalytics and Google AdMob trackers. Apple 4, Meitu 1.
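
The EXIF point above can be made concrete. Below is a minimal Python sketch (not Meitu’s code; the tag values are made up for illustration) showing how the degree/minute/second rationals stored in a photo’s GPS EXIF tags reduce to the decimal coordinates a tracker would upload:

```python
# Sketch: how GPS coordinates hidden in photo EXIF data can be recovered.
# EXIF stores latitude/longitude as degree/minute/second rationals plus a
# hemisphere reference ("N"/"S", "E"/"W"). Any app granted photo-album
# access can read these tags and aggregate them over time.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF degrees/minutes/seconds triple to decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: EXIF tags as they might appear for a photo taken in San Francisco.
lat = dms_to_decimal(37, 46, 29.7, "N")
lon = dms_to_decimal(122, 25, 9.8, "W")
print(round(lat, 4), round(lon, 4))  # 37.7749 -122.4194
```

A handful of these decimal points clustered around the same spot at 11pm every night is, for all practical purposes, your home address.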

Other Observations

  1. Code existed to use dlopen to link directly to a number of frameworks, which can often be used by developers to invoke undocumented methods that are normally not allowed by the App Store SDK. Chronic reported this in his own assessment, but indicated that it was never called. I have since discussed some of my findings with him – namely, suspicious long jumps in the code involving pointer arithmetic that indicate the calls may have been obfuscated. It is very likely, however, that these calls no longer work in recent versions of iOS due to ASLR. The entire issue is a moot one anyway, as I’ve been informed that weak linking in this fashion is now permitted in the App Store, so long as the developer isn’t using it as a means to call unsupported methods. I did not see evidence of that happening.
  2. Meitu does obtain your cellular provider’s name, using an authorized framework on the device, as well as observes when that carrier changes (possibly to determine if you’re switching between WiFi and LTE). This appears to be permitted by Apple and does not leak information beyond what’s stated here.
  3. Code was found that makes private framework calls, but as Chronic pointed out, they no longer work. This was likely also old code lying around from earlier versions of the app or trackers.

A number of these trackers were likely written at different times in the iOS life cycle, and so while some trackers may attempt to perform certain privacy-invading functions, many of these attempts fail against recent versions of iOS. A number of the broken, no-longer-used functions likely did work at one point, until Apple hardened the OS against them.


Meitu, in my opinion, is the quintessential data mining app. Apps like this often provide trivial functionality, as fart and flashlight apps do, in order to get a broad audience to use them and add another data point into a series of marketing databases somewhere. While Meitu denies making any money off of using these trackers, there’s very little other reason in my mind to justify seeing so many built into one application – but that is a judgment call for the user to make.

Because of all of the tracking packages baked in, Meitu is a huge app. I cannot vouch for its safety. There may very well be something malicious that I haven’t found, or perhaps something malicious delivered later through their JSPatch system. It’s a big app, and I’m not about to give them a free complete static binary analysis.

At the end of the day, using Meitu isn’t likely to adversely affect your system or steal your data, however it’s important to understand that there is a fair bit of information that could be used to track you, as if you were cattle, in some marketing / data mining system used by advertisers. Your adversary here isn’t China, it’s likely the department store down the street (or perhaps a department store in China), but feel free to insert your favorite government conspiracy theory here – it could possibly be true, but they have better ways to track you. If you don’t mind being tracked in exchange for giving yourself bug eyes and deleting your facial features, then Meitu might be the right app for you.

Configuring the Touch Bar for System Lockdown

The new Touch Bar is often marketed as a gimmick, but one powerful capability it has is to function as a lockdown mechanism for your machine in the event of a physical breach. By changing a few power management settings and customizing the Touch Bar, you can add a button that will instantly lock the machine’s screen and then begin a configurable countdown (e.g. 5 minutes) to lock down the entire system, which will disable the fingerprint reader, remove power to the RAM, and discard your FileVault keys, effectively locking the encryption, protecting you from cold boot attacks, and preventing the system from being unlocked with a fingerprint.

One of the reasons you may want to do this is to allow the system to remain live while you step away, answer the door, or run to the bathroom, but in the event that you don’t come back within a few minutes, lock things down. It can be ideal for the office, hotels, or anywhere you feel your system may become physically compromised. This technique offers the convenience of being able to unlock the system with your fingerprint if you come back quickly, but the safety of having the system secure itself if you don’t.

To configure this, we’ll first add a sleep button to the Touch Bar, then look to command-line power management settings to customize its behavior.

Adding a sleep button to the Touch Bar is pretty straightforward. Launch System Preferences, then click on Keyboard. At the bottom of the window is a button labeled Customize Control Strip.


To add a sleep button to the Touch Bar, choose which of the four existing buttons you can live without. Most people choose the Siri button, because it’s accessible from both the dock and the menubar as well. Drag the icon labeled Sleep from the window onto the Siri button on the Touch Bar, and the button will turn into a sleep button. If you would also like a screen lock that does not perform any lockdown function while on AC power, you can also drag the Screen Lock button onto the Touch Bar, and use that for when you don’t want a lockdown (it may still lock down on battery, as the system will sleep whenever it’s on battery). Once you’re finished customizing the Touch Bar, click Done.

OK! So we’ve got a sleep button on the Touch Bar – this is our future lockdown button; it can be triggered a lot faster than holding in the power button, and even better, will be able to lock down the system without losing all your work.

By default, however, putting the machine to sleep on its own doesn’t really lock anything down, and you can still unlock it with your fingerprint when it wakes, so next we’re going to need to change the system’s sleep behavior. There are a number of hidden knobs that can be set on the command-line to change how power management behaves on sleep.

We need to set a few different options. First, we need the system to go from sleep mode into what’s called hibernate mode after a preset period of time. In our example, we’ll use 300 seconds (five minutes). Hibernate mode is a deep sleep, where the system commits its memory contents to disk and shuts down the processor. Until the system enters hibernate, you’ll still be able to unlock the device with your fingerprint – convenient for a few minutes, but not something we want indefinitely. From a terminal window, run the following commands to adjust the various sleep and hibernate timers:

sudo pmset -a autopoweroffdelay 300
sudo pmset -a standbydelay 300
sudo pmset -a standby 1
sudo pmset -a networkoversleep 0
sudo pmset -a sleep 0

Next, there is a parameter named hibernatemode that alters the behavior of hibernate in a wonderful way. When set to the value 25, this parameter will cause macOS to remove power to the RAM, which thwarts future cold boot attacks against the system (a few minutes after the power is removed, at least).

sudo pmset -a hibernatemode 25

Lastly, a hidden setting named DestroyFVKeyOnStandby can be set that will cause hibernate mode to destroy the FileVault keys in memory (and in stored memory), effectively locking the system’s encryption.

sudo pmset -a DestroyFVKeyOnStandby 1
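
Once these are set, it’s worth reading the values back to confirm they took. `pmset -g` prints the current settings one per line; the following Python sketch (the parsing assumes pmset’s usual name/value line format) checks the four knobs that matter most for this lockdown setup:

```python
# Sketch: sanity-checking the lockdown settings after applying them.
# "pmset -g" prints one "name  value" pair per line; this parses that
# output into a dict so the relevant knobs can be verified in one pass.

EXPECTED = {
    "standbydelay": "300",
    "standby": "1",
    "hibernatemode": "25",
    "DestroyFVKeyOnStandby": "1",
}

def parse_pmset(output):
    """Parse `pmset -g` style output into a {setting: value} dict."""
    settings = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            settings[parts[0]] = parts[1]
    return settings

def check(settings):
    """Return the expected settings whose current values don't match."""
    return {k: settings.get(k) for k, v in EXPECTED.items()
            if settings.get(k) != v}

# Example against a captured snippet of output; {} means all four match.
sample = """
 standby              1
 standbydelay         300
 hibernatemode        25
 DestroyFVKeyOnStandby 1
"""
print(check(parse_pmset(sample)))  # {}
```

On an actual Mac you would feed parse_pmset the live output, e.g. subprocess.check_output(["pmset", "-g"], text=True).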

With all of these put into place, you can now put your system on a timed lockdown. Here’s how it works:

  1. The user presses the sleep button on the Touch Bar
  2. The screen immediately locks, the system goes to sleep and a five minute timer starts
  3. If the user unlocks the machine within the five-minute period, all services are restored and they can use their fingerprint to authenticate
  4. Once the timer expires, the system transitions from sleep mode to hibernate mode
  5. Upon entering hibernate mode, power is removed from the RAM and the File Vault keys are destroyed in memory
  6. When the user wakes the machine, they will be prompted for their password in order to unlock File Vault
  7. Once the user has authenticated with a password, they will be prompted a second time to authenticate with their fingerprint (or password); this is the restored state from when the system was first locked

This type of setup works well in the workplace, where you may walk away from your machine often, or while in public or any other venue where you may temporarily leave your system for a short period, but are concerned about physical security. If you are a political dissident or someone else who may be targeted, using this system provides a convenient way to manage your system to keep the fingerprint reader useful, but also lock down if an unexpected event occurs and your devices are physically compromised.

You can restore all power management defaults in System Preferences if you decide to back out of this configuration, and of course depending on your level of paranoia, you may wish to adjust the hibernate timer to one minute or ten, to your liking.

On Christianity

I’ve often been asked why an intellectual type guy such as myself would believe in God – a figure most Americans equate to a good bedtime story, or a religious symbol for people who need that sort of thing. Quite the contrary, what I’ve discovered in my years of being a Christian is that it is highly intellectually stimulating to strive to understand God, and that my faith has been a thought-provoking and captivating journey. I wasn’t raised in a Christian home, nor did I have any real preconceived notions about concepts such as church or the Bible. Like most, I didn’t really understand Christianity with anything other than an outside perception for the first part of my life – all I had surmised was that God was a religious symbol for religious people.

Today’s perception of Christianity is that of a hate-filled, bigoted group of racists, a title that many self-proclaimed Christians have rightfully earned for themselves. This doesn’t represent Christianity any more than the other stereotypes do, and most people know enough about the Bible to know that such a position is hypocritical. Since 1993, I’ve been walking in the conviction that the God of the Bible is more than just a story, that he’s nothing like the stereotypes, and that it takes looking outside of typical American culture to really get an idea of what God is about. In this country, I’ve seen all of the different notions of what a church should be; I think most people already know in their heart who God is, and that’s why they’re so averse to the church.

The term born again Christian is a difficult one to pin down. The question most people ask is: what in the world do they believe?

This largely depends on who you ask.

Ironically, it’s become quite difficult to get past the dogma and the cultural facade to truly understand the Christian faith in this country. It can be a lifelong process to truly understand not only what we believe, but also whom it is we believe in and why. Much of the church tends to go off-course and read too deeply into things, leaving a lot of churches representing less than what most would consider the basic tenets of Christian behavior; the ones you rarely hear about, though, are the ones who usually got it right – they’re out there doing what they’re supposed to do in loving people, trying to live right, and living their lives with God in mind and heart. They’re not judging people, pushing political agendas, or the sort. The Bible teaches us to know whom we believe, and in today’s world, there is an overwhelming amount of information available to accomplish this. I believe qualifying one’s own faith is critical to having real faith, and the basis of what I believe marks true believers – a strong desire to know their creator. If God really is the most important thing to us, shouldn’t we be taking every opportunity to study him?

A Little Background

I began with a simple faith, just like everyone, and was at an early point believing solely out of conviction as a teenager. Over time, my experience and faith began to build as I watched God work in my life; I’d undergone a dramatic personal change, and directly credited that to my faith. As I matured, I developed a strong and healthy curiosity in wanting to know just why my beliefs were qualified from a purely intellectual (as opposed to experiential) point of view. I spent several years studying textual criticism, apologetics, ecclesiology, eschatology, apocryphal manuscripts, writings of early church fathers, and the writings of great historians such as Josephus, Pliny, and other sources. I got a bit miffed at some of the academics in the field who came off as knowing better than the rest of us, so I taught myself the Greek language and put my hands on copies of the manuscripts we base our canon on (such as the Codex Sinaiticus, Codex Vaticanus, and many smaller digital fragments), and observed firsthand what was written about God without all the messy English translation to get in the way. Unlike most students who study this in seminary and later abandon their faith, I found my studies to only strengthen mine.

All of this information eventually led me to put together a solid context to better understand what it was that I believed, and how it reconciled to science. The historical evidence gradually painted a context around this collection of books we call the Bible, granting me a deeper understanding of what God has said.

Ironically, I received a rather significant amount of pushback from certain other Christians about studying history – something you’d think everyone should be doing. I don’t quite understand why most evangelical Christians fail to study anything beyond the store-bought Bible they have, which, in English, is quite possibly the most poorly translated version of scripture in existence. Many argue that because the Bible is inspired directly by God, it is the only relevant text to read. That statement makes a lot of dangerous assumptions though – namely that God’s inspired word depends largely upon what time period and geographical location you happen to live in. Even today, there are many different scriptural canons – they can’t all be right. The Ethiopic Canon, for example, includes the book of Enoch as well as many other books not present in the Bible we use, and the Catholic Bible also includes additional books (Old Testament apocrypha). Throughout history there have been councils, debates, criticisms, and crimes committed over the issue of what is God-inspired. To simply trust that everyone throughout history was directed by the hand of God in putting together a Biblical canon is naive and leaves truth as relative. Our canon is in better condition than I believe it ever has been, but it’s still imperfect. Even our Bibles go through periodic rewrites (such as the latest NIV translation, which was redone during the short time I have been a Christian).

But more importantly, without digging deeper and studying what was written in history, what was written by early church fathers and by other sources, you can’t really establish a true context around what the Bible is really saying, and where many events originated. A better-known example of this is Josephus’ documentation of possibly the earliest use of the terminology behind “binding” and “loosing”; he used it to describe punishing and liberating people. Many completely misappropriate that terminology when it is used in the Bible to imply some sort of “name it and claim it” scam. He also set the tone in explaining historical events that help to explain much of the context around many areas of the New Testament.

Reading the Bible today is much like looking through a series of foggy windows. The first window may be the translation from the original Greek language, or possibly even further back to the oratories in which many manuscripts were copied by ear. The next is the indoctrination of the translation, followed by the historical context. Each window further distorts what God actually said, bending the light just enough to miss important concepts. Other fogged up shards of glass might involve understanding the Gnostic and Separatist movements, and their attempts to redefine Christianity, or Apocryphal texts by early church fathers or forgeries from pseudonymous authors. The point is, the finished product in the store went through several processes to get there, and each process slightly bleached out a little of the meaning.

In my own journey to understand God, my beliefs have stemmed from the sum total of what I’ve learned to be inspired scripture, on a personal conviction, in the context of historical and literary knowledge – as far separated from orthodoxy and indoctrination as this knowledge can get without losing its meaning. My conclusions about the true meanings behind scripture are pretty much on par with what many modern-day theologians believe, but they are at the very least tempered with some sobriety and discernment over and above typical dogmatic church folk.

You’re probably already starting to surmise that my beliefs are not merely rays of sunshine and ginger snaps, void of intellectual reflection. While it’s true that Christianity is ultimately based on faith, I’ve found that I didn’t need to commit intellectual suicide to accept Jesus as God, and the claims about him as true. After a bit of introspection, I’ve arrived at some characteristics of Christianity that I have considered in the intellectual part of my faith.

Christianity Defined

Christianity is sometimes obscure to outsiders, and with the many different subcultures inside of Christianity, it can be difficult to get more than a looking-through-wet-glass resolution of what it really is. The focus of Christianity (and its root word) is obviously Christ. Christianity is focused around the man / God Jesus Christ and what Christians believe as a reconciliation between man and God through His resurrection from the dead.

Christians believe that God created the world for man.

No, we can’t agree on how he did it, and the truth is many of us don’t care.

Most of us believe that God had a direct hand in the design and manifestation of life on Earth. Some take Genesis to refer to literal days. Others use a scriptural loophole to change it into a thousand years per day. Yet others believe that the Biblical account of creation was an allegory altogether, and even have reasonable claims from the Septuagint to back it up. One thing most everyone believes, at least, is that it makes perfect sense that, as the chief architect and scientist behind the universe, whatever design he used would be fantastic and ingenious. Do I believe God used science to create the world? Of course. He probably created the science the world now studies, and had a hand in everything from the laws of physics to the natural elements we now use to dismiss His own existence.

What’s more important than how God created the world is why he did it. To have a people to call his own. The New Testament makes no mistake about our being grafted into the family of God upon our conversion. The second reason we were created is to thrive. Christians believe that we were put here with purpose, and science is beginning to reveal to us that we were put in an ideal place in the universe to discover, to learn, and to thrive. The key issue surrounding our origin on this planet isn’t so much the how as it is the purpose with which it came to be.

Shortly after man came about, man also fell into sin very early on – during a time where it is written that God walked among his people. Sin could be defined as rebellion toward God. This left the rest of the human race subclassing a sinful lineage, and led God to hide his face from us because he can’t be in the presence of sin. Christians believe that we (mankind) fall short of the perfection and honor God originally created us for.

Christianity teaches that we’re all deserving of death and separation from God because of our sinful nature. Christianity is based on Jesus (as God’s son) voluntarily sacrificing his life to absorb our share of the death penalty we earned from sin, freeing us from the slavery we were born into. This was qualified when Jesus rose from the dead three days after his death, a death fully verified by the brutal Roman military. Jesus was mocked, beaten beyond recognition, and then crucified. History writes of his miracles and accounts of Roman guards initially ordered to remain silent. The death of Christ was for the purpose of absorbing the full penalty for our sins (on our behalf) so that we didn’t have to suffer the fate of a very real hell we all deserve. All of this was prophesied in what were, at the time, Jewish manuscripts 400 years before his coming. These ended up forming much of our Old Testament today.

Jesus’ ministry set into play the notion that human beings have value simply because they’re human; that we all have intrinsic value and deserve love. In spite of the bigotry many act with today, loving your neighbor is still 50% of the entire gospel.

With that said, the looming question is why believe any of this. There are countless religions out there, and then of course there’s the religion of not having a religion. There are many observations I’ve made about Christianity over my life as a Christian that make compelling intellectual arguments, or at least things I myself have strongly considered.

Christianity’s Timeline is Complete

The books of the Bible are, by far, the oldest and most reproduced documents in existence. Not only are they extremely old, but more importantly they claim to cover history, sometimes through allegory, from the beginning of human life. Having these qualities, the scriptures are most likely to be authoritative in explaining why we ended up where we are, even if they don’t attempt to tackle the how. If God is true, then he would have been around when everything started, and that is expressed throughout the literary works of Genesis, and acknowledged in the later historical books.

It’s much more difficult to give credit to a religion established centuries later. Take Islam, for example: it didn’t arise until roughly six hundred years after Christ had already come; if the Hebrew God were a false god, then certainly any religion showing up six centuries later is also false. If the Judeo-Christian God is real, he has bragging rights that he was here first.

NOTE: It’s interesting to note that many Muslims claim that the Qur’an is a pure text, and that the Bible is the corrupted text. As a point of fact, the Qur’an is widely known to have had many different manuscript variants, just as the OT and NT do; however, they are believed to have been later redacted into a final copy, with the originals burned. This, in a post-Constantine era where reproduction of manuscripts had become much more reliable (in fact, block printing was starting to be introduced). In contrast, many variations of Biblical scripture still exist today, and are reconciled through a process called textual criticism, explained next.

Textually, the Scriptures are Reliable

At some point you’ve got to validate the credibility of the manuscripts themselves, and not just take it on someone’s word that they’re reliable or that they say what other people believe they say. What is and isn’t the inspired word of God has been a debate we’ve been having for hundreds of years. What we have today isn’t the word of God – it’s a critical text put together by scholars reconciling thousands of variants of manuscripts of the word of God to reflect our best assessment of what we think the real word of God was.

Reconciling hundreds of manuscripts is difficult enough; on top of that, the English language is one of the least precise languages to translate into. Greek doesn’t translate cleanly into English the way many Germanic languages do, and so the translators are often forced to compromise by substituting a more dumbed-down word or phrase to prevent a passage from being misread. Those meanings are decided by scholars who apply various indoctrinations to translate words the way they believe they were intended based on current orthodoxy (which is, of course, ultimately based on past translations), and so you end up with a slowly degrading feedback loop over time; a garbage-in-garbage-out problem of sorts. If this doesn’t seem bad enough, many publishers, such as Zondervan, have gone to great pains to make their version of the Bible easy to read at the expense of castrating what the original manuscript intended to say.

This kind of accidental (or reckless) indoctrination happens all the time. To give one example, Dr. Peter Williams outlined the treatment of slavery in a lecture; the word “slave” rarely ever appeared in Bibles until the 1980s, but is ubiquitous in modern Bibles of all languages today. If you watch his lecture, you’ll see just how that kind of leap was made, and how intricately biblical meanings intertwine with social understanding – a dangerous way to treat scripture.

So in light of all this, the obvious question is: how reliable is scripture? Well, if you can cut through all of this by applying some critical thinking, and look at scripture as a literary source, there are very few theology-shattering differences among the more trusted manuscripts we use. That doesn’t stop many churches from believing what they want, based on tradition, rather than digging into the deep theology of manuscript. Unfortunately, there are many churches that choose to remain lay and leave that kind of research to the same scholars who have been recklessly adding random words to the Bible for the past 30 years.

So when I talk about reliability of scripture, I’m speaking directly to its integrity, as opposed to its literary interpretation or its translation. Much of the Bible’s integrity was confirmed with the discovery of the Dead Sea Scrolls between 1947 and 1956. Although most of these scrolls were written in Hebrew, they provided samples of scripture written before AD 100, and were surprisingly close to the manuscripts we already had in our possession.

These, along with hundreds of other manuscripts, helped in building a critical text of high integrity. The most widely accepted New Testament critical text is called the Nestle-Aland text. This critical text incorporates hundreds of different manuscripts and papyrus fragments from all over the world in dozens of languages, and documents notable variations. This is where the majority of translations ultimately stem from, which is why I bought my own copy. The Old Testament Masoretic text has also undergone heavy review, and new research in this field is particularly promising in restoring the origins of a language which had become convoluted due to many mistakes and the lack of vowels in the language’s infancy.

NOTE: The original King James was based on translations traced back to Erasmus’ Textus Receptus, which was later discredited as a corrupt translation, due to the fact that Erasmus was unable to find a high quality source copy to work from. Ironically, many “tweaks” originally made by King James remain in today’s translations; for example, the book of ‘James’ is really the book of ‘Jacob’ in the Greek, but many believe it was renamed to ‘James’ to make it sound more English in nature. For some reason, scholars are too afraid to fix the name in our English bibles for fear that it will cause a revolution.

The Nestle-Aland critical text of the New Testament is fairly solid, although errors have been found and some controversial decisions have been made. For example, the verse in Paul’s first letter to the Corinthians about women remaining silent in the churches was suspiciously moved around in several different manuscripts, suggesting that it may have been added at some point by a scribe, and then moved around by other scribes to make it fit grammatically. Most Christians explain the logic away anyway, or pass it off as Jewish culture (although there are a few misogynistic sects that take it literally), but it’s entirely possible the verse might not even have been part of the original manuscript! Most larger issues have been resolved in the past decade. The newest release of the NIV has included additional warnings about passages which were not found to have strong witnesses, such as the story of the adulteress about to be stoned. The mad rush toward Gnosticism spurred by literary works such as the Da Vinci Code seems to have made Bible scholars more honest and forthcoming in recent years. What’s nice is that you can read the critical text and see footnotes containing the many different variants from manuscript to manuscript. It’s very easy to see how things originally got out of whack by having all of the information right there to review.

What essentially decides whether a piece of manuscript is reliable is how many larger witnesses and root texts it has, and where those texts originated. I spent a lot of time researching scriptures in the New Testament regarding wine (as many of the Christians in the South have adopted a modern-day version of asceticism). I grabbed some electronic copies of the Codex Vaticanus and Codex Sinaiticus and studied the verses in their uncial character format (all caps, no spaces). It’s impressive to cross-reference 4th-century and 13th-century uncial manuscripts and find that the text is either identical, or nearly so.

I continued to perform several different examinations of key verses in both the manuscripts and digital copies of fragments I had available, as well as my copy of Nestle-Aland (which is much more thorough), and was pleased to find that not only were they consistent with the manuscripts we had, but that decisions made throughout the criticism process appeared to be quite sound, and almost uneventful. I’m fully convinced in my own mind that the critical text we have today is by far the best we’ve ever had.

To add some geek-worthiness to my endeavor, I’ve recently fed much of the Greek New Testament into a Markovian-based language classifier. This is a technique used in machine learning that allows the computer to identify and weigh the presence of syntactic patterns across various texts, and it can compare different types of documents with high accuracy. I used it for a different purpose here: to extract critical patterns of authorship. I found that, on a syntactic level, even a computer finds significant consistency between various manuscripts of the same author. In other words, even a computer is capable of seeing such a striking resemblance between various books that it believes they are consistent with their purported authors. Cool stuff – I’ll write a paper about it some day. (Note to scholars: the first critical pattern that popped out was “the kingdom of God”, in Greek of course.)
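
The general technique can be sketched in a few lines of Python (this is a toy illustration with English strings, not the classifier I ran against the Greek text): build a Markov-style profile of each text from its adjacent-word transition counts, then compare profiles with cosine similarity. Texts that share phrasing patterns score closer together.

```python
# Sketch of the technique: profile each text as a Markov chain of
# word-to-word transitions, then compare profiles with cosine similarity.
import math
from collections import Counter

def profile(text):
    """Count word-bigram transitions, a crude Markov chain fingerprint."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def cosine(p, q):
    """Cosine similarity between two sparse transition-count profiles."""
    dot = sum(p[k] * q[k] for k in p if k in q)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Toy demo: two samples sharing phrasing score higher than an unrelated one.
a = "the kingdom of god is at hand repent and believe the kingdom of god"
b = "seek first the kingdom of god and all these things shall be added"
c = "shipping manifests list cargo tonnage and port of origin for each vessel"
print(cosine(profile(a), profile(b)) > cosine(profile(a), profile(c)))  # True
```

A real classifier over the Greek would use longer contexts and proper smoothing, but even this crude version shows how recurring phrases like “the kingdom of God” surface as shared transition patterns.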

Once you get outside the critical Greek text, however, things become a little dicey. As I mentioned, the English language simply isn’t a very good one, and on top of that many publishers now improvise the language to make it more readable. Mind you, there are no significant doctrinal changes (unless you use a completely broken text like the CEV, ESV, GW, etc.), but a level of clarity goes missing. For example, a verse in the Greek says, “bind up the loins of the thoughts of your minds”. In Greek, “loins” literally means “reproductive parts”; so the scripture is saying to bind up the reproductive parts of your mind – significant imagery there, and lots of power. One English translation replaced it with a mere “be sound minded”. What was wrong with the Greek version? Granted, there is room for idiomatic phrases, but at some point you’ve got to draw the line. If you’re studying Greek, I’d advise picking up a lexicon that isn’t indoctrinated (such as Oxford’s Greek-English Lexicon – the big, heavy one) to get the full range of meaning.

As for historical integrity, it is remarkable how well the texts have survived.

Harmony in Logic

It seems odd to describe a God who became man, then died and was resurrected to reconcile man to God, as logical. To most, “logical” means an amalgamation of science, mathematics, and reasoning – like the Matrix, or a good quality toaster. Yet I’ve come to find a beautiful harmony of logic in many parts of the big picture of the story. Logic is constructed in layers, just as any great architecture is. If a single piece of the framework doesn’t fit, the entire structure falls.

There are countless examples of beautiful logic in the Bible. Some of them have taken me years to understand; others have taken scholars lifetimes. Now consider that the Bible, as we know it, was written by more than 40 different authors from all walks of life, over a period of about 2,000 years, through times of peace and war. It should be a mess, but it’s not. It’s relatively easy to forget this when we see the polished leather-bound book sitting in the book store. In spite of the Bible’s unique and diverse background, it contains some of the most beautiful logic to be found, transcending hundreds of years of writing and surpassing the philosophical wisdom of others. Someone once said, “If Jesus was made up by someone, I want to know who so I can worship him.”

If indeed God is real, then you’d expect that he’d also be quite smart, with a firm grasp on logical concepts – something most don’t normally picture in God. If God really is God, then it certainly doesn’t make any sense that he’d be an all-powerful, but stupid God. He must surely have invented the blueprints for the atom, particles, matter, energy, and biology. It’s very non-traditional in our culture to view God as a really smart individual, but if he’s real, then genius and science are a part of his makeup. With that, comes logic.

If Man Made God, He’d Be More Passive

I often hear individuals dismiss Christianity as a belief created by man. If this is true, then I wonder why we didn’t create a God who was more passive and catered to our emotional needs, who would tell us that everything was going to be alright and let us live for ourselves. The Bible isn’t for the weak minded or the emotionally incomplete. It speaks of a powerful, perfect God and explains that if one is to follow Christ, they must deny their own wants and desires and take up their cross. It tells us that God hates sin and that we should deny our own pleasures when they are sinful. It guarantees trials, tribulation, and persecution – and that’s for the ones who are living right. Christianity is one of the hardest things to live out – especially if you live in a country that persecutes Christians.

God’s gift of salvation is free, but it costs everything to follow Christ – our entire life. If man created God, we certainly went to a lot of trouble to inconvenience ourselves. I don’t know anyone who, if given the option, would choose to share their money, live a moral lifestyle, and constantly deny their own desires – except, of course, those who have been changed by the power of God.

In America, it’s very easy to become a mediocre Christian, go to church once in a while and try not to swear too much. If anyone is guilty of creating a God, it seems that this dubious honor goes to the individuals who have taken the Bible and made themselves a pansy, self-serving God that serves their own desires. That’s not the God of the Bible, however. The American god of comfort is undoubtedly a god created by man, but the God of the Bible is most definitely not a work that would have been voluntarily created by any human, let alone 40.

Scientifically, Evidence Points to a Master Architect

You don’t have to believe in literal creation or intelligent design to be a Christian. You can even believe in Darwinian evolution if you want. I used to accept many theories blindly. Once I began researching them, I realized that the evidence we have today overwhelmingly contradicts some of them – including Darwinism, in my personal opinion. Now, I’ll be the first to admit, I have no idea how we got here – I just wish more people of science would admit the same thing, and I wish more Christians would admit that we don’t need to know the answer to that in order to be a Christian.

Personally, I find macro-evolution theories lacking, but so are many of the Christian theories. Both have become just as much a religion, and just as unfalsifiable. Now if, by some amazingly sound evidence and scientific method, Darwinian evolution were somehow proven in a lab, it would by no means trainwreck my faith. I’m not naive enough to suppose that there can’t be more to God’s way of getting things done than what Sunday School teaches. After all, Genesis gave us the end results and the purpose; it didn’t give us a method, and many scholars even argue that the book was an allegorical account of an event later revealed. As I said before, I believe that the science we study today was inspired by God – a belief that is in good company with many respected mathematicians. I see God’s fingerprints all over what could be design patterns and anthropic tuning of physics and life – things that could have gone any other way, but didn’t. The natural processes that we observe, and that many use to explain away God, could just as easily have been the scientific building blocks used by a master architect.

Unfortunately, though, modern science isn’t always about finding truth, and anyone who’s been in the scientific community knows that papers need to get published, and data needs to get used to confirm findings. Sometimes, these things take precedence over better science. Sometimes not. Scientific authority is subject to the same signal-to-noise problem in its feedback loop as theology is. For many reasons, science can sometimes be married to an agenda. It’s not the scientific process of Darwinian evolution that many Christians take issue with, but rather the philosophical conclusions seemingly married to it by its constituents.

Stephen Jay Gould’s theory of punctuated equilibrium ironically uses the same basic observations that Christians use in their creation theories, but with an atheist’s spin. The problem, I suppose, is that science can be predisposed to philosophically discount the divine, so that no matter how compelling the evidence of God is, the conclusions drawn from any evidence already presume atheism. As I said, I have no idea how we got here… intelligent design? Darwinian evolution? Literal creation? Punctuated equilibrium? I haven’t seen a theory yet that doesn’t have some big holes in it. We might as well believe that God really did bury dinosaur bones if we’re going to accept any of those without criticism.

God Became One of Us

Usually when we think about God, we think about an all-powerful king in richly colored clothing who would most likely want to be served on earth, and contribute to amazing social advances that would solve world problems – all from the comfort of his bathtub. It’s very difficult to imagine a God that became one of us and washed our feet, or took on our sin to become as dirty as us. The climactic peak of the Gospel for me is not the resurrection (although that’s the most important aspect), but rather when Jesus spoke these words while on the cross:

Eloi, Eloi, Lama Sabachthani?

These words translate to “My God, My God, why have you forsaken me?”, and they bring a rush of images when thinking about Jesus as the bearer of all sin. Jesus himself never sinned – he was the perfect human. He overcame sin. When he was on the cross, however, the weight of the sin of the entire world was upon him, paralleling the sin offering required in the Old Testament (where the sin of the camp would be transferred onto the sacrifice). The undeniable realization is that God the Father had to turn His back on His son because of the sin that was on Him – our sin.

And so we get a brief glimpse into the relationship between God and his son, and observe just how much of a stench the sins of the world are to God. The act of bearing the sins of the world illustrates the love Jesus had for his people, as he (God) was willing to be separated (for a time) from His father by bearing our sin. Never before had anyone’s god shown such love, to the point of becoming just as dirty and detestable as the world in order to redeem us. In fact, no other god written about has ever been interested in redeeming us at all, but rather in controlling us. They hold our souls for ransom, where Jesus became a ransom for our souls. Other gods offered asceticism, while Jesus offered freedom. He knew we would never be worthy, and for a time he stooped to our level of filth so that we could be raised to His level of purity.

One can’t even begin to imagine the sorrow that must have been on Jesus as he felt our dirt on him for the first time. A righteous and holy God picked up our filth and wore it as clothing. Our best fiction authors can’t come up with stuff this good. At the very least, Jesus did what no other gods worshipped by the people ever did: dwelled among the people, became common among the people, and sacrificed himself for the people.

He’s Nothing We Would Have Expected God to be

Jesus was a different savior than anyone had expected, as evidenced by many people’s hatred of him during that time and the Pharisees’ plots to kill him. The Jews were expecting a savior who would deliver them from the Roman government, who would validate all of their religious traditions, and who would reward those who had kept the law – much like the other gods written about and worshipped, who required obedience over faith. If Jesus had had a publicist, they probably would have told him that this was the god he needed to become. He could have lived a very rich and worshipped life, but he chose the road he was destined for.

Jesus came without political motivation. He did not destroy the government, and he did not set himself up as an idol to worship. Jesus’ mission was completely incomprehensible to even his own disciples until the opportune time, yet was completely logical. What may have seemed like the important issues of the time were really not very important in the grand scheme of eternal salvation. Jesus came with a single purpose: to offer himself up as a sacrifice to God for our reconciliation. He stuck to the plan without marketing it, without asking for recognition, and without even taking his rightful place as a king on the earth.

Before Jesus came, keeping God’s law was the way to get closest to God. Many zealots, such as the apostle Paul, had been schooled in Judaism and could recite the scripture from memory. Great pride was taken by the Pharisees, who kept the letter of the law and performed their due service to God.

Nobody thought we needed someone like Jesus.

When Jesus came and fulfilled prophecy, it was only apparent after the fact that he was doing something even more important than putting a social band-aid on the political issues of the day. The sacrifice Jesus made would end up setting generations free, providing for people’s eternal needs rather than their immediate comfort while temporarily on the earth.

He’s Everything We Expected God to be

At the same time that Jesus was fulfilling a mission that nobody understood, he fulfilled over 300 prophecies that were written about the Messiah. This started with the time and place of his birth, which involved a massive astronomical event; he came from the lineage of King David, came out of Egypt, was praised on a colt on the way to Jerusalem, was crucified between two thieves, and eventually rose from the dead, as prophesied. He had no control over many of these; others he fulfilled intentionally. I found one (somewhat tacky) site with many of these references documented: here.

Along with over 300 prophecies being fulfilled, this amazing man was like no other. He spent his days healing people of disease and ministering to the needs of everyone he came in contact with. He illustrated the kind of wisdom and temperance only God would have, and the supernatural knowledge and ability that could only be expected from God.

At the same time, Jesus was filled with love and compassion for people – something that modern Christianity lacks. Thousands followed him around, and Jesus made sure they were all fed. He physically touched the sick. He showed love to people that many today consider refuse. The all-powerful God of the universe established physical contact with diseased, rotting lepers and healed them, along with other sinners whom our society would mock. People came to him, and out of his compassion he healed their sick children and even raised loved ones from the dead. Even though he was on Earth for a much more important purpose, Jesus cared enough to take the time to make people’s lives better on a personal level.

Jesus was everything we would expect in a perfect human being, with the character you’d expect God to have. He had human emotion enough to weep when his friend Lazarus died, even though he knew he would later raise him from the dead. He washed his disciples’ feet and served the people he cared for. Never once did he act out of personal gain; he continued to give from the day he was born – and this same love for humanity is the sign of a true Christian.

He Overcame Death

If someone can overcome death, I’m inclined to listen to whatever they have to say. The most important characteristic of Jesus – the one that gives power and authority to the Gospel – isn’t Jesus’ death on the cross, but his resurrection from death three days later. In spite of Jesus’ amazing life, the Gospel just wouldn’t be very compelling if it had ended with Jesus dying on a cross and his disciples going back to tending sheep.

Jesus was crucified until dead and then speared through the side by a Roman soldier to make certain (there was much pressure from the Jewish Pharisees to make sure of this). In spite of this, historical accounts (including non-biblical sources) tell us that three days later (as Jesus prophesied), he moved away the stone and gloriously appeared to his disciples (after scaring a few Roman soldiers half to death). Before departing into heaven, Jesus ate with his disciples, allowed them to touch the wounds in his hands and feet, and appeared to more than 500 people.

Jesus proved his deity in allowing us (humans) to kill him, make sure he was dead, and lock him in a tomb with the government guarding it. By rising from the dead, Jesus has proven that he is surely God, and has authority even over death.

God is Still Working

What has made the truth of the Bible the most evident to me is that God isn’t dead. The story doesn’t end 2,000 years ago with some abstract performance, but rather God is alive and working in people today.

Revelation is far better than philosophy. Philosophy changes in this country every few years. Many believe people like Martin Luther were great philosophers, but in reality they had revelation – not theory; revelation that came directly from God. I for one would rather be certain about something and not have to go back and wonder if I’ve made a mistake in my thinking. It is revelation that opens the eyes of many individuals to see just how perfect God is, which immediately makes you see how deficient even his most devout followers are. If philosophy is seeking answers, revelation is the God-given answer. He gives it out willingly, and is still giving it out today.

I know there’s a lot of hate in this world, and many people claiming to be Christians who don’t act like what even atheists could tell you about Christianity. There’s a lot of pain, judgment, bigotry, and we’re in uncertain times. You’ll know the real Christians by their love through all of this. They’re not the ones holding up “God hates fags” signs. They’re not the ones trying to put Muslims in concentration camps. They’re the quiet ones out there doing the two things that Jesus told them to do: love God, and love others. Unconditionally. No matter what their religion or sexual orientation. If there’s one premise the entire Bible can be summarized into it’s this: God is love. It’s his followers that fall short of that, not him.

If a huge cosmic mistake has been made, and there really is no God, then I haven’t missed much. I’ve benefited greatly from the wisdom of the Bible and have lived a good life because of it. I’ve prospered both spiritually and financially from trusting God and applying Christian principles to my life, which has in turn allowed me to support my church financially and raise normal children. The quality of my life is much higher than it’s ever been, and I will some day die a very fulfilled individual. I have no doubt, however, that the God in me is real, and is doing some amazing things in this world.

On NCCIC/FBI Joint Report JAR-16-20296

Social media is rife with analysis of the FBI joint report on Russian malicious cyber activity, and whether or not it provides sufficient evidence to tie Russia to the election hacking. What most people are missing is that the JAR was not intended as a presentation of evidence, but rather a statement about the Russian compromises, followed by a detailed scavenger hunt for administrators to identify the possibility of a compromise on their own systems. The data included indicators of compromise, not the evidentiary artifacts that tie Russia to the DNC hack.

One thing that’s been made clear by recent statements from James Clapper and Admiral Rogers is that they don’t know how deep inside American computing infrastructure Russia has been able to get a foothold. Rogers cited as his biggest fear the possibility of Russian interference by injection of false data into existing computer systems. Imagine the financial systems that drive the stock market, criminal databases, driver’s license databases, and other infrastructure being subject to malicious record injection (or deletion) by a nation state. The FBI is clearly afraid that Russia has penetrated more systems than we know about, and has put out pages of information to help admins go on the equivalent of a bug bounty.

Everyone knows that when you open a bug bounty, you get a flood of false positives, but somewhere in that mess you also get some true positives – some real data. What the government has done in releasing the JAR is make an effort to expand its intelligence by having admins look for (and report on) activity that looks and smells like the same kind of activity they found happening at the DNC. It’s well understood this will include false positives; the Vermont power grid was a great example. False positives help them too, because they shore up the indicators being used by providing more data points to correlate. So whether they get a thousand false positives or a few true ones, all of the data they receive helps to firm up their intelligence on Russia, including indicators of where Russia’s interests lie.

Given that we don’t know how strong a grasp Russia has on our systems, the JAR created a Where’s Waldo puzzle for network admins to follow, highlighting some of the looser indicators of compromise (IP addresses, PHP artifacts, and other weak data) that don’t establish a link to Russia, but that make perfect sense for a network administrator to use in finding evidence of a similar compromise. The indicators that tie Russia to the DNC hack were not included in the JAR and are undoubtedly classified.

There are many good reasons not to release your evidentiary artifacts to the public. For starters, tradecraft is easy to alter. The quickest way to get Russia to fall off our radar is to tell them exactly how we’re tracking them, or what indicators we’re using for attribution. It’s also a great way to get other nation states to dress up their own tradecraft to mimic Russia’s, throwing off our attribution of their activities. Secondly, it releases information about our [classified] collection and penetration capabilities. As much as Clapper might like to release evidence to the public, the government has to be very selective about what gets released, because it speaks to our capabilities. Both Clapper and Congress have acknowledged that we have a “cyber presence” in several countries and that those points of presence are largely clandestine. In other words, we’ve secretly hacked the Russians, and probably many other countries, and releasing the evidence we have on Russia could burn those positions.

Consider this: perhaps we have collection not only from the DNC’s systems, located in the United States, but also from other endpoints inside Russia (or other countries) – from C2 servers, or even uplinks directly back to the Kremlin. Perhaps we can account for the entire picture based on global collection of traffic, but releasing evidence of that would directly hamper our ability to perform these types of collection in the future. There’s no doubt that Clapper is being very careful about what he says. If we can intercept comms of Russian leaders celebrating Trump’s election, we can likely also intercept the network traffic coming back to the Kremlin.

Looking at how various agencies are in agreement on this subject, and given the FBI’s recent and obvious agenda of influencing the elections themselves in the Republicans’ favor, it will not surprise me at all to find that there is credible evidence linking Russia to all of this. While it’s possible, I don’t get the impression that the FBI is simply trying to wag the dog to distract from its own proclivities. CrowdStrike’s involvement certainly helps to make the findings believable. At the same time, we’ll probably never hear about much of it directly. What the government could do, and should do, is commission an independent peer review of both CrowdStrike’s findings and its own; this would allow it the luxury of continuing to compartmentalize the classified indicators and artifacts it has established, while also building the confidence of the general public. There are a number of third-party research arms capable of doing this. To name a few: MITRE Corporation has a long history of working with the intelligence community, the in-house experience to peer-review these findings, and the clearances already in place to make sure the data is never leaked. MIT Lincoln Laboratory also has a cyber arm more than capable of reviewing this data, as do a number of universities already doing this type of work for the government.

We don’t ever need to see the data, at least until the indicators and the capabilities behind them become obsolete. In fact, even if we saw the data today, most information security experts still wouldn’t be able to agree on it. To interpret this data correctly, you need not only expert cyber warfare experience, but also years of intelligence on Russia (and maybe other countries), full knowledge of our capabilities and where our points of presence are, and a lot of other intel that will likely always remain classified. Giving the evidence we have on the DNC attack to security experts, without the rest of the intelligence to go with it, would be like giving spaghetti to a baby. That’s why we both need and are benefitting from a Director of National Intelligence on this matter.

What we do need to see, however, are independent reviews by people with the right experience. Look to the FFRDCs for that kind of expertise. Many of the experts in this space are seasoned career intelligence people – detached enough from government to be impartial in their research, but close enough to be able to review the intel that the security community at large will never see.

Three Recommendations to Harden iOS Against Jailbreaks and Malware

Apple has been fighting for a secure iPhone since 2007, when the first jailbreaks came out about two weeks after the phone was released. Since then, they’ve gotten quite good at keeping the jailbreak community on the defensive side of this cat and mouse game, and have hardened their OS to an impressive degree. Nonetheless, as we see with every release, there are still vulnerabilities and tomhackery to be had. Among the most notable recent exploits, iOS 9 was patched for a WebKit memory corruption vulnerability that was used to deploy the Trident / Pegasus surveillance kit on selected nation-state targets, and Google Project Zero recently announced plans to release a jailbreak for iOS 10.1 after submitting an impressive number of critical vulnerabilities to Apple (props to Ian Beer, who should be promoted to wizard).

I’ve been thinking about ways to harden iOS against jailbreaks, and came up with three recommendations that would up the game considerably for attackers. Two of them involve leveraging the Secure Enclave, and one is an OS hardening technique.

Perfect security doesn’t really exist, of course; it’s not about making a device hack proof, but rather increasing the cost and time it takes to penetrate a target. These ideas are designed to do just that: They’d greatly frustrate and upset current ongoing jailbreak and malware efforts.

Frustrating Delivery and Persistence using MAC Policy Framework

The MAC Policy Framework (macf) is a kernel-level access control framework originally written for TrustedBSD that made its way into the xnu kernel used by iOS and macOS. It’s used for sandboxing, SIP, and other security functions. The MAC (mandatory access control) framework provides granular controls over many aspects of the file system, processes, memory, sockets, and other areas of the system. It’s also a component of the kernel I’ve spent a lot of time researching lately for Little Flocker.

Rooting iOS often requires gaining kernel execution privileges, at which point, in most cases, all bets are off – you can, of course, patch out macf hooks. Gaining kernel execution, however, can be especially tricky if you’re depending on an exploit chain that performs tasks that can be thwarted using macf before your kernel code gets off the ground. It can also force an attacker to increase the size and complexity of their payload in order to successfully disable it, all of which takes time and increases cost. Should an attacker still succeed, disabling macf will leave sandboxes and a number of other iOS features broken – features a jailbreak wants to leave intact. In short, it would require a much more complex and intricate attack to whack macf without screwing up the rest of the operating system.

For example, consider a kernel level exploit that requires an exploit chain involving writing to the root file system, injecting code into other processes (task_for_pid), loading a kext, or performing other tasks that can be stopped with a macf policy. If you can prevent that task_for_pid from ever happening, then that exploit chain might not be able to get off the ground to make the rest possible. Should the attack succeed in spite of this added security, you’ve now forced the attacker to go digging pretty deep in the kernel, find the right memory addresses to patch out macf, and invest a lot of time to be sure their jailbreak doesn’t completely break application sandboxing or other features. In other words, it takes a lot of work to break macf without also breaking third party apps and other features of the iPhone. Adding some code to sandboxd to test macf would also be extra gravy; if macf is compromised and that causes sandboxd to completely break, the user is going to notice it and perhaps find their phone unusable (which is what you’d want if a device is compromised).

Apple understands that if you can keep an exploit chain from getting off the ground, you can frustrate attempts to gain kernel execution. For example, Apple mounts the root partition as read-only; it’s trivial to undo this, as is demonstrated by any jailbreak. All you need is root – not even kernel.  But what about macf? Using the MAC Policy Framework can prevent any writes to the root file system at a kernel level, and can even prevent it from being mounted as read-write except by a software update. MAC is so well written that even once you’re in the kernel, opening a file (vnode_open) still invokes macf hooks; you’ll have to go in and patch all of those hooks out first in order to disable it. This means that lower down on your exploit chain, your root user won’t be able to gain persistence without first performing surgery in the kernel (and likely breaking the rest of the OS).

But wait, macf can do a heck of a lot more than just file control. Using macf, you can prevent unauthorized kexts from loading, you can prevent processes (like cycript and mobile substrate) from attaching to other processes (task_for_pid has hooks into macf), you can even prevent signals, IPC, sockets, and a lot more that could be used to exploit the OS… there’s a whole lot you can do to frustrate an exploit chain before it even gets off the ground by adding some carefully crafted macf policies into iOS that operate on the entire system, and not just inside the sandbox.
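As a concrete (and heavily simplified) illustration, here is a toy user-space model of how stacked policies veto operations – purely illustrative Python; the real framework is C code inside xnu, and these policy functions are invented for the sketch, not actual macf hooks:

```python
# Toy model of MAC policy checks. Every operation consults each
# registered policy; a single DENY verdict blocks the operation.
ALLOW, DENY = 0, 1

def rootfs_write_policy(op, **kw):
    # Deny writes under the system partition, even for root.
    if op == "vnode_write" and kw.get("path", "").startswith("/System"):
        return DENY
    return ALLOW

def task_for_pid_policy(op, **kw):
    # Deny attaching to other processes (mobile substrate-style injection).
    if op == "task_for_pid" and kw.get("target") != "self":
        return DENY
    return ALLOW

policies = [rootfs_write_policy, task_for_pid_policy]

def mac_check(op, **kw):
    """Mirror of a macf check: the operation proceeds only if no policy denies."""
    return all(policy(op, **kw) == ALLOW for policy in policies)
```

Under a model like this, an exploit chain that needs to write to the system partition or call task_for_pid is vetoed before it can ever stage its kernel payload.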

Apple has yet to take full advantage of what macf can do to defend against an exploit chain, but it could greatly frustrate an attack. Care would have to be taken, of course, to ensure that mechanisms like OTA software updates could still perform writes to the file system and other such tasks; this is trivial to do with macf.

Leveraging the SEP for Executable Page Validation

The Secure Enclave (SEP) has a DMA path to perform very high speed cryptography operations for the main processor. It’s also responsible for unwrapping class keys and performing a number of other critical operations that are needed in order to read user data. Leveraging the SEP’s cryptographic capabilities could be used to ensure that the state of executable memory has not been tampered with after boot. I’ll explain.

As I said earlier, the system partition is read-only and remains read-only for the life of the operating system (that is, until it’s upgraded). Background tasks and other types of third-party software don’t load until after the device has booted, and usually not until the user has authenticated. That means that somewhere in the boot process there is a predictable machine state, unique to the version of iOS running on the device, at least as far as executable pages are concerned.

Whenever a new version of iOS is loaded onto the device, the update process could set a flag in the SEP so that on next reboot, the SEP will take a measurement of all the executable memory pages at a specific time when the state of the machine can be reliably reproduced; this is likely after the OS has booted but before the user interface is presented. These measurements could include a series of hashes of each page marked executable in memory, or possibly other types of measurements that could be optimized. These measurements get stored in the SEP until the software is updated again or until the device is wiped.

Every time iOS boots after this, the same measurements are taken of all executable pages in memory. If a rootkit has been made persistent, the measurements will not match, and the SEP could refuse to unlock class keys, which would leave the user at a "Connect to iTunes" screen or similar.
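The measure-at-a-known-point, then compare-on-every-boot flow above can be sketched as follows. This is an illustrative model only: the class names and methods are hypothetical, "pages" are plain byte strings, and SHA-256 stands in for whatever optimized measurement the SEP would actually use.

```python
import hashlib

def measure(pages):
    """One SHA-256 digest per executable page, in a stable order."""
    return [hashlib.sha256(p).hexdigest() for p in pages]

class SEPModel:
    """Toy model of the SEP storing a baseline measurement of executable
    memory and gating class-key release on reproducing it at boot."""
    def __init__(self):
        self.baseline = None

    def record_baseline(self, pages):
        # Taken once, on the first boot after a software update sets the
        # flag, at a reproducible point in the boot process.
        self.baseline = measure(pages)

    def unlock_class_keys(self, pages):
        # On every subsequent boot, re-measure and compare; any persistent
        # modification to executable pages changes at least one digest.
        return measure(pages) == self.baseline
```

A persistent rootkit that patches even one byte of a measured page flips that page's digest, so the comparison fails and the model refuses to release class keys.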

This technique may not work on some tethered jailbreaks that actively exploit the OS post-boot, but nobody really cares about those much anyway; the user is aware of them, rootkits or malware can't leverage them without the user's knowledge, and the user is effectively crippling their phone to use a tethered jailbreak. It does, however, protect against code that gets executed or altered while the system is booting, including detecting kernel patches made in memory. An attacker would have to execute their exploit after the measurements are taken in order for the code to go unnoticed by the SEP.

Care must be taken to ensure that the technique used to flag a software update cannot be reproduced by a jailbreak; this can be done with proper certificate management of the event.

Encrypt the Root Partition and Leverage the SEP’s File Keys

One final concept that takes control out of the hands of the kernel is to rely on the SEP to prevent files from being created on the root partition, by encrypting the root partition under a quasi class key in such a way that the SEP could refuse to wrap new file keys for files on the root partition. Presently, the file system's keys are all stored in effaceable storage; however, if the root file system's keys were treated as a kind of class key inside the SEP, the SEP could refuse to process any new file keys for that specific class short of a system update. Even should Trident be able to exploit the kernel, it theoretically shouldn't be able to gain persistence in this scenario without also exploiting the Secure Enclave, as it couldn't create any new files on the file system; it may also be possible to prevent file writes in the same way, with some changes.
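The idea of a sealed rootfs class key can be modeled like this. Everything here is hypothetical and simplified: the class names are invented, the XOR "wrap" is a stand-in for real AES key wrapping, and the update flag models whatever certificate-validated update event would temporarily unseal the class.

```python
import os

class SEPKeyStore:
    """Toy model of the SEP treating the root filesystem's key as a class
    key it can refuse to service outside of a software update."""
    def __init__(self):
        # Per-class key-encryption keys held inside the SEP.
        self.class_keys = {"rootfs": os.urandom(32), "user": os.urandom(32)}
        # Only a (certificate-validated) software update sets this.
        self.update_in_progress = False

    def wrap_file_key(self, file_key, cls):
        # Creating any new file requires the SEP to wrap its per-file key
        # under the class key. For the rootfs class, refuse unless an
        # update is in progress -- so a kernel-level exploit alone cannot
        # persist new files on the root partition.
        if cls == "rootfs" and not self.update_in_progress:
            raise PermissionError("SEP: rootfs class is sealed")
        kek = self.class_keys[cls]
        return bytes(a ^ b for a, b in zip(file_key, kek))  # stand-in wrap
```

Under this model, even code running with kernel privileges has no path to a wrapped file key for the root partition; it would have to compromise the SEP itself to gain persistence there.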


The SEP is one of Apple's most powerful weapons. Two of these three solutions recommend using it as a means to enforce a predictable memory state and root file system. The third could lead to a more hardened operating system in which an exploit chain could be frustrated, and/or forced to mount a much more elaborate kernel attack in order to succeed.




