Tenable recently sponsored a survey on BYOD (bring your own device) and mobile security run by our friends at the LinkedIn Information Security Community. Given that mobile comes up frequently when we speak with customers about their challenges with unknown assets and shadow IT, we want to share a few highlights, let you know how to download the full report, and invite you to attend an upcoming webcast that digs into the details of the results.
BYOD and mobile growth
In the study, the majority of respondents (72%) had reached the stage where BYOD was available to all (40%) or some (32%) employees. This matches the number seen in similar studies and is expected to grow even higher in the next few years.
The majority of respondents (72%) had reached the stage where BYOD was available to all (40%) or some (32%) employees
Employees are able to do a lot more with their mobile devices now. While email, calendar and contact management were the most common applications (used by 84% of respondents), many respondents reported that other employee productivity applications were also available via BYOD and mobile, including:
45% - document access and editing
43% - access to SharePoint and intranets
28% - access to SaaS applications like Salesforce
With the number of mobile devices increasing and the types of activities on those devices getting more complex, organizations should expect attackers to target mobile devices for data breaches, intrusions and malware incidents.
Mobile threats and breaches
39% of respondents reported that within their organization, BYOD or corporate-owned devices had downloaded malware at some point in the past. The actual number could be higher, though, because another 35% of respondents said they were “not sure” whether malware had ever been downloaded.
39% reported that within their organization, BYOD or corporate-owned devices had downloaded malware
The use of mobile malware by attackers is definitely on the rise. In February of this year, Check Point announced that for the first time, mobile malware was one of the ten most common attack types seen in its threat intelligence database. For example, the previously unknown HummingBad malware targets Android devices, installs malicious apps and enables malicious activity such as key-logging, which can help attackers steal credentials that could be used to gain access to corporate networks and data.
The survey also reveals that security breaches using BYOD and mobile devices are on the rise, with 21% of respondents saying they experienced a security breach through the use of BYOD or mobile devices. However, as with the mobile malware responses, the actual number of breaches could be higher, because 37% answered that they “weren’t sure.”
21% of respondents said they had experienced a security breach through the use of BYOD or mobile devices
It’s not surprising that 35% of respondents said they didn’t know if mobile malware was present and 37% said they didn’t know if they’d had a mobile breach. Gaining visibility into device status is a huge challenge with mobile security, simply because the devices are so transient. They move from 3G to 4G to wireless networks seamlessly and are turned off and on at random times, making it difficult to include them in a security management program. Technologies like mobile device management (MDM) and passive detection will become increasingly important to ensure mobile security.
Managing mobile device security
The top three tools mentioned in the survey to manage mobile device security were:
43% - mobile device management (MDM)
28% - endpoint security tools
27% - Network Access Controls (NAC)
At Tenable, we believe that whatever tool you decide to use to manage mobile security, it’s important that it integrates with the other security solutions you have in place and fits seamlessly into your overall vulnerability management program. Look for integration points like the example in the screenshot below, where MDM data can be fed into your vulnerability management / continuous monitoring solution.
This Tenable SecurityCenter ContinuousView™ dashboard incorporates data from both passive activity monitoring and MDM systems to provide a view into mobile devices on a network and their associated vulnerabilities
More information
I have touched on just a few of the findings from the 2016 Spotlight Report; it contains much more data on topics such as breach recovery, user and application behavior, supported platforms, and typical support. To learn more, download the full report or join the upcoming webcast.
I recently had the privilege to attend the National Institute of Standards and Technology (NIST) Cybersecurity Workshop 2016, held at the NIST headquarters in Gaithersburg, Maryland on April 6-7, 2016. One thing caught my attention right away: there were two digital clocks prominently displayed on either side of the auditorium. Both clocks were synchronized, and according to my phone, they were accurate to the second. It makes sense because NIST is the keeper of the nation’s atomic clock that will neither gain nor lose one second in about 300 million years. Talk about being a stickler for precision!
The CSF is a framework, and not a standard
The second thing that caught my attention was the “tailorability” designed into the Framework for Improving Critical Infrastructure Cybersecurity (CSF). At first, this tailorability struck me as being inconsistent with their precise approach to timekeeping and their approach to developing standards. However, after thinking more about it, I realized the CSF is a framework, and not a standard. NIST developed the CSF in conjunction with industry to be tailorable so it would precisely meet the needs of wide-ranging organizations. The CSF consists of three primary parts: Core, Implementation Tiers, and Profiles, each of which supports tailoring. Let’s look at some of the ways an organization can tailor the CSF to meet their precise requirements.
The Core provides a set of activities to achieve specific cybersecurity outcomes. At the most detailed level, the outcomes are control objectives, and the CSF specifies 98 specific outcomes (or control objectives). However, the CSF gives adopting organizations wide latitude in the specific controls they implement. Matthew Barrett, NIST’s Cybersecurity Framework Program Manager, said, “If you like your framework, you can keep your framework.” The CSF includes references to controls from a number of other frameworks, including COBIT, ISO/IEC 27001:2013, CIS Critical Security Controls, and NIST SP 800-53 Rev. 4. Adopting organizations are free to borrow from other frameworks to tailor controls as needed to meet their specific needs, and can apply different controls to different systems, based on risk assessment.
Implementation Tiers help an organization determine the degree of sophistication their cybersecurity program needs to achieve. At first glance, the four tiers – Partial, Risk Informed, Repeatable, and Adaptive – look similar to a maturity model. However, the CSF explicitly says, “Tiers do not represent maturity levels.” The concept of maturity includes a natural progression from lower levels to the highest level. However, the CSF does not assume that maturation to the highest level is appropriate for every organization nor for every line of business within an organization. It explicitly says, “Progression to higher level Tiers is encouraged when such a change would reduce cyber security risk and be cost effective,” and “The Tier selection process should be informed by an organization’s current risk management practices, threat environment, legal and regulatory requirements, business/mission objectives, and organizational constraints.”
Given an organization’s environment, mission, and resources, moving to the highest Tier may not be justified. Organizations are encouraged to adopt the Tiers that meet their specific needs. Again, the CSF allows tailoring to achieve a precise organizational fit while still attaining security objectives.
Profiles are snapshots of the status of an organization’s Core functions and supporting categories and sub-categories. The two classes of Profiles are Current Profile, representing the as-is state, and the Target Profile, representing the as-desired state. Profiles are tailorable. They align Core functions, categories, and sub-categories with an organization’s business requirements, risk tolerance, and resources. Comparison of the current and target Profiles helps identify shortcomings that must be addressed to meet an organization’s cyber risk management objectives. This comparison forms the basis for an improvement plan tailored to the specific needs of an organization.
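To make the Profile comparison concrete, here is a minimal sketch (my own illustration, not part of the CSF) of how a Current-to-Target comparison could be automated. The subcategory IDs are real CSF identifiers, but the 0-4 implementation scores are hypothetical:

```python
# Compare a hypothetical Current Profile against a Target Profile and
# surface the subcategories that fall short. Scores are invented for
# illustration; the CSF itself does not prescribe a scoring scale.

def profile_gaps(current, target):
    """Return subcategories where the current score is below the target."""
    return {
        subcat: (current.get(subcat, 0), goal)
        for subcat, goal in target.items()
        if current.get(subcat, 0) < goal
    }

current = {"ID.AM-1": 3, "PR.AC-4": 1, "DE.CM-8": 2}  # as-is state
target  = {"ID.AM-1": 3, "PR.AC-4": 3, "DE.CM-8": 4}  # as-desired state

for subcat, (now, goal) in sorted(profile_gaps(current, target).items()):
    print(f"{subcat}: {now} -> {goal}")
```

A gap report like this is exactly the kind of input an organization-specific improvement plan can be built from.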
Adopting a framework provides a common language
Much of the value of adopting the CSF lies in the self-assessment and planning processes relative to the Core, Tiers, and Profiles. These processes typically involve executives, line-of-business leaders, and technical staff, and they result in a better understanding of the precise security controls needed to meet the organization’s specific risk management objectives. The activities drive organizations to develop an organization-specific awareness of their business objectives, their threats, and the actions they should take to manage risk. Adopting a framework also provides a common language that the different parts of an organization can use to discuss cyber security, enabling better communication, collaboration and achievement of goals and objectives.
If your organization plans to use the CSF, be prepared to tailor it to your organization’s precise needs
Even though the CSF was published by NIST, the National Institute of Standards and Technology, the CSF is a framework, not a standard. If your organization plans to use the CSF, be prepared to tailor it to your organization’s precise needs, and look to Tenable to help you automate the operation and assessment of CSF’s technical controls. Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) includes multiple dashboards and Assurance Report Cards that you can easily tailor to give you the precise visibility you need.
We are pleased to announce the release of Nessus 6.6, and we’re excited about the new and updated capabilities it delivers. Here are some highlights.
Nessus Cloud workflow updates
If you’re currently using Nessus Cloud, you’ll notice a few changes to the user interface with this release. All the capabilities that Nessus Cloud provides - vulnerability scanning, configuration assessment, malware detection, etc. - continue to be available, although the exact steps to use them are changing slightly. These changes will make it easier for you to find and use key capabilities. Look for changes in these areas:
Scans
When you first log in to your updated Nessus Cloud, you might ask yourself, “Where’s my scan data?” Don’t worry - it’s all still there. Where you’d normally see a list of scan jobs, you’ll now see a new dashboard view that gives you a quick overview of scan results. The list of scans is still accessible by clicking the Scans menu at the top and then the My Scans link in the left menu.
If you want to run a new scan, the process hasn’t changed, but you’ll find the New Scan button moved to the right side of the Nessus Cloud interface:
Policies
While not as big a change as those to Scans and Agents, if you’re used to accessing Policies through the top menu, you’ll now find it a bit further down the screen, under the Resources section. Take a look at the Scans screen snap as an example.
Agents
With this release, you should find it much easier to manage Nessus Agents, including setting up scans for Nessus Agents. For example, all of the agent settings can now be quickly accessed through the new Resources area on the main scan window.
You’ll also notice that we’ve organized all of the scan templates - including the ones available for agent scans - into tabs. So when you want to set up a new scan for Nessus Agents you’ve deployed, it will be quick and easy to get started with the right scan templates.
Nessus Agents updates
While we're on the topic of Nessus Agents, with Nessus 6.6, agents now support Windows 10 and Debian 8. Many of you have been asking for this, especially Windows 10 support.
New platform support
In addition to the Windows 10 and Debian 8 support for Nessus Agents noted above, with the 6.6 release, Nessus scanners and Nessus Professional also support Windows 10, Debian 8 and Kali 2.0.
New configuration audits
And finally, our compliance research team has added several brand new configuration audits in Nessus 6.6. Look for articles later this week from our director of compliance research, Mehul Revankar, with details about new configuration audits for Docker and OpenStack.
More information
If you’re a current Nessus Cloud customer and want to see more about the workflow changes, log in to the Tenable Support Portal and click the Training Videos link in the options on the left side of the screen. Matt McClellan, the product manager for Nessus, recorded videos where you can see exactly what’s changed and what’s new.
Information about Nessus Agents, including white papers and videos, is available on our website.
If you’d like to try Nessus Cloud, Nessus Manager or Nessus Professional, request an evaluation.
“It worked in Dev, it works in Dev. Don’t know why it’s not working in production. It’s an Ops problem now.”
Many of us have lived through a failed production deployment of an application at least once, and unfortunately for some, the memories of those failures can haunt us for the rest of our lives. But thanks to a relatively old technology that has recently been gaining traction, such failures could soon be a thing of the past. Welcome to containerization, or as most people know it, Docker containers.
Why Docker?
Developers have long sought a system with which they could build a piece of software once, package it, and then run it anywhere, without having to worry about dependencies, library versions, host OS, underlying hardware, etc. Docker containers are the perfect solution.
Operations folks, on the other hand, have long sought a way to set up dev/lab environments consistently and repeatably (ideally in a scripted fashion), in an environment that closely resembles production. That way, when code gets deployed into production, they can be reasonably assured it won’t blow up; and even if it does, developers can quickly reproduce the issue and ship a fix. Docker containers address that need as well.
But that’s not all. Docker containers are built from stripped-down versions of base operating systems and contain only a bare minimum of system libraries and supporting programs. That makes them far more lightweight than virtual machines (VMs), avoiding much of the overhead that VMs carry, so it’s possible to pack more containers than virtual machines onto the same physical host.
Plus, Docker supports union file systems such as UnionFS, which enable multiple file systems to be layered into a single file system. So remember that dream of a LAMP base image you always wanted to build? Yeah, that’s a breeze now.
Given the benefits of using Docker, it’s easy to see why developers are flocking towards “dockerizing” their applications. But before they get too far ahead, there is one pesky little thing to take care of—security.
Securing Docker
By leveraging kernel-level features such as namespaces and cgroups, Docker containers already provide some basic security right out of the box. But that’s not sufficient. Users need to take additional steps to lock down the kernel, reduce the attack surface of the Docker daemon and harden container configurations to have a truly secure setup.
How can Tenable help?
Along with Nessus 6.6, Tenable released several updates in the Nessus plugin feed to audit Docker host(s) and containers. Here are some simple steps you can take to secure Docker installs.
Docker service detection and container enumeration
The first step towards securing Docker installs is to actually find them in your organization. Tenable recently released a Docker Service Detection plugin (#89110), which detects Docker installs and, if available, enumerates all the active containers on that host. Here’s a sample result:
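Separately from the plugin’s own logic (which is not reproduced here), a rough sketch of the kind of local indicators that reveal a Docker install might look like this. The paths checked are common defaults on Linux hosts, not an exhaustive or authoritative list:

```python
# Illustrative local Docker-presence check: looks for the docker CLI on
# PATH, the default Engine unix socket, and a daemon config file.
# This is NOT how plugin #89110 works; it is only a sketch of the idea.

import os
import shutil

def detect_docker():
    """Return a dict of simple boolean indicators that Docker is present."""
    return {
        "cli_on_path": shutil.which("docker") is not None,
        "unix_socket": os.path.exists("/var/run/docker.sock"),
        "daemon_config": os.path.exists("/etc/docker/daemon.json"),
    }

indicators = detect_docker()
print("Docker likely installed:", any(indicators.values()))
```

A real scanner would go further and query the Engine API to enumerate active containers, which is what the plugin reports when credentials allow it.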
Patch Docker host vulnerabilities
Docker containers share the kernel with the host OS, which means that kernel-level vulnerabilities now gain a whole new level of significance on Docker hosts.
It is therefore important to run a comprehensive credentialed patch audit against Docker hosts to ensure they are up to date with the latest patches and aren’t missing any security fixes. Nessus supports local security checks for a variety of Linux distributions. So regardless of which base OS you pick for a Docker host, there is a good chance Nessus already has support for it.
CIS audit for Docker
The next step is to harden the Docker host itself: enforce strict file and directory permissions, limit the services running alongside the Docker daemon, limit user access to the Docker daemon, keep an eye on container sprawl, and so on.
CIS released an excellent benchmark for Docker v1.6+, which covers everything I just referred to and a lot more. Tenable added support for a CIS Docker v1.6 audit in Nessus 6.6. Here’s a sample result:
Audit Docker containers
Nessus can audit the configuration of Docker containers as well. Just select an audit and run a scan against the Docker host; Nessus will automatically identify applicable containers and audit their configuration. For example, if you run a scan with an application audit such as Apache or MySQL, Nessus will automatically identify the containers running Apache or MySQL and audit only those.
Keep in mind, though, that containers are stripped-down versions of the base OS. If you run a scan against a container with an audit meant for the complete base OS, you may see results that are not applicable, such as checks for files or binaries that don’t exist in the container. We therefore encourage you to customize your audits for Docker containers and strip out the irrelevant pieces.
Once the scan finishes, Nessus will list containers under the Hosts tab in a special format: container-name.docker.container. Here’s an example:
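That naming convention also makes container findings easy to separate programmatically in exported results. A hypothetical post-processing sketch (the host names here are invented):

```python
# Split a Nessus host list into container pseudo-hosts and ordinary hosts
# using the container-name.docker.container convention described above.

SUFFIX = ".docker.container"

def split_hosts(hosts):
    """Return (container_names, other_hosts) from a scan's host list."""
    containers = [h[: -len(SUFFIX)] for h in hosts if h.endswith(SUFFIX)]
    others = [h for h in hosts if not h.endswith(SUFFIX)]
    return containers, others

hosts = ["web01.example.com", "mysql-prod.docker.container",
         "apache-fe.docker.container"]
containers, others = split_hosts(hosts)
print(containers)  # ['mysql-prod', 'apache-fe']
print(others)      # ['web01.example.com']
```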
Wrap-up
In our world, new technologies come and go all the time. Yesterday it was virtualization, today it is containerization, and tomorrow it will be something else. Tenable will adapt, evolve and align with your needs as new technologies come online. Support for auditing Docker is just one more new technology that we have added to your arsenal.
New Scan Policies, Plugins and Dashboard for CVE-2016-2118 & CVE-2016-0128
No matter which name you prefer, Badlock or Sadlock, Tenable has you covered for the recently disclosed CVE-2016-2118 (SAMR and LSA man-in-the-middle attacks against Samba) and its Windows counterpart CVE-2016-0128/MS16-047 (Windows SAM and LSAD Downgrade Vulnerability). Whether you run Nessus®, SecurityCenter™, SecurityCenter CV™ or the Passive Vulnerability Scanner™, Tenable can determine if you are at risk.
According to Badlock.org, the security vulnerabilities can mostly be categorized as man-in-the-middle or denial-of-service (DoS) attacks. A successful man-in-the-middle attack permits execution of arbitrary Samba network calls in the context of the intercepted user, allowing an attacker to view or modify secrets within an AD database (including user password hashes), shut down critical services, or modify user permissions on files or directories. A DoS attack against the Samba service is also possible for an attacker with remote network connectivity.
Affected versions of Samba are:
3.6.x
4.0.x
4.1.x
4.2.0-4.2.9
4.3.0-4.3.6
4.4.0
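For quick triage of Samba version banners gathered elsewhere, the affected ranges above can be encoded in a few lines. This is only a sketch of the version list as published; it does not account for vendor backports, which the Nessus plugins do handle:

```python
# Check a Samba version string against the Badlock affected ranges
# listed above (3.6.x, 4.0.x, 4.1.x, 4.2.0-4.2.9, 4.3.0-4.3.6, 4.4.0).

def badlock_affected(version):
    """Return True if the version string falls in an affected range."""
    parts = tuple(int(p) for p in version.split("."))
    major_minor = parts[:2]
    if major_minor in ((3, 6), (4, 0), (4, 1)):
        return True           # every patch level of these series is affected
    if major_minor == (4, 2):
        return parts[2] <= 9
    if major_minor == (4, 3):
        return parts[2] <= 6
    if major_minor == (4, 4):
        return parts[2] == 0
    return False

print(badlock_affected("4.2.9"))   # True
print(badlock_affected("4.4.1"))   # False
```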
Regardless of where you stand in the “Sadlock” debate over whether the hype warranted naming this vulnerability, Tenable can provide the visibility you need to prioritize your Badlock remediation efforts.
The Tenable response
Nessus
Impacted operating system vendors are making updates available. Tenable has issued a series of local and remote Nessus® plugins to detect the presence of affected versions of Samba or Windows:
MS16-047: Security Update for SAM and LSAD Remote Protocols (3148527) (Badlock)
SecurityCenter
We have released a customized SecurityCenter™ dashboard to monitor, track and remediate critical assets affected by CVE-2016-2118 and CVE-2016-0128. This dashboard is automatically available via the feed to provide insight on the impact to your environment and the progress of your efforts to remediate this vulnerability.
Public cloud, private cloud or hybrid cloud—regardless of which cloud-computing model you choose, there is a good chance a part of it is already powered by an open source solution. And when it comes to open source solutions for the cloud, there isn’t a better, more stable and comprehensive solution than OpenStack.
What is OpenStack?
Started in 2010 as a joint project between Rackspace and NASA, OpenStack has become the open source cloud operating system for private cloud deployments. If it’s not clear what that means, think of it as a layer of software that glues together large pools of hardware resources (compute, storage, network) and presents them for management through a single interface, either a dashboard or APIs.
Why OpenStack?
But with public cloud providers such as Amazon AWS and Microsoft Azure offering such compelling, low-cost, comprehensive cloud solutions, one might wonder why anyone would opt for OpenStack. There are many reasons: some customers want complete end-to-end control over their own infrastructure, some want to avoid vendor lock-in, some want to escape expensive licensing fees, and some just want to put existing commodity hardware to good use. OpenStack gives them the option to install the software on their own hardware and spin up their own private cloud.
But with all that upside comes additional responsibility. When it comes to security, public cloud providers have long claimed they are responsible for security of the cloud, and the customer is responsible for security in the cloud. But with private cloud deployments, you are responsible for both security of the cloud, as well as security in the cloud.
So once you spin up your own private cloud, the next step is to secure it. And Tenable has just the right solution to help you out.
How can Tenable help?
Over the past few years, Tenable has gradually added support for auditing all major public cloud providers such as Amazon AWS, Microsoft Azure, and Rackspace. Now it's time to go private—to audit private clouds.
With the release of Nessus 6.6, Tenable has now added support for auditing an OpenStack deployment.
We are doing two things with respect to OpenStack: first, providing our customers with a snapshot of their OpenStack deployments via the REST API; and second, providing guidance for securing an OpenStack deployment based on the OpenStack Security Guide.
OpenStack deployment snapshot
Often, when traditional computing workloads transition to cloud infrastructure (public or private), it becomes hard to keep track of all the resources deployed in the cloud. This is especially true when multiple users have the privileges to provision new resources on demand, so keeping tabs on changes to active/inactive instances, tenants, users, networks and subnets since the last scan becomes important. Nessus 6.6 solves that problem with its new plugin for OpenStack. Here’s a sample result.
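As an illustration of the “changes since the last scan” idea (this is not the plugin’s actual output format), diffing two resource snapshots is straightforward; the instance names below are invented:

```python
# Diff two snapshots of deployed cloud resources to highlight what was
# provisioned or decommissioned between scans.

def snapshot_diff(previous, current):
    """Return (added, removed) resource names between two snapshots."""
    prev, curr = set(previous), set(current)
    return sorted(curr - prev), sorted(prev - curr)

last_scan = {"web-01", "db-01", "worker-03"}
this_scan = {"web-01", "db-01", "worker-04", "jumpbox"}

added, removed = snapshot_diff(last_scan, this_scan)
print("new instances:", added)    # ['jumpbox', 'worker-04']
print("gone instances:", removed) # ['worker-03']
```

The same comparison applies equally to tenants, users, networks and subnets.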
OpenStack security guide
One of the indicators of a mature platform is the existence of a security guide; it shows that the vendor cares and takes security seriously. OpenStack has had a best practice security guide for quite some time now, and we incorporated that guidance into our .audit to help harden OpenStack deployments. The audit reviews the configuration of critical files such as nova.conf, keystone.conf and many more, and makes recommendations where they diverge from best practice security guidelines. It also reviews the role-based access policies listed in policy.json files, which determine which users can access which objects. Here’s a sample result.
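As a toy version of the kind of check an .audit performs, here is a sketch that reads an INI-style OpenStack config and flags deviations. The `auth_strategy = keystone` setting is a real nova.conf option, but the expected-value table is an invented example, not the audit’s actual policy:

```python
# Parse an INI-style OpenStack config and flag options that deviate from
# a (hypothetical) hardening expectation table.

import configparser

EXPECTED = {("DEFAULT", "auth_strategy"): "keystone"}  # example policy only

def check_config(text):
    """Return (section, option, found_value) tuples that miss expectations."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    failures = []
    for (section, option), want in EXPECTED.items():
        found = cfg.get(section, option, fallback=None)
        if found != want:
            failures.append((section, option, found))
    return failures

sample = "[DEFAULT]\nauth_strategy = noauth\n"
print(check_config(sample))  # [('DEFAULT', 'auth_strategy', 'noauth')]
```

A real .audit covers far more files and settings, but the pattern (read a config, compare against guidance, report deviations) is the same.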
Setting up the scan
The .audits for OpenStack are under two categories: OpenStack, which includes REST API-based audits, and Unix, which includes security guide-based audits.
Under the Credentials/Miscellaneous tab, there is a new tab for entering credentials for an OpenStack REST API audit. The Unix audits run over SSH, as long as the IP addresses of the various nodes (for example, compute and network nodes) are specified as targets.
Wrap-up
With support for OpenStack, Nessus now audits a wide variety of cloud deployments from public cloud providers such as Amazon AWS, Microsoft Azure, and Rackspace to private deployments such as OpenStack. And we plan to add support for similar technologies when they come online.
Lack of communication between IT departments and those responsible for executing agency mission can lead to the creation of shadow IT—unauthorized and often unmanaged applications that can introduce vulnerabilities. This is something that SecurityCenter Continuous View™ (CV) can help you identify, understand and manage.
Too often there is little communication between those responsible for executing an agency’s mission and those who acquire, develop, deploy and manage the agency’s information technology. The result is that workers often do not get the IT they need.
If IT doesn’t help the staff efficiently do the job at hand they will find ways to get around the authorized IT
The agency might have a state-of-the-art network, data centers and applications, all leveraging the latest technology; but if it doesn’t help the staff efficiently do the job at hand, they will find ways to get around the authorized IT and introduce their own solutions. The result is unauthorized and often unmanaged applications that can introduce vulnerabilities into the enterprise.
The threat of shadow IT
The threat is not theoretical. In the fall of 2014, the Homeland Security Department discovered attacks at several agencies, exposing personal data of over 800,000 employees as well as customer information. Ten months later, an audit of software development processes uncovered shadow development of applications by untrained personnel that produced local applications not visible to IT management.
“Shadow IT development” describes systems built outside the official IT development process and used without official approval. As a result, they are not included in inventories of systems to be monitored and managed, leaving them unsecured.
Shadow IT is unlikely to be patched and updated, access is not controlled, and it is not monitored
Shadow development is just one source of shadow IT. The term can refer to any unauthorized or hidden technology introduced into an enterprise, including rogue access points, personal devices, unauthorized commercial applications, or servers that have simply been forgotten as networks evolve and staff leaves. These assets are unlikely to be patched and updated, secure configurations are not maintained, access is not controlled and they are not monitored. The result is a gap that the White House has called “the missing link” in government cybersecurity:
Agencies can’t secure what they can’t manage, and can’t manage what they don’t know about. This challenge represents a critical, but heretofore missing link for U.S. cyber security.
The government’s response
At a high level, the solution to shadow IT is comprehensive network discovery. Accurate, up-to-date inventories of network connections, devices, software and active IP addresses mean security teams are less likely to be caught unprepared by attacks on vulnerable assets.
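The discovery-versus-inventory comparison at the heart of this can be sketched in a few lines: anything seen on the network but absent from the managed-asset inventory is a shadow IT candidate. The addresses below are invented:

```python
# Compare discovered network addresses against the managed-asset
# inventory; unexplained addresses are candidates for shadow IT.

def shadow_candidates(discovered, inventory):
    """Return discovered addresses that no managed-asset record explains."""
    return sorted(set(discovered) - set(inventory))

discovered = {"10.0.1.5", "10.0.1.9", "10.0.2.17"}  # seen on the wire
inventory  = {"10.0.1.5", "10.0.1.9"}               # known, managed assets

print(shadow_candidates(discovered, inventory))  # ['10.0.2.17']
```

In practice the discovered set would come from active scanning and passive listening, and each candidate would then need assessment, not just enumeration.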
At a lower level, government is addressing one of the causes of shadow IT by ensuring that IT acquisition is aligned with mission. It is not enough to ensure that IT is good; it must do the job for which it is intended. The Office of Management and Budget is making this the job of the Chief Information Officer (CIO) and making sure the CIO has a seat at the right table.
In its 2015 guidance to agencies for the Federal IT Acquisition Reform Act (FITARA), OMB directed that:
...to ensure early matching of appropriate IT with program objectives, the CIO shall be a member of governance boards that include IT resources (containing 'shadow IT' or 'hidden IT'), including bureau Investment Review Boards.
Securing shadow IT
Avoiding shadow development and performing network discovery are not enough to secure your network from shadow IT. Security requires both discovery and assessment. You must be able to understand the security status of devices and software and effectively manage it. This must be done on a continuing basis, since relying on a point-in-time snapshot leaves blind spots in quickly evolving networks.
Agencies can’t secure what they can’t manage, and can’t manage what they don’t know about
Tenable SecurityCenter CV can help with finding and assessing hidden IT on your network with:
Active scanning
Closed-loop, real-time connections to the business
The Internet of Things has the potential to revolutionize the world, including healthcare. But doctors, hospitals and medical experts might want to pause before adopting this technology and evaluate the cybersecurity challenges.
The commitment to the Internet of Things (IoT) in the healthcare sector is staggering. A recent study from MarketResearch.com predicted that by 2020, the IoT market in the healthcare sector will reach $117 billion, expanding at a rate of 15 percent per year.
IoT refers to the networking of sensors and other devices to enable machine-to-machine communication. Because it can create a global web of IP-addressable devices that are often not regularly monitored or managed, IoT can greatly expand an enterprise’s threat surface.
The IoT’s intelligence, accessibility and ability to scale are not only its strengths, but also its weaknesses. According to Accenture Technology, the IoT can increase production, boost innovation and reshape the current business landscape. But this might come at the cost of cybersecurity.
Benefits and challenges
Within healthcare, the lifesaving potential of this technology makes its rapid adoption virtually irresistible. Networked devices can monitor conditions and notify healthcare providers, patients and loved ones of changes. Problems can be identified and controlled remotely. Appointments and procedures can be scheduled automatically, and records kept up-to-date and accessible to those who need them.
Networked devices can monitor conditions and notify healthcare providers, patients and loved ones of changes
Philips, a company best known for light bulbs and personal care products, has created a healthcare subsidiary to build a new generation of medical sensors.
“Philips recently created a pillbox that pops open when it’s time to take your meds, and sends a message to, say, a family member or nurse confirming that you’ve taken them.” –The Globe and Mail
The IoT pillbox, as helpful as it is, is only the tip of the iceberg for what IoT in healthcare can deliver:
Sensors like the ones used by neonatal units to monitor premature infants can be placed directly on the skin on home patients, along with high-definition cameras to monitor skin color, breathing and temperature, and alert nurses of any changes.
Smart beds now being used at New York Presbyterian Hospital can tell immediately if a patient has gotten up, and let the nursing station know.
Fitness trackers like the Fitbit, Apple Watch and others, a category that has surpassed $2 billion in revenue, not only measure heart rate, sleep patterns, diet and exercise, but could soon be integrated with healthcare providers’ systems to track recovering or high-risk patients.
Fitness trackers can also integrate with insurers to provide discounts. “U.S. insurer John Hancock (a subsidiary of Manulife) is offering clients up to 15% off premiums if they willingly hand over data that proves they lead a healthy lifestyle.” –The Globe and Mail
A recent CIO.com article cited three factors in the upward trend of IoT devices in healthcare:
Chances are you already have one. Consumer devices based on the IoT concept include the Apple Watch, fitness trackers and other commercially sold wearables.
They are getting less expensive. Sensors, a key component of IoT, will cost an average of $0.38 in 2020 as compared to $0.50 today.
They’re becoming standardized. The IPSO Alliance brings together companies such as Google, Cisco, Intel and Oracle to create standards and support “Smart Objects” technology.
For all the benefits driving its widespread adoption in healthcare, this astronomical growth of unprotected devices and data could be a heart attack waiting to happen for the healthcare industry. Recent events such as the ransomware attacks against several prominent hospitals show that medical centers are high profile targets for hackers and online criminals.
Within healthcare, the lifesaving potential of this technology makes its rapid adoption virtually irresistible
The solution
Using the IoT safely in healthcare is not necessarily difficult. Good communication, tightly controlled protocols, mapping and isolating IoT devices, and vulnerability management and analytics can all help the healthcare industry protect patients and their networks.
Communication: Hospitals and healthcare providers must communicate with each other and their patients to ensure that risks are understood and mitigated. For example, a doctor or hospital will never call to ask for personal information to “access” or “fix” medical records or devices.
Isolate IoT devices: Access to networked medical devices must be effectively controlled, and access by devices to other accounts and systems must be limited.
Keep your protocols and processes tight: Protocols and processes on networked equipment should not be enabled by default. Enabling only those that are necessary can help prevent intruders from gaining access to and control over your resources.
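The “enable only what’s necessary” principle can be sketched as a simple allowlist check. The service names below are hypothetical examples, not a real device profile:

```python
# Sketch: flag unnecessary services on a networked device.
# Both sets are illustrative; real data would come from a port scan
# or a configuration export for each device.

# Services the device actually has enabled
enabled_services = {"https", "ssh", "telnet", "ftp", "snmp-v1"}

# Services the device actually needs to function
allowlist = {"https", "ssh"}

unnecessary = sorted(enabled_services - allowlist)
for service in unnecessary:
    print(f"Disable unneeded service: {service}")
```

Anything left in `unnecessary` is a protocol an intruder could probe but that the device never needed in the first place.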
Know your metrics and where you’re vulnerable: IoT is about data and rapid interaction between devices. A solution such as Tenable SecurityCenter Continuous View™, which consolidates and evaluates vulnerability data across your organization, can prioritize security risks and provide a clear view of your security posture. Its pre-built, highly customizable dashboards and reports help organizations visualize, measure and analyze the effectiveness of their security program regardless of infrastructure.
The healthcare industry’s commitment to the Internet of Things (IoT) is staggering, but the cybersecurity implications don’t have to be. Learn more about how SecurityCenter Continuous View can help better protect your organization from cybercrime.
We’ve shared a few blog articles in recent months about shadow IT - what it is and how to manage it. We’ve also had many interesting conversations with customers and prospects about their own reasons for wanting to get better visibility into shadow IT on their networks. In this article, we’ll share the top three reasons that we hear, in no particular order.
1. You can’t secure what you can’t see
The first step in the majority of security frameworks is to inventory assets. For example, step one in the CIS Critical Security Controls (formerly the SANS Top 20) is to do an "Inventory of Authorized and Unauthorized Devices."
Organizations that follow this or another framework are following the advice "You can’t secure what you can’t see." For them, getting visibility into unauthorized devices and shadow IT is critical to laying the foundation for a comprehensive security program.
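The inventory step itself can be illustrated with a simple set difference. The addresses below are made-up examples standing in for a discovery scan and an asset database:

```python
# Sketch: CIS Critical Security Control 1-style device inventory check.
# Example data only; real lists would come from discovery scans and a CMDB.
authorized = {"10.0.0.5", "10.0.0.8", "10.0.0.12"}
discovered = {"10.0.0.5", "10.0.0.8", "10.0.0.12", "10.0.0.99"}

unauthorized = sorted(discovered - authorized)  # shadow IT candidates
missing = sorted(authorized - discovered)       # expected but not seen

print("Unauthorized devices:", unauthorized)
print("Missing devices:", missing)
```

Devices in the `unauthorized` set are exactly the “can’t see” assets a security program must account for before anything else.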
2. Many little costs can add up to a big expense
It’s interesting that many people tell us they want to manage shadow IT for a reason that has little to do with security. Instead, they’re not sure how much shadow IT is costing their organization and they want to figure that out.
Cloud applications and services that anyone can set up and pay for with a corporate credit card can quickly add up to a big expense for the organization. While many of these applications and services start out free, users often move beyond the free tier to unlock additional features, gain more capacity or use them for extended periods of time.
We’ve heard of some IT teams partnering with accounting to find out whose expense reports include cloud services and applications. That’s one way to uncover this information. It’s also worth noting that the same Tenable solutions that give security professionals visibility into shadow IT can help with the cost and usage challenge as well.
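As a rough illustration of how those little costs accumulate, aggregating expense-report line items by vendor is a small exercise. The employees, vendors and amounts below are invented:

```python
from collections import defaultdict

# Hypothetical expense-report line items: (employee, vendor, monthly cost in USD)
expenses = [
    ("alice", "Dropbox", 12.50),
    ("bob", "Dropbox", 12.50),
    ("carol", "AWS", 240.00),
    ("dave", "Salesforce", 75.00),
    ("erin", "AWS", 310.00),
]

# Total the recurring monthly spend per cloud vendor
totals = defaultdict(float)
for _, vendor, cost in expenses:
    totals[vendor] += cost

for vendor, monthly in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: ${monthly:,.2f}/month (${monthly * 12:,.2f}/year)")
```

Even this toy data shows how a handful of individual subscriptions becomes a five-figure annual line item.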
3. Shadow IT can introduce risk
The majority of people tell us they want to manage shadow IT because of concerns that unauthorized or unknown applications, services or devices will introduce risk into their networks and they won’t have visibility into these possible attack vectors.
On one hand, I think you could make the argument that cloud services may not introduce any more risk than other assets because cloud providers work very hard to harden their applications and services. Last year, threat prediction firm NopSec released a study on the state of vulnerability risk management. Part of that study looked at the length of time for organizations in different industries to identify and patch vulnerabilities. In this study, they noted “...cloud providers rank as the most progressive industry in terms of the remediation of known security issues - closing 90 percent of identified vulnerabilities in less than 30 days."
On the other hand, even if cloud services and application vendors are working hard to harden their applications, there still will be some vulnerabilities in those applications some of the time.
But the bigger concern is how people frequently use (or misuse) cloud services and applications. It’s just past tax season here in the USA, so I’m reminded of Graham Cluley’s reporting last year on how many users of the free Dropbox service were unknowingly leaking tax returns and private data via publicly accessible sharing links. What if, at your organization, that were someone inadvertently sharing a customer list or employee data instead of their own tax information? Gaining visibility into the use of this type of shadow IT can help you manage who’s using it, what data is being shared and where the shared data is going.
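A first pass at spotting this kind of sharing can be as simple as matching web proxy logs against known file-sharing domains. The log lines and domain list here are illustrative only:

```python
# Sketch: surface shadow IT file sharing from web proxy logs.
# Domain list and log format are hypothetical examples.
SHARING_DOMAINS = ("dropbox.com", "drive.google.com", "wetransfer.com")

log_lines = [
    "2016-05-02T09:14:03 10.0.0.21 GET https://www.dropbox.com/sh/abc123 200",
    "2016-05-02T09:15:11 10.0.0.33 GET https://intranet.example.com/home 200",
    "2016-05-02T09:16:45 10.0.0.21 GET https://wetransfer.com/downloads/xyz 200",
]

# Keep only requests to file-sharing services
hits = [line for line in log_lines
        if any(domain in line for domain in SHARING_DOMAINS)]
for hit in hits:
    print("File-sharing activity:", hit)
```

A real deployment would obviously need parsed log fields and a maintained domain list, but the visibility principle is the same.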
What we don’t hear...
What we rarely hear as a reason why security professionals want to manage shadow IT is because they want to shut it down. It seems many feel that trying to block shadow IT will only make those using it work that much harder to do so. Instead, most approach shadow IT as something that they should manage like they manage other assets in their environment.
It all starts with them having visibility. Once that’s achieved, security professionals can look for opportunities to move shadow IT to approved applications and platforms and/or determine how shadow IT can become managed IT so it doesn’t introduce unnecessary cost or risk to the organization.
Determine how shadow IT can become managed IT so it doesn’t introduce unnecessary cost or risk to the organization
Visit our website to learn more about how Tenable is helping organizations manage unknown assets and shadow IT. And while you’re there, download our Eliminating Cyber Security Blind Spots white paper.
Understand the Threats, Prepare Your Defenses, and Take Action
The Verizon 2016 Data Breach Investigation Report (DBIR), published earlier this week, includes key insights on real-world data breaches and the evolving threat landscape. The DBIR continues to be one of the most important reports for organizations each year because it includes data from security vendors and organizations worldwide, revealing information on the past year’s breaches and security incidents.
The 2016 DBIR uses data from 67 contributing organizations
For this year’s report, Tenable contributed data for analysis to help the Verizon DBIR team paint the clearest picture of threats, vulnerabilities, and actions that lead to security incidents.
The 2016 report helps you understand how cybersecurity breaches occur, what the most likely attack types are for your industry, and what techniques you can adopt to reduce the risk. This blog summarizes key findings from the 2016 report.
Breach trends
Similar to findings from previous years, according to the 2016 DBIR, most attacks still come from outside the organization, and the two motivators for attackers continue to be money and espionage. This year’s report also emphasizes that attackers are getting quicker all the time. Time to compromise is minutes (81.9%), while time to exfiltrate data is between days (67.8%) and minutes (21.2%).
Even more disturbing, external breach notifications are up, while internal breach detection is down. This means organizations rarely discover on their own that they have been breached; the first time they know they’ve been compromised is when law enforcement, a fraud detection service, or a third party notifies them.
Attackers are getting quicker all the time. External breach notifications are up, while internal breach detection is down.
Even with security tools and experienced staff in place, it’s still difficult to detect threats. If you don’t have the right solution in place to continuously monitor your network and proactively identify suspicious or malicious activity, detecting whether or not your data has already been stolen can be a nearly impossible task.
Phishing
The DBIR combined the results of over eight million phishing tests in 2015 from multiple security awareness vendors and found that the mean time from the start of a phishing campaign to first click is 3 minutes and 45 seconds. In less than 4 minutes, an attacker can gain a foothold on your network.
Social threats like phishing are hard to defend against, even with a comprehensive training and awareness program covering the social engineering threats employees may face; a unified, robust monitoring solution can identify early signs of a compromise. Often, core pieces of security already exist (anti-virus deployments, intrusion detection sensors and NetFlow capture), but they do no good if they are not used. For example, anti-virus can be installed on all corporate assets, but if it is a mishmash of products installed by IT over the years, gaps can occur.
To help protect against phishing attacks, use a unified security assurance solution to make sure anti-virus solutions are enabled and updates are regularly rolled out across the company. Also, make sure you are using a security assurance solution that can correlate intrusion detection alerts and NetFlow capture. By using these tools together, you can quickly identify emergent threat actors from phishing attacks before significant damage occurs.
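The correlation idea can be sketched as a time-windowed join between intrusion alerts and flow records. The hosts, timestamps and the 10 MB egress threshold below are arbitrary examples, not recommended values:

```python
from datetime import datetime, timedelta

# Hypothetical IDS alerts: (timestamp, internal host)
alerts = [(datetime(2016, 5, 2, 9, 30), "10.0.0.21")]

# Hypothetical NetFlow records: (timestamp, internal host, external dest, bytes out)
flows = [
    (datetime(2016, 5, 2, 9, 33), "10.0.0.21", "203.0.113.7", 48_000_000),
    (datetime(2016, 5, 2, 9, 35), "10.0.0.40", "198.51.100.9", 1_200),
]

WINDOW = timedelta(minutes=15)
suspicious = []
for a_time, host in alerts:
    for f_time, f_host, dest, nbytes in flows:
        # An alert followed shortly by a large outbound transfer is worth a look
        if f_host == host and abs(f_time - a_time) <= WINDOW and nbytes > 10_000_000:
            suspicious.append((host, dest, nbytes))
            print(f"{host}: IDS alert followed by {nbytes} bytes to {dest} - investigate")
```

Neither data source alone tells this story; together they point at a possible foothold within minutes of the phishing click.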
Stolen credentials
The Verizon 2016 DBIR reveals that 63% of breaches used weak, default or stolen credentials. To protect against stolen credentials and outsmart attackers, the best approach is still going back to the basics:
Perform configuration auditing for password strength controls
Segment and continuously monitor your network
Enable two-factor authentication
Keep track of which credentials are used where, when and by whom, and rotate passwords frequently
Flag and investigate any anomalies
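The first item on that list, auditing password-strength controls, can be sketched against a login.defs-style configuration. The required values here are illustrative policy choices, not a standard:

```python
# Sketch: configuration audit of password-strength controls.
# The config text is loosely modeled on /etc/login.defs; a real audit
# would read the actual file from each host.
config = """
PASS_MAX_DAYS   90
PASS_MIN_LEN    6
PASS_WARN_AGE   7
"""

# Hypothetical policy: max password age <= 90 days, min length >= 12
required = {"PASS_MAX_DAYS": ("<=", 90), "PASS_MIN_LEN": (">=", 12)}

settings = {}
for line in config.splitlines():
    parts = line.split()
    if len(parts) == 2:
        settings[parts[0]] = int(parts[1])

results = {}
for key, (op, bound) in required.items():
    value = settings.get(key)
    ok = value is not None and (value <= bound if op == "<=" else value >= bound)
    results[key] = ok
    print(f"{key}={value}: {'PASS' if ok else 'FAIL (want ' + op + str(bound) + ')'}")
```

Run across a fleet, even a check this small surfaces the weak-password conditions that 63% of breaches exploited.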
Make sure you monitor how users behave once they log into systems, so you understand what bad behavior looks like. Also consider evaluating new technologies that will help you move away from passwords altogether, such as Universal 2nd Factor (U2F) hardware keys or biometric authentication with fingerprint or iris scanners.
Exploited vulnerabilities
According to the 2015 DBIR, 99.9% of exploited vulnerabilities were compromised more than a year after the CVE was published, and that trend continues in the 2016 DBIR. Older, well-known vulnerabilities continue to be the source of most attacks, with the top 10 known vulnerabilities accounting for 85% of successful exploits. The remaining 15%, however, is made up of over 900 vulnerabilities.
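Flagging year-old CVEs in scan output is a straightforward triage step. The findings and dates below are fabricated examples:

```python
from datetime import date

# Hypothetical scan findings: (host, CVE ID, CVE publication date)
findings = [
    ("10.0.0.5", "CVE-2012-0158", date(2012, 4, 10)),
    ("10.0.0.8", "CVE-2016-0167", date(2016, 4, 12)),
]

today = date(2016, 5, 1)
# Vulnerabilities published over a year ago fall in the 99.9% band the DBIR describes
stale = [(host, cve) for host, cve, published in findings
         if (today - published).days > 365]
for host, cve in stale:
    print(f"{host}: {cve} published over a year ago - prioritize remediation")
```

Sorting remediation queues by CVE age is a cheap way to act on the DBIR’s strongest finding.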
The 2016 DBIR report also highlights how time from vulnerability publication to exploitation varies. For example, the following figure from the Verizon 2016 DBIR illustrates that Adobe and Microsoft vulnerabilities are exploited quickly, while other vulnerabilities, such as Mozilla vulnerabilities, take longer to exploit after disclosure. This data gives you the context you need to focus and prioritize your remediation efforts more effectively.
Source: Verizon 2016 Data Breach Investigations Report, page 14
Once again the 2016 DBIR recommends vulnerability scanning as a way to help identify new devices, services, and changed configurations. The key to any scanning program is to reduce or eliminate your between-scan blindness. Simply running a quarterly, monthly, or weekly scan is not sufficient. Active scanning must be combined with passive network traffic monitoring to enable continuous discovery of new devices, services, and vulnerabilities in near real time. A comprehensive security assurance program, which includes vulnerability management and continuous monitoring, will always be more effective at preventing data breaches than “fire drills” focused on the “threat of the day.”
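Both ideas above, measuring between-scan blindness and catching devices only passive monitoring ever sees, can be sketched briefly. All of the data below is hypothetical:

```python
from datetime import datetime

# Hypothetical completion times of recent active scans
scan_times = [datetime(2016, 2, 1), datetime(2016, 3, 1), datetime(2016, 4, 4)]

# Longest window during which a new device or service would go unnoticed
gaps = [(b - a).days for a, b in zip(scan_times, scan_times[1:])]
blind_days = max(gaps)
print(f"Longest between-scan blind spot: {blind_days} days")

# Devices observed by passive traffic monitoring but never seen in an active scan
actively_scanned = {"10.0.0.5", "10.0.0.8"}
passively_observed = {"10.0.0.5", "10.0.0.8", "10.0.0.77"}
passive_only = sorted(passively_observed - actively_scanned)
print("Seen only by passive monitoring:", passive_only)
```

A month-long blind spot and a device the scanner has never touched are exactly the gaps continuous monitoring is meant to close.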
Mobile and Internet of Things
Just as in last year’s report, the 2016 DBIR still shows no significant data on breaches involving mobile or IoT devices. This could be sample bias, or it could be that these areas just aren’t as big a threat as many corporate marketing departments would lead you to believe.
However, that does not mean that mobile and IoT should not be included in an organization’s risk assessment strategy and security program. The threat environment continues to evolve, so it’s a good practice to monitor and secure all devices—be they desktops or mobile—to make it more difficult for attackers to gain entry.
Make the Verizon 2016 DBIR actionable for your organization
The Verizon 2016 DBIR is loaded with statistics and analysis about the threats, vulnerabilities, and actions that lead to security incidents. However, gathering and analyzing the data needed to act on Verizon’s recommendations is an iterative, challenging, and time-consuming process. Most organizations simply don’t have the time to manually extract data from different parts of the business and convert it into actionable security monitoring.
Use Tenable dashboards to gain visibility, context, and actionable intelligence from the DBIR findings
To help your organization get the most out of the findings in the Verizon DBIR, in 2015 Tenable created easy-to-use Verizon DBIR dashboards that collect contextual information about attack patterns based on several sections of the 2015 DBIR, giving you the visibility, context, and actionable intelligence you need to apply DBIR information to your own networks.
These dashboards, available in SecurityCenter Continuous View™, leverage a unique combination of active scanning, agent scanning, continuous listening, and host data activity monitoring technologies to help you quickly identify whether the top vulnerabilities identified in the Verizon DBIR are in your environment, and if your organization is a victim of malware or advanced threats.
Coming soon: new and updated Verizon 2016 DBIR ARCs and dashboards
The Verizon 2015 DBIR dashboards from Tenable are still highly relevant to the findings in the Verizon 2016 DBIR. However, we also plan to update and release new Verizon 2016 DBIR Assurance Report Cards (ARCs) and dashboards for SecurityCenter Continuous View based on Verizon’s 2016 findings.
The Tenable researchers have completed their initial analysis of the 2016 Verizon DBIR, and development of new ARCs and dashboards is underway. We will post information about these new ARCs and dashboards to our blog and in the SecurityCenter feed as they become available, so stay tuned for more information!
Take the time to dive deeper
Overall, the 2016 DBIR provides a level of information about breaches that few other reports can match, as well as a high level of transparency about the data and methodologies used to generate the report. As a result, the DBIR continues to be one of the most important reports for organizations each year.
Understanding the data breach scenarios that are relevant to you, your industry, and your asset landscape is key to smart security. This summary just scratches the surface of Verizon’s findings. Make sure you read the entire DBIR report to learn about other key findings relevant to your organization.
As announced in December 2015, the PCI Security Standards Council released version 3.2 of the Payment Card Industry Data Security Standard (PCI DSS) on April 28, 2016. This version update was necessary because of the PCI Council’s decision last December to “dial back” on the sunset dates for the replacement of all versions of SSL and “early versions” of TLS for companies subject to PCI compliance. Troy Leach, the PCI Council’s Chief Technology Officer, also stated in a blog announcing the revised dates for replacing SSL/early TLS that another major reason for publishing this version update now is because “the industry recognizes PCI DSS as a mature standard now, which doesn’t require as significant updates as we have seen in the past. Moving forward, you can likely expect incremental modifications to address the threat landscape versus wholesale updates to the standard.”
The industry recognizes PCI DSS as a mature standard now
This clearly marks the end of the three-year cycle the PCI Council formerly followed for reviewing and updating the DSS, and hopefully means that the standard will be more responsive both to changes in the threat landscape and to the evolution of payment acceptance methods.
Changes in PCI DSS v3.2 fall into three categories:
Clarification: Clarifies the intent of a requirement. Ensures that concise wording in the standard portrays the desired intent of requirements.
Additional guidance: Explanation, definition and/or instruction to increase understanding or provide further information or guidance on a particular topic.
Evolving requirement: Changes to ensure that the standards are up to date with emerging threats and changes in the market.
The “clarification” and “additional guidance” changes are attempts to make the DSS flow logically, articulate the intent of each requirement, and quite often to make sure that the overarching intent or reach of the requirement is explicitly obvious.
The “evolving requirement” changes are where you will find the “new” requirements, some of which are effective immediately upon release of the standard while the majority of them are considered “best practices” until a certain acceptance period has elapsed.
Key effective dates
For convenience, here is a list of key effective dates found in PCI DSS v3.2:
April 28, 2016 – PCI DSS Version 3.2 released
June 30, 2016 – Sunset date for all service providers to migrate to secure versions of TLS in their service offerings
October 31, 2016 – PCI DSS Version 3.1 expires, making PCI DSS Version 3.2 fully in force
January 31, 2018 – Sunset date for all new requirements to be treated as best practices instead of requirements
June 30, 2018 – Sunset date for all merchants to migrate to secure versions of TLS in their PCI related operations (certain exceptions for POS systems apply)
What’s changed? What’s new?
Changes in PCI DSS version 3.2 may be grouped into several categories, including details about the dates for replacing SSL/early TLS, properly handling third party relationships including new service provider requirements, and an emphasis on meeting PCI DSS requirements on an ongoing or “business as usual” basis.
SSL/early TLS
The rules and testing procedures surrounding the removal of SSL/early TLS ... are a little complicated
The first change observed is that all of the rules and testing procedures surrounding the removal of SSL/early TLS have been consolidated into a single location, now found in Appendix A2. Don’t let that fool you though, because the rules are a little complicated – complicated enough, in fact, that I’m currently working on a follow-on blog that will explain some important nuances around the SSL and early TLS rules in more detail.
New service provider requirements
Surprisingly, the most significant changes directly impact service providers
Surprisingly, the most significant changes directly impact service providers and not merchants. There are nine new requirements found in v3.2 and seven of them are explicitly “for service providers only.” This is most likely the result of the “changing threat landscape,” or more simply because many of the recent major payment industry breaches involved some level of compromise or exploitation of third parties/service providers. Merchants have some increased or clarified responsibilities for managing their third party relationships compared to v3.0/v3.1 but there is now much more explicit responsibility put on the service providers themselves.
PCI and “business as usual”
PCI DSS should be implemented into business-as-usual (BAU) activities
A more subtle but still significant change in v3.2 is an increased emphasis on doing what the PCI DSS requires on an ongoing and continuous basis. The best practice goal that “PCI DSS should be implemented into business-as-usual (BAU) activities as part of an entity’s overall security strategy” means that all companies should treat the standard as a framework for their cybersecurity programs, not merely as a burdensome annual check-box audit. The emphasis on BAU activities is found in all three types of changes: clarifications that merchants must not only maintain a list of third party service providers but also document exactly what each one does; more stringent remote access controls (particularly the use of multi-factor authentication by more parties); and new requirements such as this one:
12.11 Additional requirement for service providers only: Perform reviews at least quarterly to confirm personnel are following security policies and operational procedures. Reviews must cover the following processes:
Daily log reviews
Firewall rule-set reviews
Applying configuration standards to new systems
Responding to security alerts
Change management processes
This is a new set of quarterly reviews required of service providers (who must also maintain documented evidence of these reviews), performed in addition to all of the other activities the PCI DSS requires.
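Tracking whether each required review area has recent documented evidence is easy to sketch. The evidence dates below are invented, and the 91-day threshold is one illustrative reading of “at least quarterly”:

```python
from datetime import date

# The five review areas listed under requirement 12.11
REQUIRED_REVIEWS = [
    "daily log reviews",
    "firewall rule-set reviews",
    "configuration standards on new systems",
    "security alert response",
    "change management",
]

# Hypothetical evidence log: review area -> date of last documented review
evidence = {
    "daily log reviews": date(2016, 4, 20),
    "firewall rule-set reviews": date(2015, 11, 2),
    "security alert response": date(2016, 3, 30),
}

today = date(2016, 5, 1)
overdue_reviews = []
for review in REQUIRED_REVIEWS:
    last = evidence.get(review)
    if last is None or (today - last).days > 91:  # roughly one quarter
        overdue_reviews.append(review)
        print(f"{review}: OVERDUE (last documented: {last})")
    else:
        print(f"{review}: ok (last documented: {last})")
```

A review that is overdue, or that has no documented evidence at all, is precisely what an assessor will flag under 12.11.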
What is a service provider?
Given the emphasis on third party/service provider relationships and the new requirements written specifically for service providers, it is worth clarifying what constitutes a service provider under the PCI DSS rules.
The PCI Glossary Version 3.2 provides an expanded definition of a service provider as follows:
Business entity that is not a payment brand, directly involved in the processing, storage, or transmission of cardholder data on behalf of another entity. This also includes companies that provide services that control or could impact the security of cardholder data.
The definition actually continues by providing some examples of companies that should be included and the special circumstances where a company may not be considered a service provider.
The Glossary also provides a definition of a merchant, which includes this statement:
Note that a merchant that accepts payment cards as payment for goods and/or services can also be a service provider, if the services sold result in storing, processing, or transmitting cardholder data on behalf of other merchants or service providers.
What does this all mean? It means that there are numerous types of companies that could fall into the category of being a service provider.
There are numerous types of companies that could fall into the category of being a service provider
Historically, service providers were recognized as the companies that operate “in-line” with the payment authorization and settlement path. Service providers are responsible for reporting their compliance status to the major payment card brands, and Visa and MasterCard actually maintain global listings of all the validated compliant service providers registered to do business with them.
The broad nature of the service provider definition, and an IT landscape that revolves more and more around outsourcing, mean that many companies performing services for merchants weren’t previously “on the radar screen” as service providers. In recognition of this, Visa recently launched a new program where these other service providers can be recognized and tracked: the Merchant Servicer Self-Identification Program (MSSIP). The program is intended for companies that fall outside the direct payment path (but all are subject to PCI compliance based on the definition of service provider!).
There is also a new category of third party, which the PCI DSS calls “designated entities”; the supplemental validation requirements for these entities are now included in v3.2 in Appendix A3.
Designated entities are defined as “entities designated by a payment brand(s) or acquirer as requiring additional validation of existing PCI DSS requirements.”
Appendix A3 provides further detail:
Examples of entities that this Appendix could apply to include:
Those storing, processing, and/or transmitting large volumes of cardholder data,
Those providing aggregation points for cardholder data, or
Those that have suffered significant or repeated breaches of cardholder data.
These supplemental validation steps are intended to provide greater assurance that PCI DSS controls are maintained effectively and on a continuous basis through validation of business-as-usual (BAU) processes, and increased validation and scoping consideration.
The mandate is clear for all – follow the PCI DSS continuously, be able to prove it with documented evidence – and this means service providers too!
Are you a merchant or a service provider?
All companies exist because they are selling goods or services, so at one level all companies are merchants. If you are a merchant, you need to understand that every company or third party that helps you conduct business is a likely candidate for being a service provider. Also, if your company sells a service, you are probably a service provider. If your company sells to other companies (and not directly to consumers), you are probably a service provider. If your company is primarily a supplier of hardware and/or software and you provide support for the hardware/software you sell, you are probably a service provider.
How does Tenable help?
Tenable can help you track your PCI “business as usual” security activities to demonstrate ongoing adherence to PCI DSS. Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) can be used to meet or augment over two-thirds of the technical controls found in PCI DSS v3.2, while it identifies vulnerabilities, reduces risk, and ensures compliance.
SecurityCenter CV can be used to meet or augment over two-thirds of the technical controls in PCI DSS v3.2
SecurityCenter CV provides pervasive visibility across your network, including your cardholder data environment. It identifies vulnerabilities, misconfigurations and malware, and provides critical context using data from sources such as network traffic, virtual systems, mobile device management, patch management and host activity monitoring, as well as external threat intelligence and known malicious indicators. SecurityCenter CV delivers the visibility needed to ensure that responses to failures in security controls are executed in a timely and decisive manner.
This is a broad summary of highlights from the PCI DSS 3.2. Tenable also plans to provide a more thorough analysis of the key changes and themes of PCI DSS v3.2 in future blogs, so stay tuned.
Is your network secure right now? Have any of your PCs or mobile devices been compromised? Before you even attempt to answer these questions, you need to pause and ask yourself: Can you actually answer either of these questions with any degree of certainty? Think hard about that one—because your job may depend on it.
According to the recent Verizon Data Breach Investigations Report (DBIR), the average time it takes for an organization to detect a compromise or to discover an attacker inside its network is measured in months—and sometimes years—rather than hours or minutes. With many of the major data breaches in recent years, the company found out about the attack the hard way—with a phone call from a credit card merchant or the FBI reporting stolen customer data being exposed or used in the wild.
The traditional security model is no longer working
The problem is a function of the traditional approach to security. The standard model employed by most organizations for the last decade or more is broken, and it’s time for a new strategy that focuses less on prevention. You need to look at security through a lens of shortening that time to detect a compromise and actively hunting for threats.
It's time for a new strategy that focuses less on prevention
It isn’t really a secret that the perimeter is dead. The concept of “inside the network” and “outside the network” and the idea that you can protect your network and data by simply keeping the bad guys out has been an outdated strategy for some time now. The explosion of mobile devices and BYOD (Bring Your Own Device) programs and the rise of cloud services have effectively removed whatever wall might have previously existed between your network and the bad guys.
The threat landscape has changed
Even if that was not the case, the reality is that the threat landscape shifted as well. While organizations were busy trying to harden the network perimeter, cyber espionage malware attacks like Stuxnet, Flame, and Duqu were silently spreading … undetected. While IT admins have been busy looking for unauthorized access and trying to keep the bad guys out, the attackers have been stealing credentials and logging in with valid usernames and passwords.
The vast majority of network compromises and data breaches have the appearance of authorized activity
The reality is that the vast majority of network compromises and data breaches have the appearance of authorized activity. Whether it’s an inside job by a disgruntled employee, or an external attacker using a username and password captured in a phishing attack, what you see on your network is an authorized user with valid credentials. The crucial key isn’t whether the authentication itself is valid, it’s whether the access is common behavior, and whether the actions taken once the access is granted seem normal or suspicious.
Transform security
How can you defend your network and data against current threats? Effective security comes down to three things: visibility, context, and action. You have to pay closer attention. You need tools in place that can actively monitor all of the endpoints and devices on your network—that can combine business intelligence and threat intelligence to provide context and help you identify suspicious or malicious activity.
How Tenable can help
Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) gives you the tools and information you need to proactively tackle the threat hunting problem and address compromises before they become breaches. SecurityCenter CV provides comprehensive visibility and critical context to enable you to quickly take effective action.
Don’t wait for the FBI to let you know your network has been breached. Don’t expect traditional perimeter security and anti-malware defenses alone to protect you. Adopt a new approach to security and actively hunt for threats before they hunt you.
Adopt a new approach to security and actively hunt for threats before they hunt you
For more information, read about Tenable’s Threat Hunting solution. And watch the Tenable Blog this month for more articles about Threat Hunting.
Tenable Nessus v6.6 has received certification from the Center for Internet Security (CIS) for the Amazon AWS Foundations Benchmark, making Tenable the first and only CIS member to receive that certification.
Tenable is the first and only security vendor to be certified by CIS for the Amazon AWS Foundations Benchmark
Industry-standard security benchmarks such as the guides from CIS are one of the best ways to secure a resource. That resource could be a server, a software application, a network device or even a cloud service such as Amazon Web Services (AWS). If you own or use these resources and are responsible for their security, these guides provide a solid base for your security program. In addition to hardening guidance, they also provide peace of mind, knowing that you did the best job you could to prevent a breach (at least in the eyes of your industry peers). If you follow the recommendations in a guide, the chances of a breach are reduced. And if an unfortunate breach does happen, you will probably receive less criticism than if you had no policy around it.
That being said, benchmark authors tend to play a wait-and-watch game when it comes to publishing content for new technologies. In general, hardening guides are written only after a technology has reached a certain level of maturity and acceptance. In the fast-moving world of technology, where new products and services live and die every day, to say that AWS has reached that level of maturity and made an impact is an understatement.
For some organizations the advent of cloud services such as AWS has truly been a game changer, and Tenable recognized that value long before it became common knowledge.
The wait-and-watch game played by benchmark authors is a disadvantage for security vendors such as Tenable, because without these benchmarks there are no generally accepted guides to publish content against. In such scenarios, we at Tenable publish our own content based on commonly accepted best practices and vendor recommendations. That’s exactly what we did more than two years ago when we added support for auditing AWS. With the CIS benchmark released, we are now publishing our audit to assess the configuration of AWS accounts, and we are also the first and only security vendor to be certified.
So what is covered under the new CIS AWS guide? Here’s a quick overview.
CIS AWS Foundations Benchmark overview
The CIS benchmark for AWS provides prescriptive guidance for configuring security options for a basic set of foundational AWS services. Here’s the list of services that are within the scope of this benchmark:
AWS Identity and Access Management (IAM)
AWS Config
AWS CloudTrail
AWS CloudWatch
AWS Simple Notification Service (SNS)
AWS Simple Storage Service (S3)
AWS VPC (Default)
The benchmark is divided into four sections:
Identity and Access Management (IAM)
If Amazon Web Services were a kingdom, then the keys to that kingdom would be the “root” account. The root account has unrestricted access to all resources in the AWS account, so it must be fiercely guarded and its use limited. This section provides recommendations to limit the use of the root account and, if it is used, provides the monitoring guidance needed to detect unauthorized use. In addition, it recommends using multifactor authentication (MFA), disabling inactive accounts and enforcing a strong password policy.
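To make the IAM recommendations concrete, here is a minimal, hypothetical sketch of flagging risky accounts from a parsed IAM credential report. In practice the report would be fetched with a tool such as the AWS CLI or an SDK; the field names below are simplified stand-ins for the report’s CSV columns, and the 90-day threshold is an illustrative choice.

```python
from datetime import datetime, timedelta, timezone

def flag_risky_accounts(report_rows, inactive_days=90, now=None):
    """Flag credential-report rows that violate common IAM hygiene rules:
    a root account with active access keys, users without MFA, and
    users who have been inactive for too long."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for row in report_rows:
        user = row["user"]
        if user == "<root_account>" and row.get("access_key_1_active") == "true":
            findings.append((user, "root account has an active access key"))
        if row.get("mfa_active") == "false":
            findings.append((user, "MFA is not enabled"))
        last_used = row.get("password_last_used")
        if last_used and last_used not in ("N/A", "no_information"):
            age = now - datetime.fromisoformat(last_used)
            if age > timedelta(days=inactive_days):
                findings.append((user, f"inactive for more than {inactive_days} days"))
    return findings

# Hypothetical rows shaped like a credential report
rows = [
    {"user": "<root_account>", "access_key_1_active": "true", "mfa_active": "true"},
    {"user": "alice", "access_key_1_active": "false", "mfa_active": "false",
     "password_last_used": "2016-01-01T00:00:00+00:00"},
]
for user, issue in flag_risky_accounts(rows, now=datetime(2016, 6, 1, tzinfo=timezone.utc)):
    print(f"{user}: {issue}")
```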
Logging (CloudTrail, CloudWatch, S3, AWS Config)
Logging API calls is another important recommendation in this benchmark. All AWS API calls should be logged via CloudTrail, and CloudTrail should be configured to send logs to S3 and CloudWatch for long-term and real-time analysis respectively. The logs should be encrypted, and the encryption keys rotated on a regular basis.
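These logging recommendations can be checked mechanically. The sketch below is illustrative, not a real audit: it validates a trail description dictionary whose keys mirror the fields CloudTrail reports for a trail (multi-region flag, S3 bucket, CloudWatch log group, KMS key); in a real check the dictionary would come from the AWS API.

```python
def audit_trail_config(trail):
    """Check a CloudTrail trail description against the benchmark's
    logging recommendations: multi-region coverage, S3 and CloudWatch
    delivery, and KMS encryption of log files."""
    problems = []
    if not trail.get("IsMultiRegionTrail"):
        problems.append("trail does not cover all regions")
    if not trail.get("S3BucketName"):
        problems.append("logs are not delivered to S3 for long-term storage")
    if not trail.get("CloudWatchLogsLogGroupArn"):
        problems.append("logs are not delivered to CloudWatch for real-time analysis")
    if not trail.get("KmsKeyId"):
        problems.append("log files are not encrypted with a KMS key")
    return problems

# Hypothetical trail: logs to S3 but skips CloudWatch and encryption
trail = {"IsMultiRegionTrail": True, "S3BucketName": "audit-logs",
         "CloudWatchLogsLogGroupArn": None, "KmsKeyId": None}
for problem in audit_trail_config(trail):
    print(problem)
```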
Monitoring (CloudTrail, CloudWatch, SNS)
Monitoring an AWS account is critical to preventing and detecting unauthorized use. The benchmark recommends generating alerts using a combination of metric filters and alarms. Events to monitor and alert on include console logins by accounts without MFA, root account usage, failed authentication attempts, and unauthorized changes to IAM, S3, AWS Config and network configuration.
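Under the hood, such an alert is typically a CloudWatch metric filter applied to the CloudTrail log stream. As a rough illustration of the condition one of those filters tests, here is a hypothetical Python version of the root-usage check; the event fields mirror CloudTrail’s JSON records, but the actual filter syntax in the benchmark is different.

```python
import json

def is_root_usage(event):
    """Return True for a CloudTrail record showing interactive root
    account usage -- the kind of event the benchmark says should
    trigger an alarm. Service-initiated events are excluded."""
    ident = event.get("userIdentity", {})
    return (ident.get("type") == "Root"
            and "invokedBy" not in ident
            and event.get("eventType") != "AwsServiceEvent")

# A simplified, hypothetical CloudTrail record
record = json.loads("""{
  "eventType": "AwsConsoleSignIn",
  "eventName": "ConsoleLogin",
  "userIdentity": {"type": "Root"}
}""")
print(is_root_usage(record))
```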
Networking (default VPC)
Last but not least, the networking section makes recommendations for configuring security-related aspects of the default virtual private cloud (VPC). These include prohibiting security groups from allowing unfettered ingress (0.0.0.0/0) to remote console services such as SSH and RDP, and ensuring that the default security group restricts all traffic by default.
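The security group check is straightforward to express in code. The sketch below is an illustration under assumptions: the rule dictionary mimics the shape of an EC2 security group description (`IpPermissions`, `FromPort`/`ToPort`, `IpRanges`), and only SSH and RDP are treated as risky, as the benchmark calls out.

```python
RISKY_PORTS = {22: "SSH", 3389: "RDP"}

def open_admin_ports(security_group):
    """Find ingress rules that expose SSH or RDP to the whole internet
    (0.0.0.0/0), which the benchmark says should be prohibited."""
    exposed = []
    for rule in security_group.get("IpPermissions", []):
        ranges = {r.get("CidrIp") for r in rule.get("IpRanges", [])}
        if "0.0.0.0/0" not in ranges:
            continue
        lo, hi = rule.get("FromPort"), rule.get("ToPort")
        for port, name in RISKY_PORTS.items():
            if lo is not None and lo <= port <= hi:
                exposed.append(name)
    return exposed

# Hypothetical group: SSH open to the world, HTTPS open to the world
sg = {"IpPermissions": [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]}
print(open_admin_ports(sg))  # only SSH is flagged; 443 is not a console service
```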
Sample result
Tenable AWS best practice audit update
Along with the CIS audit, the Tenable best practice audit has also been updated to include recent recommendations. The audit now serves two objectives: providing a snapshot of your AWS deployment, and providing best practice hardening guidance based on the recent update. Both audit files are now available for download on the Tenable portal.
Wrap up
At Tenable we are always striving to keep our content fresh and up to date. Achieving CIS certification for AWS is just one of the ways for us to meet that goal.
We also realize that AWS is not the only cloud service provider in the marketplace, and there are other cloud service options our customers might consider such as Azure and Rackspace. Over the past few years we have added support for Azure and Rackspace as well, and more recently to OpenStack. So regardless of which cloud computing model or provider you choose, you can rest assured that Tenable has you covered.
Tenable recently commissioned Forrester Consulting to conduct the April 2016 study, Vulnerability Management Trends In APAC: Managing Risk In The Age Of The Customer, to examine how organizations in the Asia Pacific region are handling their vulnerability management strategies and investments. After surveying more than 100 enterprise security decision-makers, it is apparent that reducing risk and increasing security posture is a top priority for enterprises in the region.
About the study
The survey drew respondents from five markets in APAC: China, Singapore, Japan, Australia and New Zealand. A slight majority of respondents (52%) came from companies with 1,000-4,999 employees. All respondents were manager level or above, working in IT and responsible for vulnerability management at their organizations. Those surveyed came from a variety of industries, including telecommunications services, financial services, retail and more.
Managing risk a top priority
The way organizations view vulnerability management is changing. Rather than the traditional focus on compliance, vulnerability management is shifting to a risk-based approach. Only 23% of those surveyed would still prioritize compliance above understanding their risk posture.
Only 23% of those surveyed would still prioritize compliance above understanding their risk posture.
Instead, 40% of APAC security decision-makers would classify their vulnerability management programs as strategic, responsible for helping the organization understand risks associated with their most important assets.
Attacks on the rise
This renewed focus on risk is certainly warranted. According to the survey, 80% of companies had experienced at least one attack over the past 12 months. Of all the types of attacks seen by respondents, phishing and DNS-based attacks were the most common. These incidents had significant impact on those surveyed, including lost productivity, loss of business renewals, and loss of new customers.
Lack of continuous monitoring
Despite this renewed focus on risk management, only 22% of respondents currently monitor their environments continuously for new threats. Twice as many respondents, 44%, only scan their environments periodically, while 28% scan monthly. The prevalence of periodic scanning is troubling, as it can potentially leave gaps that provide attackers a window of opportunity to discover and exploit known vulnerabilities.
Only 22% of respondents currently monitor their environments continuously for new threats
The lack of continuous monitoring could be due to the fact that organizations are facing significant challenges with their current vulnerability management solutions. Respondents specified a number of different challenges, including having difficulty remediating breaches across security and operations, an inability to prioritize vulnerabilities, and difficulty accounting for evolving mobile and cloud threats.
These difficulties have led APAC security professionals to consider expanding their investments into more advanced vulnerability management and continuous monitoring solutions. When making these investments, the survey found that organizations were looking for several key capabilities:
Ability to identify, scan, and protect devices
Active scanning
Benchmarks to compare current security controls
Continuous scanning/listening capabilities
High visibility across IT infrastructure, including the ability to scale coverage across cloud, virtualized, and mobile environments
These desired capabilities demonstrate a need for organizations to be able to manage the increased risk of technologies and devices being introduced into the corporate environments by employees, customers, and partners. Business leaders expect to expand their operations with cloud and mobile technology, and to do that securely they must have continuous visibility into those assets, which provides critical context that can be used to take decisive action against potentially harmful vulnerabilities.
A Tenable solution
Tenable Nessus® is the industry’s most widely deployed vulnerability management solution, used by more than one million people across the globe. Combined with SecurityCenter™, organizations using Tenable have access to the industry’s broadest asset and vulnerability coverage, uniquely positioning them to develop a successful vulnerability management program.
Those looking for a continuous monitoring solution turn to SecurityCenter Continuous View™, which Tenable believes solves many of the challenges mentioned in this study by providing advanced analysis of vulnerability and threat data, network traffic and event information to deliver a continuous view of IT security across all environments.
Resources
Want to know more about how to move your vulnerability management program forward? Check out these Tenable resources:
Security teams around the world are struggling to keep up with the rapidly changing threat environment, while facing the pressure of being responsible for any malicious activity that happens on their watch. With so many moving pieces, it’s impossible to identify what threats are impacting your environment without making sure your foundations are covered.
The most basic foundation is visibility.
What should I be watching?
What do you need to watch? It depends on your environment, but effectively it comes down to four categories of data, each answering a different question:

Network activity: What is talking?
Host/node activity: What are devices doing?
User authentication and access: How are users interacting with devices?
Security control activity: How is security working?
There are thousands of ways to collect data in these four categories. They make up the backbone of a comprehensive security monitoring program and ensure that you have complete visibility when something happens. This foundation is essential to finding the breadcrumbs left by an adversary.
When planning your monitoring, it helps to think “What is the worst thing that could happen?” or alternatively “What could get me fired?” and “How would I see that with my tools?” Those major concerns that you and your management have are valid; they tend to point toward the most important areas to protect. It’s our job as security professionals to ensure that there are sensors that collect information at any necessary point for visibility into these scenarios to recreate what happened.
Once you’re collecting data, it's easy to be overwhelmed. Unfortunately, an abundance of data is a sign that you’re doing something right. It also gives security tools a chance to understand what is normal.
Too much data and not enough time...
Once the data is there, it’s time to tame it so that it’s usable. A giant pile of unusable data doesn’t provide value to anyone.
First off, group things that are similar for analysis. This helps to limit your scope, not by limiting the data, but by creating profiles for different kinds of devices and activities that are common in your environment.
This plugin groups systems into types that you can use to roughly break out what you have on your network:
Is there something weird, or infrequent? That’s probably where you should start. The smaller the set of systems you’re working with initially to filter and tune things, the simpler it will be. Some examples of common assets to start with in our library are embedded devices, webcams, SCADA devices, mobile devices and printers:
A great example is printers. Generally, they have a specific function on your network. They have a web interface but should not be communicating with the internet. Using an asset list to identify and isolate the activity of your printers lets you see what they’re doing, without the noise of the rest of your environment:
In theory, the activity should look fairly flat and homogeneous. That’s a good thing: you can create filters for the normal things so that the abnormal ones stand out. What ports do the printers normally talk on, and to whom? Filter out those normal combinations so you can see more clearly. To spot printers talking with the internet, try using a combination asset list:
Printer communications are usually well defined, with endpoints talking directly to printers or using a print server to manage jobs. If endpoints print directly, it would be unusual for anything else to communicate with printers. You can use asset lists to determine which systems are endpoints, then isolate the systems that aren’t endpoints but are talking to your printers:
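The combination asset list described above amounts to simple set arithmetic. As a toy illustration (the host names and lists below are hypothetical, not SecurityCenter output):

```python
# Hosts classified as endpoints by an asset list
endpoints = {"wks-101", "wks-102", "print-server-1"}
# Hosts observed communicating with printers
talking_to_printers = {"wks-101", "print-server-1", "db-server-3"}

# Anything talking to printers that is not a known endpoint deserves a look
suspicious = sorted(talking_to_printers - endpoints)
print(suspicious)  # ['db-server-3']
```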
Creating alerts for these unusual conditions when they occur is the equivalent of an early warning for a coming storm. SecurityCenter™ will check periodically to determine if any new hosts match the criteria you’ve set up, and will alert in a variety of ways: email, notifications, syslog, etc.
Devices like printers, that have set patterns to their activity, are much easier to monitor for threat activity than dynamic resources. When you’re learning your toolset, they’re a great place to start. Once you’ve classified one device type, move on to another and you’ll gradually get more familiar with your overall exposure, and improve your defenses as well as your detection capabilities.
Over 20,000 companies and government agencies worldwide use Tenable to identify vulnerabilities in and reduce risk to their network. With nearly 80,000 plugins—the broadest coverage in the industry—our vulnerability audits find all local and remote flaws, whether on the ground, in the cloud or on your mobile devices.
However, this deluge of vulnerability data can be overwhelming for you and your short-handed IT security and operations teams. Where do you start? How do you make heads or tails of all the security data that’s been discovered? And finally, how do you simplify the remediation process, so you can fix the most urgent issues in the most timely manner?
First things first: we make it easy for you! With Tenable SecurityCenter Continuous View™ (SecurityCenter CV™), you have a single pane of glass to see a consolidated view of all your vulnerability data from active scanning, agent scanning, intelligent connectors, passive listening, and host data. By leveraging our purpose-built Assurance Report Cards, you can visualize the totality of your security program, contextualize and prioritize your vulnerabilities and then take decisive action to reduce exposure and risk in your organization.
Now, you ask: I have found and prioritized the vulnerabilities, how do I get started with the remediation process?
Great news! If you’re using ServiceNow as your IT service management (ITSM) platform, today we released our custom-built application that gives ServiceNow Security Operations customers full, continuous visibility of IT assets and any associated vulnerability data, so you can understand your full risk posture.
By integrating SecurityCenter CV vulnerability data with ServiceNow Security Operations, staff members in IT Operations can automatically create help tickets when new vulnerabilities are identified
Besides providing visibility, the key use case for this ServiceNow integration is for organizations to apply the right remediation process to their vulnerabilities. By integrating SecurityCenter CV vulnerability data with ServiceNow Security Operations, staff members in IT Operations can automatically create help tickets when new vulnerabilities are identified. These tickets can then be assigned and tracked to the right individual or team for closed-loop remediation and accountability. Finally, when these vulnerabilities are fixed and closed, you can verify whether the security risk has been truly fixed with subsequent scans.
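To make the ticket-creation step concrete, here is a hypothetical sketch of mapping a vulnerability finding onto a ServiceNow incident payload. This is not the actual app’s logic: the field names follow ServiceNow’s standard incident table, the severity-to-urgency mapping is an illustrative choice, and in a real integration the payload would be sent to ServiceNow’s REST Table API.

```python
def build_incident(vuln):
    """Map a vulnerability finding onto a ServiceNow incident payload.
    The severity-to-urgency mapping here is illustrative only."""
    severity_to_urgency = {"Critical": "1", "High": "2", "Medium": "3"}
    return {
        "short_description": f"{vuln['severity']} vulnerability on "
                             f"{vuln['host']}: {vuln['name']}",
        "description": f"Plugin {vuln['plugin_id']} detected {vuln['name']}.",
        "urgency": severity_to_urgency.get(vuln["severity"], "3"),
        "category": "security",
    }

# A hypothetical finding from a scan
finding = {"host": "web-01", "name": "OpenSSL heap overflow",
           "severity": "Critical", "plugin_id": 90001}
ticket = build_incident(finding)
print(ticket["short_description"])
```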
The Tenable for ServiceNow Security Operations app is available in the ServiceNow Store. Download it today to streamline your security remediation. For more information on our integration with ServiceNow, including our Solution Brief and How-To Guide, go to our Technology Integrations page.
The Homeland Security Department’s Continuous Diagnostics and Mitigation (CDM) program can help ensure that your agency has the proper cybersecurity controls in place. The right CDM tools can also help you identify and eliminate threats in your network before they become breaches.
As an increasingly hostile threat landscape has made the limitations of perimeter-based IT defenses apparent, the federal government has shifted from a cybersecurity regime of periodic assessment of static controls to continuous monitoring of IT resources and activities. The goal is to ensure not only that regulatory requirements are being met, but that the enterprise is effectively defended.
The federal government has shifted from a cybersecurity regime of periodic assessment of static controls to continuous monitoring of IT resources and activities
The Department of Homeland Security’s Continuous Diagnostics and Mitigation program supports this shift with a suite of off-the-shelf products giving better real-time visibility into government networks and systems. And the right CDM tools can also help agencies actively hunt and eliminate threats in the network before they become breaches.
Continuous Diagnostics and Mitigation
CDM is a risk-based approach to government cybersecurity that “provides federal departments and agencies with capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritizes these risks based upon potential impacts, and enables cybersecurity personnel to mitigate the most significant problems first.”
The first phase of CDM, which began in 2013, focused on endpoint security. Phase 2, Least Privilege and Infrastructure Integrity, focuses on identity and access management and concentrates on monitoring and responding to network activity. The final phase will cover boundary protection and event management. No single tool or service fulfills all of the program’s requirements, and a variety of products for phases 1 and 2 are available under a Blanket Purchase Agreement (BPA) from the General Services Administration.
It is expected that new and increasingly robust solutions will continue to be developed and added to the BPA as the cyberthreat landscape evolves. But some solutions now in the catalog can help you move beyond risk mitigation to proactive identification and elimination of threats.
Threat hunting
Merely monitoring network status and activity is not enough to tell you if a threat has penetrated your network
Threats and attacks are becoming more complex, more sophisticated and stealthier, and merely monitoring network status and activity is not enough to tell you if a threat has penetrated your network. Intruders hide their tracks and quietly move from system to system, targeting sensitive information and resources while remaining below the horizon of traditional defenses. This sophistication, coupled with increasingly complex IT and security infrastructures, makes it difficult for even skilled security staff to spot malicious activity.
The common wisdom today is that every enterprise is a target and the question is not if, but when, you will be breached. You need not only visibility but also the context that comes from combining data across platforms to reveal quiet but malicious behavior.
You need not only visibility but also the context that comes from combining data across platforms to reveal quiet but malicious behavior
Agencies waiting for a compromise to reveal itself will continuously be in a reactive mode, scrambling to isolate and repair damage after the fact rather than preventing it. Fortunately, you do not have to choose between the continuous monitoring required under CDM and comprehensive visibility with advanced analytics across multiple platforms. CDM solutions can provide both.
The tools you need now
Tenable’s threat hunting solution provides the real-time visibility and critical context needed to identify and eliminate threats, using the sensors of SecurityCenter Continuous View™ (SecurityCenter CV™).
SecurityCenter CV is available under the Continuous Diagnostics and Mitigation BPA. It scans the user environment to detect vulnerabilities, misconfigurations and malware, and also performs advanced analytics. The platform’s non-intrusive, low-overhead passive scanning provides comprehensive visibility across the network. The threat hunting solution uses this to create a baseline of normal activity that allows anomalies to be detected and flagged. By identifying chained events that might not appear suspicious by themselves, complex threats can be uncovered in real time.
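Baselining in this sense means learning what “normal” looks like and flagging large departures from it. The toy sketch below (hypothetical numbers, and far simpler than any real product’s analytics) illustrates the core idea with a mean-plus-standard-deviation threshold:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, k=3.0):
    """Flag observations more than k standard deviations above the
    baseline mean -- a toy stand-in for baselining normal activity."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if x > mu + k * sigma]

# Hypothetical daily traffic counts for one host
daily_bytes = [100, 110, 95, 105, 102, 98, 101, 99]
print(flag_anomalies(daily_bytes, [104, 500]))  # only the spike is flagged
```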
Leveraging the full capabilities of CDM tools such as SecurityCenter CV helps you achieve not only compliance with cybersecurity regulations, but also real cybersecurity.
Today, I’m pleased to announce that new Tenable 2016 Verizon DBIR ARCs and dashboards are now available for immediate download in the SecurityCenter Feed.
Tenable 2016 Verizon DBIR ARCs and dashboards are now available for immediate download
With these new ARCs and dashboards, you can move from reading the 2016 DBIR to responding to it – using the insight and recommendations to improve the resilience of your security program. Assess yourself against key findings in the 2016 report. Analyze how well your organization conforms to many of the recommendations and best practices highlighted in the 2016 DBIR, and identify ways to improve your program.
Don’t just read: take action
Each year, Verizon releases the Data Breach Investigations Report (DBIR) to provide key insights into how to manage risks and avoid security failings, as well as to help organizations of all sizes learn from the experiences of others. Many people examine the report closely, but most organizations struggle to turn DBIR findings into actionable intelligence. As a result, year after year, little, if any, progress seems to be made when it comes to defending against common vulnerabilities and threats.
As in previous years, the Verizon 2016 DBIR notes that the vast majority of all attacks continue to fall into a few basic patterns. Because attackers are relying largely on common attack methods, you can use the Verizon DBIR to dramatically reduce the success of breach attempts by identifying these patterns on your network – helping you prevent a compromise or breach.
In terms of vulnerabilities, this year’s Verizon DBIR continues to show that most organizations don’t have foundational vulnerability management controls in place. Implementing a repeatable, time-bound, policy-based vulnerability management process that includes automated, near real-time assessments is critical, so you can understand the degree of risk DBIR findings pose to your organization and remediate issues before breaches occur.
Automate DBIR assessment, identify threats and effectively improve defenses
It is a good security practice to regularly assess your organization against DBIR findings and recommendations
It is a good security practice to regularly assess your organization against DBIR findings and recommendations. But to reduce the risk, you can’t stop there. Best-in-class security organizations incorporate the insights that the DBIR provides into their security program on an on-going, measurable basis to better defend against today’s biggest IT security risks.
Tenable enables you to assess yourself by providing a unique combination of active scanning, agent scanning, integrations with third-party systems, passive listening and host data, which automatically feed and correlate security data from across your environment into ARCs and dashboards. This helps you to quickly identify whether the top vulnerabilities and threats in the Verizon DBIR are in your environment.
Pre-built dashboards identify if specific vulnerabilities from the 2016 DBIR exist in your organization
Tenable Verizon 2016 DBIR ARCs and dashboards give you the visibility and context you need to analyze how well your organization conforms to many of the recommendations and best practices highlighted in the Verizon DBIR. You can then use this information to quickly take decisive action, applying the findings in the Verizon DBIR to better protect your organization against threats.
ARC policy statements show how effective security programs are at meeting 2016 DBIR recommendations
Verizon 2016 DBIR dashboards
SecurityCenter CV dashboards are pre-built, highly customizable dashboards that security managers, analysts and practitioners can use to get the visibility and context they need to connect the dots between the mountains of security data and Verizon 2016 DBIR findings. This helps you more easily determine which events present a real threat in your environment and which are just noise. With dashboards, you can focus action on the events and threats that the Verizon 2016 DBIR highlights as the ones that matter the most, increasing your likelihood of identifying attackers before they are able to find sensitive data.
The following dashboards are now available in SecurityCenter CV:
SecurityCenter CV provides the industry’s first-ever Assurance Report Cards (ARCs), designed to enable CISOs and security leaders to define their security program objectives in clear and concise terms, identify and close potential security gaps and communicate the effectiveness of their security investments to C-level executives, board members and business managers. Tenable provides several pre-built Verizon 2016 DBIR ARCs that enable you to align the findings and recommendations of the Verizon DBIR to your IT security program using a policy-based approach. You can use the sample policies in Tenable ARCs based on the 2016 DBIR findings, or customize ARC policy statements as needed based on your organizational requirements.
The following ARCs are now available in SecurityCenter CV:
The Verizon 2016 DBIR is one of the most important reports of the year. Once it’s released, CISOs and other business executives frequently have questions regarding how their organizations measure up against DBIR findings. Use Tenable to assess your security posture within the context of the DBIR and proactively arm yourself with timely, accurate information – before IT security leaders and the business ask.
Use Tenable to assess your security posture within the context of the DBIR and proactively arm yourself with timely, accurate information
If you lived in a climate with lots of mosquitos, gnats and crawly things, your house could easily be overrun with pests. Where would you start to get rid of them? You could buy a fly swatter and start swinging away. But would you be able to swat fast enough to get ahead? Maybe, maybe not. Either way, it doesn’t make sense to reach for the fly swatter yet.
The best approach is to cover preventive measures first; close the doors, screen the windows, and caulk the cracks. You could also create an air-gapped entry allowing you to kill the bugs that are on you to prevent them from hitchhiking into your living quarters. No matter how creative and thorough your preventive measures are, it is still possible, if not likely, that a few bugs will breach your defenses and invade your living area. That is when using a fly swatter makes sense - and it’s time to go hunting.
Case in point, I have a colleague who is allergic to mosquitos. She needs to be able to find any mosquitos in her house to reduce her chances of getting bitten. While she has preventive measures in place, she needs a way to quickly find or trap any mosquitos in her house. And in her case, maybe she doesn’t want to use a fly swatter, she wants the best possible tools to zap them away.
The importance of preventive measures
The parallel to IT security threat hunting may be obvious, but it is worth discussing because controlling pests is most effective when preventive measures are in place. In security, too, the goal is to eliminate threats, not to hunt for them. Hunting isn’t cost-effective unless strong preventive controls are in place and operating effectively.
What should be in place before tackling threat hunting? The Center for Internet Security’s (CIS) Critical Security Controls (CSC) provide excellent guidance. The CSC is a prioritized list of the top twenty technical controls focused “on the most fundamental and valuable actions that every organization should take.”
The first five controls are essential, and the CIS refers to them as “Foundational Cyber Hygiene.” Here’s how Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) can help you address the five highest priority controls:
CSC1: Inventory of Authorized and Unauthorized Devices. Tenable provides multiple ways to inventory devices, including active discovery scans, intelligent connectors to third-party Configuration Management Database and Mobile Device Management systems, passive network listening, and host data from network devices.
CSC2: Inventory of Authorized and Unauthorized Software. Similar to inventorying hardware, Tenable inventories software using active discovery scans, intelligent connectors to third-party systems, and passive network listening.
CSC3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers. Tenable audit files support multiple standards to audit configuration conformance for a wide range of systems.
CSC4: Continuous Vulnerability Assessment and Remediation. Tenable offers industry leading active and agent-based vulnerability assessment. Passive listening supplements periodic scans with continuous vulnerability assessment to remove the blind spots between scans.
CSC5: Controlled Use of Administrative Privileges. Tenable can test for the presence of accounts that should not be on a system and can test servers to ensure they are configured with the proper level of access control, including detecting servers that have not been locked down to least privilege.
Time to hunt
SecurityCenter CV continuously monitors your systems and network, looking for and prioritizing anomalous or suspicious activity that needs investigation
Once these five preventive controls, at a minimum, are in place to protect your environment, you are ready to start hunting for threats. SecurityCenter CV continuously monitors your systems and network, looking for and prioritizing anomalous or suspicious activity that needs investigation. SecurityCenter CV dashboards speed your investigation by putting contextual information at your fingertips so you can quickly take action. For example, you could use the Detect Suspicious Activity dashboard shown below as a starting place. The most suspicious activity is highlighted in red, and you can click each red item to drill in and investigate.
Few industries have more to lose from a cybersecurity attack than the energy sector. While an attack in finance or retail can cost an organization millions of dollars, a targeted attack in the energy sector can cost lives.
When I made a career transition from the banking sector to the upstream oil and gas industry, the first major culture shock I experienced was the strong attention to safety in the corporate environment. I understood the focus on safety in the field, but was surprised when I was reprimanded for climbing the stairs in a corporate office with a cup of coffee that did not have a lid. This surprise was compounded when every meeting, on the phone or in the conference room, began with a safety moment and presentation.
One of the most important branches of an oil and gas company is the health, safety and environment department (HSE). Just like tech and ops, human resources and finance, HSE has its own budget, staff, and executives and is dedicated to classifying, tracking, reporting and responding to safety incidents and events.
But despite this focus on physical safety, I was shocked when I first found personal off-the-shelf devices attached to the network behind firewalls in the server room. There was a general lack of the cybersecurity awareness I had grown accustomed to in the banking and finance sector.
My first question to my new team was why cybersecurity risks were not treated as HSE incidents. It took some time to explain the potentially cataclysmic effects of a cyberattack, even inside the corporate firewall. If some of the personal devices I saw connected to the network contained a virus, cyber attackers could have stolen intellectual property, shut down plants and crippled the organization.
At the time, the organization I was working with was also exploring cutting-edge technology such as automated sensors in the upstream environment, the segment of the oil and gas industry tasked with finding hydrocarbon reserves deep underground. This networking of IP-addressable sensors capable of machine-to-machine communication is called the Internet of Things (IoT). Because these devices often are not regularly monitored or managed, they can greatly expand an enterprise’s threat surface.
By 2020, the IoT market in the energy sector will be worth $22.34 billion USD
But despite this potential risk, the IoT is growing. A recent study from MarketResearch.com predicted that by 2020, the IoT market in the energy sector will be worth $22.34 billion USD.
The challenge
The IoT is just part of the larger integration of information technology (IT) with industrial control systems (ICS) in the oil and gas industry. In many ways, the priorities of IT and ICS are at odds with each other. IoT devices and ICS are designed under a model known as Availability, Integrity, Confidentiality (AIC): data availability is the first priority, data integrity is the second, and data confidentiality is the last.
IoT devices and ICS are designed to have data availability and data integrity as top priorities, with data confidentiality of third importance
IT is designed under the opposite ordering, the familiar Confidentiality, Integrity, Availability (CIA) model. Here, data confidentiality is the first priority, data integrity the second, and data availability the third.
The AIC model makes sense from the physical safety point of view in ICS and oil and gas. If I am reading the flow of crude oil or gas through a pipe, or a chemical mixture being introduced to gas at the refinery, it’s more important for the numbers on the valves to be available and as accurate as possible. Without this attention to availability and integrity, disasters can happen. The issue is that integrating these two models can cause gaps in security. These gaps can lead to a breach in the ICS from the IT side or vice versa. And without the ability to keep adversaries out of either system, the threats of a cyberattack are greatly increased.
The solution
To incorporate the IoT safely and realize its full value, oil and gas companies must fully secure all technology—ICS as well as IT—and incorporate cybersecurity into their HSE organizations as a safety issue.
Securing technology requires:
Mapping and documenting all networked devices
Implementing policies to secure access to all systems
Monitoring all networks continuously for activity and status
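The first step above, mapping and documenting all networked devices, boils down to comparing what you think is on the network against what is actually there. Here is a minimal sketch of that diff under an assumed setup where a discovery scan yields a set of observed addresses; the IP addresses are illustrative only.

```python
# Hedged sketch of device mapping: diff the documented inventory
# against devices actually observed on the network.
# Addresses are made up for the example.
documented = {"10.0.0.1", "10.0.0.2", "10.0.0.5"}
observed = {"10.0.0.1", "10.0.0.2", "10.0.0.9"}  # e.g. from a discovery scan

unknown = sorted(observed - documented)   # shadow or unmanaged devices
missing = sorted(documented - observed)   # documented but not seen

print("Unknown devices:", unknown)
print("Documented but unseen:", missing)
```

Unknown devices are candidates for shadow IT or rogue hardware, while documented-but-unseen entries may be stale inventory records; both findings feed back into keeping the device map accurate.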
By incorporating these practices into their organizations, oil and gas companies can more securely, efficiently and effectively introduce new technologies into their current ICS and IT organizations.
How Tenable can help
The energy sector’s commitment to IoT is expanding along with the cybersecurity risks. Learn about how SecurityCenter Continuous View™ can help you take decisive action against risks by providing continuous visibility of all your assets and critical context surrounding threats.