
Mr. Robot and Tenable


At Tenable, we’re huge fans of the USA Network series Mr. Robot. The show follows Elliot Alderson, a talented yet troubled security engineer at Allsafe Cybersecurity who connects with people by “hacking” them. He becomes involved with a hacktivist group, fsociety, whose goal is to cancel all debts by taking down the largest company in the world, Evil Corp. In this blog, we will discuss several of the attacks demonstrated in the show, as well as how Tenable’s products can detect those attacks before, or as soon as, they happen.

The attacks in Mr. Robot are realistic and can seem intimidating

Episode 2: ones-and-zer0es.mpeg – Malicious mixtape CD

In this episode, a malicious actor from the mysterious Dark Army group pretends to be an aspiring rapper handing out his mixtape on the street corner. Ollie, one of Elliot’s colleagues at Allsafe Cybersecurity, takes the CD home and puts it in his computer, but when he does, his computer freezes up and then ejects the disc. At the end of the scene, the fake rapper is shown monitoring Ollie’s webcam and types into a chat room, “we’re in.”

Both penetration testers and attackers have used this attack for years. A CD is loaded with the malware and is presented as something important—such as new company training material or an important financial document—and then mailed, dropped outside a building, or simply handed to the victim-to-be. As soon as the victim loads the CD and clicks the file, the regular file may run, but so will the malware.

Using SecurityCenter Continuous View™ and the Log Correlation Engine™ (LCE®), a custom TASL (Tenable Application Scripting Language) script could be written to create a normalized event that correlates a combination of several events:

  • Windows-Drive_Removed
  • Windows-LCE_Client_Detected_Attached_USB_Device
  • Windows-LCE_Client_Detected_Removed_USB_Device
  • Windows-LCE_Client_Detected_Attached_Drive
  • Windows-LCE_Client_Detected_Removed_Drive

with the detection of a new External Netflow connection. That new TASL event would be used in an email alert sent to the incident response team for investigation.
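
TASL itself is Tenable’s proprietary scripting language, so the snippet below is only a rough Python sketch of the correlation logic such a script would implement, not actual TASL: it flags a host where one of the removable-media events above is followed within a short window by a new external NetFlow connection. The event names, window size, and alert hook are illustrative assumptions.

    from datetime import datetime, timedelta

    # Normalized removable-media event names listed above (spellings assumed)
    MEDIA_EVENTS = {
        "Windows-Drive_Removed",
        "Windows-LCE_Client_Detected_Attached_USB_Device",
        "Windows-LCE_Client_Detected_Removed_USB_Device",
        "Windows-LCE_Client_Detected_Attached_Drive",
        "Windows-LCE_Client_Detected_Removed_Drive",
    }
    WINDOW = timedelta(minutes=15)  # correlation window (assumption)

    last_media_event = {}  # host -> timestamp of most recent removable-media event

    def handle_event(host, event_name, timestamp, alert):
        """Correlate removable-media activity with a new outbound connection."""
        if event_name in MEDIA_EVENTS:
            last_media_event[host] = timestamp
        elif event_name == "New_External_Netflow_Connection":  # assumed event name
            seen = last_media_event.get(host)
            if seen is not None and timestamp - seen <= WINDOW:
                # A real TASL script would raise a new normalized event here;
                # this sketch just calls an alert hook (e.g. email the IR team).
                alert(f"{host}: outbound connection within {WINDOW} of removable media use")

    # Example usage
    handle_event("10.0.0.42", "Windows-LCE_Client_Detected_Attached_USB_Device",
                 datetime(2015, 7, 1, 9, 0), print)
    handle_event("10.0.0.42", "New_External_Netflow_Connection",
                 datetime(2015, 7, 1, 9, 5), print)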

Episode 4: da3m0ns.mp4 – Raspberry Pi to control HVAC

In their quest to destroy Evil Corp’s backups at the ultra-secure Steel Mountain data center, Elliot and the fsociety crew penetrate the building and connect a Raspberry Pi to a networked HVAC controller. Once it is connected, the Pi calls back to their headquarters and they are in.

Network implants are a tool commonly used by red teams as a method to establish long-term, covert presence on a network. Due to their often small form factor, they can easily be hidden out of sight and can remain undetected on a network for months.

With SecurityCenter CV and the Passive Vulnerability Scanner™ (PVS™), PVS detects when a new system appears on the network and sends a New MAC Address event to LCE. SecurityCenter then uses the New MAC Address event and PVS plugin IDs to initiate a scan and send an email to the IT operations team.
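
As a rough illustration of that workflow (not SecurityCenter’s actual alerting engine), here is a hypothetical Python handler that reacts to a New MAC Address event by requesting a scan over a REST call and emailing IT operations. The endpoint path, request fields, hostnames, and addresses are all assumptions for the sketch.

    import smtplib
    from email.message import EmailMessage

    import requests  # third-party HTTP client

    SC_URL = "https://securitycenter.example.com"  # hypothetical SecurityCenter host
    SCAN_ENDPOINT = SC_URL + "/rest/scan"          # assumed REST path

    def on_new_mac_address(mac, ip, session_token):
        """React to a PVS 'New MAC Address' event: request a scan, then notify IT ops."""
        # 1. Ask SecurityCenter to scan the newly seen host (field names are assumptions).
        requests.post(
            SCAN_ENDPOINT,
            headers={"X-SecurityCenter": session_token},
            json={"name": "New host " + mac, "ipList": ip, "type": "policy"},
        )

        # 2. Email the IT operations team so someone investigates the device.
        msg = EmailMessage()
        msg["Subject"] = "New device on network: " + mac + " (" + ip + ")"
        msg["From"] = "lce-alerts@example.com"
        msg["To"] = "it-ops@example.com"
        msg.set_content("PVS observed a previously unknown MAC address; a scan was queued.")
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)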

Episode 6: br4ve-trave1er.asf – Parking lot USB drop

Facing pressure from drug dealer Shayla’s violent supplier, Fernando, Elliot is forced to break into the network of the prison where Fernando is being held in order to bust him out. To accomplish this, Elliot recruits fellow fsociety member Darlene to drop malware-laden USB drives in the prison parking lot. A guard picks up one of the drives on his way in, plugs it into his workstation, and the executable runs. Even though anti-virus catches the file, Elliot has a shell on the machine until the guard physically unplugs the power.

Attackers can easily trick unsuspecting victims into picking up thumb drives and plugging them into their computers, then compromise those machines using methods like USB keystroke injection, BadUSB, and backdoored files. For example, in corporate America, around the time annual salary raises are announced, an attacker could drop a thumb drive in the company’s parking lot. The attacker could backdoor a PDF and title it “2015 Annual Salary Raises,” enticing a naive user to open the file and even potentially distribute it.

SecurityCenter CV and LCE can easily detect this activity using the method described in the first scenario from Episode 2. Tenable’s solutions also offer other methods for tracking such malicious behavior:

• SecurityCenter can audit systems to ensure that USB media is prohibited and to call out systems deviating from this policy (a minimal example of such a check follows this list).
• SecurityCenter can detect systems where USB media has been used during the course of normal scanning.
• Tenable offers agents that can detect USB media usage in real time.
• SecurityCenter offers analytics (e.g., "Removable Media and Content Audits") that make it easy to see an organization's exposure to these problems.
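
For the first bullet, one concrete check an audit can perform on Windows is whether the USB mass-storage driver is disabled (the USBSTOR service’s Start value set to 4). Tenable audit files express such checks declaratively; the Python sketch below, which only runs on Windows, simply illustrates the underlying test.

    import winreg  # Windows-only standard library module

    def usb_storage_disabled():
        """Return True if the USB mass-storage driver is disabled (USBSTOR Start == 4)."""
        key_path = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            start_value, _ = winreg.QueryValueEx(key, "Start")
        return start_value == 4

    if __name__ == "__main__":
        status = "compliant" if usb_storage_disabled() else "DEVIATES from policy"
        print("USB mass storage policy: " + status)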

    The attacks in Mr. Robot are realistic and can seem intimidating at first, but with proper network monitoring and auditing, as well as user awareness, most can be thwarted.

    Thanks to Cody Dumont and Corey Bodzin for their contributions to this article.


    The Security Model is Broken, Part 5: IoT Security

    Billions of IoT devices are not secure

The Internet of Things (IoT) is rapidly growing, but security is lagging behind. Millions of cars and almost a billion smart phones are vulnerable to some sort of hacking. Gartner estimates that there will be 26 billion IoT devices by 2020. How many vulnerable devices will we have by then? As Yogi Berra said, it’s déjà vu all over again.

    There will be 26 billion IoT devices by 2020

    This year, one American car manufacturer recalled 1.4 million hackable cars; researchers successfully hacked automobiles made by an electric car maker; and the Stagefright vulnerability exposed almost a billion Android smart phones.

To its credit, the electric car maker anticipated that vulnerabilities could occur: it had OTA (over-the-air/WiFi) patching capabilities and has already patched its vehicles. The American car manufacturer does not have OTA capabilities and is distributing USB drives to vehicle owners so they can patch their cars themselves. And Google, which did not have a coordinated patching process in place with its manufacturers, scrambled to get a patch out and has since instituted a monthly patching process in the wake of the Stagefright vulnerability.

More frightening (pun intended) are the results of the 2014 HP IoT Research study, which highlight alarming rates of vulnerabilities in the IoT devices HP tested. Most devices had vulnerabilities related to cross-site scripting, weak credentials and password schemes, unencrypted transmissions, personal information collection, or account enumeration weaknesses. OWASP also has a project to identify and maintain the top ten security problems related to IoT. Again, the usual suspects figured prominently: insecure cloud/web interfaces, lack of encryption, and insufficient authentication.

    This is not a very good showing, but it doesn’t have to be this way.

    In January of this year, the FTC released a paper outlining IoT security and privacy risks. They proposed both legislation for general security and privacy protection, and supported best practices for IoT security. I believe this is a necessary first step to hopefully developing regulatory standards for IoT devices.

    Also, several industry groups such as the IoT Consortium, Industrial Internet Consortium, Allseen Alliance, and others are working on IoT standards including the security components.

    While these efforts should be supported and continue, the necessary safeguards to secure IoT devices are well known by now. Vulnerability assessment, patching capabilities, strong passwords, intrusion lockout, encrypted transmissions, input validation, and privacy safeguards are well established and fundamental security practices.

    The necessary safeguards to secure IoT devices are well known by now

    These safeguards should be implemented from the get-go of IoT device deployment. If they are not, then we haven’t learned from past mistakes.

    Seeing the Forest and the Trees

    A change in perspective can reveal new threats

We’ve all heard the saying about missing the forest for the trees many times, and network security professionals tend to “get into the weeds” when identifying and remediating threats. While it’s true that we can solve 80% of our network vulnerability issues by addressing the top 10% of our attacks, if we’re only looking at those top 10%, then we’re missing 90% of the forest. When we overlook so much, we lose visibility into key indicators of possible compromise, as well as the big picture.

We can solve 80% of our network vulnerability issues by addressing the top 10% of our attacks. But if we’re only looking at those top 10%, then we’re missing 90% of the forest.

    We are all forced to do more with less, and that includes less time and resources to devote to incident response and remediation. However, with regular news about espionageware and other “new” malware that has been in place and active for years, it’s time we start looking at the lower end of our network issues.

    The bottom 10%

All modern malware must communicate with a controller, and this is even more critical in the realm of espionageware, where the owners/authors are trying to extract data out of the targeted environment. We learned from the mass mailing worms of the past that high volume traffic meant quick detection, which translated to easy malware identification and response. Malware authors soon learned to avoid quick detection and facilitate longer periods of data extraction; they knew how important it was to keep these threats “low key” and hide as much traffic as possible in the daily network pattern. The result? Hidden threats would never rise to the top 10%, and if we were not looking elsewhere, we would never catch that traffic or those threats.

    When we look at the bottom 10% of issues, we start to see a more complete picture

    When the equation is turned on its head and we look at the bottom 10% of issues, we start to see a more complete picture. While not everything on the bottom can or will be attributed to malware, we can identify things like malfunctioning (chatty) network cards, transient employee assets, and misconfigurations. While these non-malware identifications take time to process, what’s left is activity that can lead to early identification of “new” malware.

    Establishing a baseline

You may be thinking: how are we going to find the time to identify everything and define what is “abnormal”? When done correctly, the majority of the time is spent up front, establishing a baseline and determining what is normal for your environment. The time requirements, as well as the baselines, are unique to each environment and can vary from as little as an hour a week to hours per day. Once you’ve obtained a baseline, it’s a process of continuously monitoring for changes to that baseline. This continuous monitoring should already be in progress, looking for intrusions and other attacks, but now with the additional requirement of looking for abnormalities. In reality, the additional detection and identification effort is minimal once network support staff have done the initial legwork.

    Once you’ve been able to obtain a baseline, it’s a process of continuously monitoring for changes to that baseline

The process of continuous network monitoring should be continuously updating the network norm—whether formally or informally. With this type of monitoring, you see the ebb and flow of network traffic, and your analysts recognize when things like payroll processing may take more bandwidth or processor power. However, the process should raise red flags when a device starts making regular weekly connections to an out-of-organization or out-of-country host, if that is not normal behavior. This single machine may be the gateway to a new attack against the organization, and the traffic must be identified as either benign or hostile. However, this traffic will never be classified if it’s not detected or monitored.
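
As a rough sketch of that red-flag logic (illustrative only, not a Tenable feature), the following compares each device’s outbound destinations against a learned baseline and flags any destination the device has never contacted before; the data source and learning period are assumptions.

    from collections import defaultdict

    # baseline: device -> set of external destinations seen during the learning period
    baseline = defaultdict(set)

    def learn(device, destination):
        """Record an observed external destination in the device's baseline."""
        baseline[device].add(destination)

    def is_abnormal(device, destination):
        """True if this device has never contacted this destination before."""
        return destination not in baseline[device]

    # Example: a payroll server normally talks to two processors; a new host appears.
    learn("payroll01", "processor-a.example.net")
    learn("payroll01", "processor-b.example.net")
    print(is_abnormal("payroll01", "203.0.113.50"))  # True -> investigate, then classify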

    Detecting malware through abnormal activity

    The easiest way to discover malware is through the abnormal activities or symptoms that are inflicted on a compromised host or network

    If a covert piece of malware communicates with its command and control server or repository once every other day, how much proprietary company data can be exfiltrated in a week? If that same malware isn’t detected for 18 months, how much information is lost? By cutting back the time from infection to discovery, the time and amount of data being extracted is also cut down. For every piece of malware, there has to be a first discovery, and that is most often done by companies that are compromised, not malware researchers. The easiest way to discover malware is through the abnormal activities or symptoms that are inflicted on a compromised host or network.
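
To put rough, purely illustrative numbers on those questions, assume the malware moves 50 MB on each every-other-day connection:

    mb_per_transfer = 50        # assumed exfiltration per connection
    transfers_per_week = 3.5    # one connection every other day
    weeks_in_18_months = 78

    weekly_mb = mb_per_transfer * transfers_per_week      # 175 MB per week
    total_gb = weekly_mb * weeks_in_18_months / 1024      # roughly 13 GB over 18 months
    print(f"{weekly_mb:.0f} MB/week, about {total_gb:.1f} GB over 18 months")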

    Trees in the forest

    While the majority of the bottom 10% of network issues will not be related to malware, that 10% gives us a more complete “big picture” of our networks. These long forgotten “trees” in our network “forest” have some tales to tell, if we’re willing to listen. Some of these tales will be of strangers in our midst, which must be addressed for the safety of the organization. These warning signs can lead us to earlier detection, saving the organization money and protecting our data. By neglecting the bottom 10%, we are overlooking some of the biggest sources of actionable intelligence on our networks.

    Other resources

We take a deeper dive into this subject in several whitepapers that address the detection of malware and abnormalities with SecurityCenter™ and Nessus®.

    These free resources are available in the Tenable Network Security Resource Library. We also host a discussion forum for Indicators of Compromise and Malware—join the conversation!

    Scaling up Tenable


    Discerning CISOs worldwide increasingly view Tenable products as the gold standard for centrally managing and improving their security posture amidst an ever-worsening threat landscape.

    And while we secure some of the world’s largest and most complex networks, we’ve been thinking about how to best scale up the company to better serve an even broader base of customers worldwide.

    We have decided to partner with two top-notch venture capital and growth equity firms, Insight Venture Partners and Accel, and to accept a significant investment. Accel previously invested in Tenable in 2012.

    The investment will enable Tenable to grow our team, enhance next-generation product security software, and expand globally.

    2015 has been a banner year for Tenable:

    • We added 200 employees
    • We more than doubled our sales force and number of resellers worldwide
    • We grew billings over 50% last year and have a run rate that exceeds $100 million

    Tenable has the most widely-deployed vulnerability and threat management platform on the security market today with more than one million Tenable users and 20,000 customers worldwide. From some of the most recognizable brands in the world such as Visa, Deloitte, BMW, Adidas and Microsoft, to all arms of the U.S. Department of Defense, customers rely on Tenable’s solutions to continuously monitor their networks for vulnerabilities and malware.

Our entities currently serve customers in 10 countries around the world, and we have immediate plans to expand into 10 more countries, including China, Mexico, Ireland, Sweden, the United Arab Emirates and India.

    This latest round of investment, the largest funding round to date for any private security company, will help us expand our global footprint and further accelerate product development for our next generation security software solutions.

    Thank you,

    Ron Gula, CEO

    Security Professionals Give Global Cybersecurity a “C” Grade


    In a new survey conducted by Tenable in partnership with research firm CyberEdge Group, 504 information security practitioners worldwide were asked a series of questions to calculate their overall confidence levels that the world’s cyber defenses are meeting expectations. Global cybersecurity readiness earned a score of just 76%, or a C average. And approximately 40% of respondents said they feel “about the same” or “more pessimistic” about their organizations’ ability to defend against cyber attacks compared to last year.

    Security professionals are clearly overwhelmed by an increasingly complex and challenging threat environment, and shaken by the continuing occurrence of major breaches. In particular, security pros named their biggest challenges as cloud infrastructures, mobile device proliferation, and the level of board member involvement with cyber risk issues. However, most respondents do believe that they have effective cybersecurity tools in place.

    Global cybersecurity readiness earned a score of just 76%

The survey includes responses from 6 countries and 7 industries, including government, education, healthcare and financial services. Practitioners in the US rated the state of national cybersecurity at 80%, or a B-, while Australian respondents perceived their country’s cyber readiness at 69%. Scores are also calculated for specific industry verticals.

    You can access the full report, 2016 Global Cybersecurity Assurance Report Card, on the Tenable website. You can also register for an upcoming December webinar, 2016 Cybersecurity Assurance Report Card: Key Insights and Takeaways (US, APAC, or EMEA). Cris Thomas, Tenable Strategist, and Steve Piper, CEO of the CyberEdge Group, will interpret the survey results and help you benchmark the effectiveness of your practices.

    Drifting Out of Compliance? You’re Not Alone


This is the first installment in a “Drifting Out Of Compliance” series, where I take a closer look at organizational approaches to compliance, the resulting challenges that impact organizations’ ability to demonstrate point-in-time compliance, and the challenges of making the shift from a point-in-time compliance mentality to a continuous compliance one. A well-thought-out, risk-based approach to establishing a comprehensive security program is the way to go, with compliance following as a natural result. A security first, compliance second mentality is ideal. To that end, the authors of compliance frameworks usually clarify that their guidance is only intended as a baseline level of security. However, many organizations still struggle to attain even this baseline level of security. For those organizations, being held accountable to these compliance frameworks results in implementing greater levels of security than they would otherwise achieve.

    A security first, compliance second mentality is ideal

Every year, many companies undergo a compliance assessment as called for by FISMA and PCI. In the case of NERC compliance, in addition to an auditor’s spot check, some companies self-report their compliance status. According to the Verizon 2015 PCI Report, most companies are passing annual PCI assessments. However, Verizon also reports that 80% of those companies fail compliance in subsequent assessments. In the case of self-reported NERC violations, many violations have persisted for months, even years. In the case of HIPAA, many compliance violations are personnel related—whether intentional or unintentional—and show no signs of improving. Regardless of the industry or the regulations, even if companies demonstrate point-in-time compliance, they very quickly drift back out of compliance.

    Even if companies demonstrate point-in-time compliance, they very quickly drift back out of compliance

    Why is this? I wanted to better understand the reasons behind this, not from a “which compliance requirements are the hardest to sustain?” perspective but rather to find out “how do businesses approach compliance?” Based on conversations with professionals in the security industry—security directors, CISOs, QSAs, and penetration testers—I offer the following findings about why companies drift out of compliance.

    Reason #1: Project mindset

    In the project mindset approach to compliance, companies put together a temporary project team whose goal is to prepare for a compliance assessment. Once they have successfully passed the annual assessment, the project team is disbanded.

    This approach is very common. Jeff Man, a Qualified Security Assessor for 10 years, estimates that two-thirds of the companies he worked with approached PCI DSS compliance with this project mindset.

In conversations with an IT security director from a quick service restaurant, I discovered this same project-based approach. She reported significant costs associated with flying people into headquarters for three weeks every year to prepare for the annual PCI DSS assessment, pulling them away from their usual business activities. And though a vendor’s payments security product helped them reduce time and costs associated with PCI compliance, the lion’s share of their savings was associated with this three-week project. Clearly, demonstrating compliance on an ongoing basis throughout the year was not part of their approach to compliance.

    Rather than viewing PCI compliance as something to sustain throughout the year, most companies suspend their core business activities for several weeks to focus on passing an annual assessment. However, as the Verizon report attests, 80% of these companies drift out of compliance soon thereafter.

Reason #2: It’s not just about technology

    Many companies rely too heavily on technology to solve their compliance headaches. Yes, technology controls are a necessary part of securing sensitive data and demonstrating compliance, but simply purchasing technology without building processes around that technology will only get you so far. This approach may have worked when compliance mandates were first introduced, but as the standard of due care has risen, simply implementing technology is not enough.

    For example, one company purchased millions of dollars of equipment and yet, two and a half years later, half of that same equipment remained in storage: the technology became “shelfware,” providing no security value. Other companies trust security automation so much that they take the “set it and forget it” approach: “if I set it up and let it run, then I’m being compliant.” However, without implementing processes around technology to fill gaps—between point solutions, between inter-departmental workflows—the technology will never be optimized.

    Reason #3: Reactionary cycles

    Some IT security professionals report that their security departments are stuck in reactionary problem fixing cycles—cleaning viruses off desktops, dealing with password lockouts, mitigating data breaches, responding to unannounced audits.

In one case, a company discovered a breach and conducted a forensic investigation, only to find that logging had not been turned on in that area of the network. Not only were they unable to find the ingress and egress points for the attack, but they were never able to identify what type of data was exfiltrated. Ironically, these are the same types of reactionary cycles that take time away from continuous improvement efforts which could reduce future reactionary cycles.

Just a few examples of continuous improvements include better defining and refining processes, conducting data mapping exercises, and working with system owners (to know where sensitive data resides). Some continuous monitoring efforts include identifying and verifying which security controls are in place, whether they’re positioned optimally on the network, and making sure they’re operating as expected. These are just a few types of due diligence and forward-thinking efforts which have the potential to pull you out of reactionary cycles and to help you work more effectively and efficiently.

    You are not alone

If you are drifting out of compliance, you are not alone. All of the challenges associated with attaining and sustaining compliance highlight the need to take a broader, more unified approach to seeing what is happening within the organization, across networks and across devices. It's time to move away from a point-in-time, checkbox mentality to a more persistent, bigger picture continuous compliance mentality. This includes both continuous process improvement and continuous network monitoring. Along the way, you may find opportunities that not only introduce efficiencies but also improve morale, reduce attrition and perhaps even save time and money.

    It's time to move away from a point-in-time, checkbox mentality to a more persistent, bigger picture continuous compliance mentality

In my next blog, I will take a deeper look at organizational challenges that impede the shift from a point-in-time compliance mentality to a continuous compliance mentality. If you have any compliance stories or organizational challenges you’d like to share, please email them to rkral@tenable.com. Let’s move towards more sustainable compliance and build a stronger security posture along the way.

    Creating Meaningful Metrics


In today’s information security departments, no matter what the maturity level, metrics are almost always a deliverable required by upper management to gauge the security posture of the company as well as department progress. What I have seen in my various positions within information security is that, oftentimes, these metrics fail to provide real value to the organization. If a security department is running a report out of a tool, it is really only providing information within the limitations of that tool’s reporting capability. Even so, what does 1,000 “Medium” level vulnerabilities really mean?

Oftentimes, metrics fail to provide real value to the organization

    The importance of metrics

In their simplest form, metrics are merely a type of measurement, and what they measure is determined within various tools’ configurations. I have found risk, threat counts, and severity to be the most common metrics within the information security realm. However, what does that mean to your environment? I’m certain most individuals reading this blog are all too familiar with the infamous weekly or monthly “status meeting,” where everyone in a department comes together to discuss current projects, share any relevant news, and share “vulnerability reporting.” Having worked in a large Fortune 500 company, I would find myself in these meetings staring at absurd figures, wondering how they even pertained to the security of our environment. “This month our Windows servers had 3,291 vulnerabilities, down from 5,026 after patching. Linux servers had 729 vulnerabilities, down from 944 after updates.” So, are we really more secure because the more recent number is smaller? Are we really 35% more secure in our Windows servers because of the numerical difference in vulnerabilities?

    Ranking the criticality of assets, vulnerabilities and metrics

Without understanding vulnerabilities and their inherent risks as they pertain to your environment, these numbers are little more than mathematical equations. 35% fewer total vulnerabilities does not equal a 35% improvement in the security of your environment if 25% of those vulnerabilities were “low risk” informational findings and the other 10% were open FTP ports that allow anonymous logins. Senior-level engineers and management need to identify, and then classify, their assets by criticality. After doing so, not only would they have quantifiable information being reported, but they would also have quantifiable information with meaning that is unique to their environment. Once this is completed, a security program may begin to truly measure risk posture. For example, Company A has 500 endpoints discovered in SecurityCenter™. The first step is simply to identify what the assets are. Are they test labs, or are they critical servers holding financial data? Once the assets have been sorted by function, criticality or other criteria, the company can perform various types of threat modeling.

For example, take a group of servers responsible for hosting a company’s internal instant messaging application. A report is generated and the servers are found to have a total of 10 “high severity” vulnerabilities. Do these classify as critical systems requiring 100% uptime? Perhaps, or maybe the company chooses to accept a higher risk appetite for this group of servers. While they’re very important, it could be argued that they’re not exactly critical to the success of the company. As such, perhaps those 10 high-severity vulnerabilities have a longer acceptable timeframe to remain on the system before being remediated.

Consider this same question for a group of servers supporting the accounting department. Does the threat to the company remain the same? I’m fairly confident most organizations would say no. I’m also sure that few would dispute that 10 high-severity vulnerabilities on servers supporting a financial department and 10 high-severity vulnerabilities on servers supporting an instant messaging application have two completely different levels of importance to a company. That is exactly why senior-level engineers and management need to get together before generating metrics and running reports on groups of assets.

    Senior-level engineers and management need to get together before generating metrics and running reports on groups of assets
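
One way to picture this is to weight findings by both severity and the criticality the business assigned to the asset group, rather than reporting a raw count. The weights below are illustrative assumptions, not a Tenable formula:

    SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}  # illustrative weights
    ASSET_CRITICALITY = {"im-servers": 2, "accounting-servers": 9}        # assigned by the business

    findings = [
        {"asset": "im-servers", "severity": "high", "count": 10},
        {"asset": "accounting-servers", "severity": "high", "count": 10},
    ]

    def risk_score(finding):
        """Weight a finding group by severity and by the asset group's criticality."""
        return (SEVERITY_WEIGHT[finding["severity"]]
                * ASSET_CRITICALITY[finding["asset"]]
                * finding["count"])

    # The same "10 high-severity vulnerabilities" score very differently per asset group.
    for f in findings:
        print(f["asset"], risk_score(f))          # im-servers 140, accounting-servers 630
    print("total weighted risk:", sum(risk_score(f) for f in findings))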

    Creating meaningful metrics

    There are many advantages to identifying and classifying the systems in your organization. Save yourself confusion in the future by taking some of these steps to avoid reports that simply spit out a total count of vulnerabilities:

    • Identify: Know what you’re scanning. For example, does a Windows Server 2008 with an IP address of 10.10.10.4 support the marketing or finance department?
    • Analyze: Know the risk to your organization. How much damage will be done if this system becomes compromised?
    • Categorize: Obtain a high-level view of your security posture. Identify which assets belong to each area of your organization, and assign levels of criticality based on those findings.
• Execute: Generate meaningful metrics. Once the previous steps have been completed, you will have a solid foundation to generate reports with quantifiable and meaningful vulnerability and risk information as they pertain to your organization.

Make metrics meaningful to your organization

    Make metrics meaningful to your organization. Identify your assets, analyze their criticality to your organization, categorize their areas of use, and execute metrics and reports based on what is important to your organization. Visit our Security Metrics page to learn more about creating meaningful metrics with Tenable products.

    Answering Your Questions about Nessus Cloud


    I was fortunate to attend several Tenable User Group meetings in the Northeast a few weeks ago. One of the topics we discussed at the meetings was Nessus® Cloud – what it is, what it does, and how it works. The following questions came up several times in the meetings, so we’re sharing here in case they’re on your mind as well.

    External or internal scanning?

    Most people we met at the User Group meetings were familiar with Nessus Cloud for doing external scans and knew that Nessus Cloud is also a PCI DSS Approved Scanning Vendor service for those organizations that need to meet PCI external scanning requirements. What was new for some people is that through the Tenable Cloud, Nessus Cloud customers have several worldwide scanners available to them. While you can select any cloud scanner to scan externally-facing assets, selecting a scanner that’s geographically close to those assets can deliver results faster.

Nessus Cloud customers have several worldwide scanners available to them

New scanner pools

    For internal scanning, we talked about two options available to Nessus Cloud customers. First, every Nessus Cloud license comes with one or more internal scanners, so customers can deploy these secondary scanners on their internal networks. It’s also possible with Nessus Cloud to deploy Nessus Agents on individual target systems. We expect that Nessus Cloud customers will use a mix of external, internal and agents to scan their environments. This Deploying Nessus Cloud video gives a good overview of the different external and internal scanning options available and when it makes sense to use one option over another.

    Where does Tenable host Nessus Cloud?

    We host the Tenable Cloud on Amazon Web Services (AWS)

We host the Tenable Cloud on Amazon Web Services (AWS), which offers Nessus Cloud customers many useful benefits, as our VP of Cloud Services, Sean Molloy, outlined in this Nessus in the Cloud blog article earlier this year. For example:

    • Given the sheer size of AWS, there’s no upper bound limit on growth.
    • With the regional coverage of AWS, customers can select where their data is pinned.
    • We’re able to implement multiple layers of security to better protect the Tenable Cloud and our customers’ data.

While scalability of the cloud is very much a given, the last two points above both sparked further discussion.

    How do I keep my data in a specific region?

    When we talked about AWS and its extensive regional coverage, the question of how to “pin” or designate that your data be archived in a specific region came up multiple times. It’s also one that comes up frequently from our friends in Europe, even more so since the European Court of Justice invalidated the US-EU Safe Harbor agreement back in October, making it more important than ever for organizations managing data within Europe to keep that data within EU boundaries.

    For Nessus Cloud customers, it is very easy to pin data to a specific region

    For Nessus Cloud customers, it is very easy to pin data to a specific region. When an account is first provisioned, a customer selects an AWS region for data storage. Current choices are:

    • US / N. Virginia
    • EU / Frankfurt
    • APAC / Singapore

    Again, specific to Europe, as all customer data is stored in secure, regional AWS services, the certifications for EU data protection that AWS maintains apply to the Tenable Cloud. More information is available from Amazon.

    How is my data protected?

    The protection of customers’ data in the Tenable Cloud is our top priority, so we're happy when we get the opportunity to talk about the security of our application and our customers’ data. We outline our policies in the Tenable Data Protection Practices whitepaper on our website, and our cloud services team is always available to discuss the policies in more detail. The whitepaper covers Tenable Cloud security policies from several different perspectives:

    • For physical security, since the Tenable Cloud uses data centers and services from Amazon Web Services (AWS) to provide and deliver Nessus Cloud to customers, AWS is responsible for policies and controls for physical security of their data centers. They offer documentation on their practices via their website.
    • For application security, the Tenable product team follows a number of best practices to ensure security of software applications they develop and deliver.
    • Data security practices include encryption for data in all states in the Tenable Cloud and isolation of data so that customers cannot access data other than their own.

    Those are a few highlights. Other areas include data retention, replication and disaster recovery policies that all play a role in ensuring the security of the Nessus Cloud application and customer data. Again, for more detail, download the Tenable Data Protection Practices whitepaper.

    Other questions?

    Do you have other questions about Nessus Cloud? If so, they might be answered in the Nessus FAQ on our website. If not, send me an email at dgarey@tenable.com and I’ll answer in another blog article, update the FAQ or get back to you directly.

    If you’re interested in attending an upcoming Tenable User Group meeting, talk to your local Tenable account manager or partner about upcoming events in your region.


    Establishing Relevant Security Metrics, Part 1: What is a Metric?


    Marcus Ranum, a Senior Strategist at Tenable, is a sought-after spokesperson on security metrics. Over the next two weeks, Marcus will share some of his insights in a 5-part video blog series, Establishing Relevant Security Metrics. He will offer advice on creating a security metrics program for your organization and selecting the most important metrics.

    The language of security is metrics

    In this first episode, Marcus defines “metric,” relates security metrics to an organization’s larger business goals, and discusses how data supports information security stories.

    In the next installment, Marcus will discuss “Why keep security metrics?”

    Visit our Security Metrics page to learn more about creating meaningful metrics with Tenable products.

    3 Myths That Impede the Shift Towards Continuous Compliance

    Drifting Out of Compliance, Part 2

    This is the second installment in my Drifting Out of Compliance series, taking a closer look at organizational approaches to compliance and the challenges of shifting from a point-in-time compliance mentality to a continuous compliance one. Although a security first, compliance second approach is best, many organizations still struggle to attain the baseline level of security documented in compliance requirements.

    In the first installment of this series, I pointed out that the point-in-time compliance mentality is commonplace in the marketplace today and manifests itself in several ways:

    • The project mindset: setting up a team to demonstrate compliance at a point in time only
    • The technology-only investment mindset: acquiring prescribed technology with little thought to implementation and process
    • The reactionary mindset: “fire drills” that crop up when an urgent need arises

    A security team could be entrenched in one (or more) of these mindsets without a concerted effort to break the cycle. And such a mentality perpetuates these 3 common compliance myths:

    Myth #1: Demonstrating compliance at a point in time amounts to compliance throughout the year

    The false sense of security resulting from passing an annual assessment, combined with the subsequent and inevitable drift out of compliance over time, sets an organization up for an increased risk of data breaches. According to Verizon, 80% of those that passed their annual PCI assessment drifted out of compliance shortly thereafter, busting this myth wide open. To that end, it is no surprise that the “continuous” concept is becoming a key component in more and more compliance frameworks. More to come on this topic in the next installment of this blog series.

    Myth #2: Reactionary cycles are always productive and without opportunity cost

As many of us have experienced, reactionary cycles build on one another and fight against the key planning concept “build the plan, work the plan.” Ironically, well thought out, forward-thinking planning efforts may reduce future reactionary cycles. In such a culture of reactionary cycles, it’s easy to ask, “Why work a plan, or commit to work, when you know full well there are many more fire drills coming around the corner which are going to trump the plan?” As a result, employees can’t help but resign themselves to a culture of reactionary cycles with no room (or hope) for continuous improvement.

    Myth #3: Processes and technology usage are the same

Perhaps this myth is really an “unconscious assumption.” Yes, technology usage could be considered a process, but take it a step further and consider these questions:

    • How repeatable is that process?
    • Could someone else step in and execute the same process?
    • Is there a system in place that ties one process to another, such as interdepartmental handoffs?
    • Who’s monitoring these processes to ensure all gaps are closed?
    • Are there processes to manage the processes?

To ask questions we all already know the answers to: “Have there been breaches where effective, perfectly capable technologies were in place? Did process gaps play a significant role in a business-crippling data breach?” Prior to a data breach, the value provided by processes may seem intangible and hard to quantify. Only afterwards, after suffering significant losses, does the tangible value of those processes become crystal clear. Consider this:

    • Do you view processes as if they are business assets?
    • Do you think about how to increase the value of those “process assets?”

    Opportunity for process maturity

    There’s plenty of room to build more mature, repeatable, continuous processes

If your organization is like most, there’s plenty of room to build more mature, repeatable, continuous processes. Though security experts are knowledgeable and proficient with security concepts and tools, they may not be as well-versed in process methodologies such as the Capability Maturity Model or Six Sigma. And if they are, are they too consumed by reactionary cycles to put that knowledge to good use? Businesses think about optimizing productivity of personnel and maximizing ROI of their product purchases. Should processes be viewed any differently?

    Consider the following Six Sigma doctrine:

    Continuous efforts to achieve stable and predictable process results (e.g., by reducing process variation) are of vital importance to business success.

    Just as we need advanced network monitoring technology to continuously monitor our networks and to monitor the effectiveness of our security controls, we also need to continuously mature and improve our “process assets.” Without process maturity, closing the gap between siloed processes is hit or miss, reactionary cycles will rule the roost, and data breaches due to weak processes will continue. Without valuing and investing in process as an integral part of optimizing technology usage, the challenge of shifting from a point-in-time compliance mentality to a continuous compliance one will be great indeed.

    We need to continuously mature and improve our process assets

    Check back for the next installment in this series when I will take a look at how the “continuous” concept has become part of the standard of due care. If you have any compliance stories or organizational challenges you’d like to share, I’d like to hear about them. Email me at rkral@tenable.com.

     

    More Understanding PCI DSS Scanning Requirements

Yes, Virginia, There Are Internal Network Scanning Requirements for PCI

    Recently, Tenable published a blog, Understanding PCI DSS Scanning Requirements, which provided an overview of the three distinct network vulnerability scanning requirements found in the Payment Card Industry Data Security Standard (PCI DSS). The blog primarily focused on using Nessus® Cloud for your external network vulnerability scanning to meet the PCI DSS 11.2.2 requirement. [Tenable, with Nessus Cloud, is an Approved Scanning Vendor (ASV) certified by the Payment Card Industry Security Standards Council (PCI SSC) to perform external network vulnerability scanning of a company’s perimeter, or Internet facing systems.]

    There is often confusion about the vulnerability scanning requirements for PCI, most likely because there is so much focus on the external vulnerability scanning. Why? External scanning is the only one of the three scanning requirements that must be performed by an authorized third party (an ASV). And the vast majority of “small” merchants and service providers are allowed to conduct their own compliance validation by (1) submitting the appropriate Self-Assessment Questionnaire (SAQ) and corresponding Attestation of Compliance (AoC), and (2) submitting evidence of four quarterly (passing) external vulnerability scans performed by an ASV.

In this blog, I will focus on the other two network vulnerability scanning requirements: (1) internal network vulnerability scanning, and (2) internal/external network vulnerability scanning after “significant changes” to your cardholder data environment (CDE). I will also address how you can meet these requirements with Nessus Professional, Nessus Manager/Cloud, SecurityCenter™, or SecurityCenter Continuous View™.

    Who has to perform internal vulnerability scans for PCI?

In PCI terms, there are two types of entities: your company is either a Merchant (accepting debit/credit cards for the payment of goods and services you provide) that “transmits, processes, or stores cardholder data,” or a Service Provider (a catch-all category that includes any other company that supports payment card authorization, settlement, and post-settlement activities, or more simply could “impact the security of [your customers’] cardholder data”).

    At a minimum, if you are a Merchant, and you either validate compliance with a QSA company (producing a Report on Compliance or RoC) or you self-assess using either SAQ-C or SAQ-D, then you are required to perform quarterly internal vulnerability scans and rescans as needed to resolve all “high-risk” vulnerabilities in your internal network.

    If you are a Service Provider, you are required to perform quarterly internal vulnerability scanning regardless of whether you validate compliance with a QSA or by way of a Self-Assessment Questionnaire.

    Note: Even if your payment processing is performed primarily by a third party and/or your systems are hosted in the cloud, you are still responsible for validating compliance with the PCI DSS, which includes responsibility for vulnerability scanning as described above. If you’re not sure if this applies to you – ask! Don’t assume that the third party company you’ve engaged is doing it for you.

    If you’ve determined that you need to perform internal vulnerability scanning, let’s look at the details of that requirement.

    Internal vulnerability scanning (PCI DSS 11.2.1)

    The PCI SSC provides a definition for an internal scan:

    Refers to a vulnerability scan conducted from inside the logical network perimeter on all internal-facing hosts that are within or provide a path to an entity’s cardholder data environment (CDE).

    The PCI DSS section that deals with network vulnerability scanning is requirement 11.2:

    11.2 Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).

    The detailed requirement for internal vulnerability scanning states the following:

    11.2.1 Perform quarterly internal vulnerability scans and rescans as needed, until all “high-risk” vulnerabilities (as identified in Requirement 6.1) are resolved. Scans must be performed by qualified personnel.

There really isn’t much detail provided in the PCI DSS about what constitutes a valid, or qualifying, internal network vulnerability scan – the Council has pretty much left it to the scan vendors to determine what is appropriate. The few detailed requirements are actually found in the PCI DSS Approved Scanning Vendors Program Guide, which provides the following recommendations:

• Applying the detailed requirements for external vulnerability scanning found in the ASV Program Guide to internal vulnerability scanning programs:
  • Be non-disruptive – no exploitation, denial of service, or degradation of performance
  • Perform host discovery – make a reasonable attempt to find all live systems
  • Perform service discovery – perform a port scan on all TCP ports and all common UDP ports
  • Perform OS and service fingerprinting – primarily used to tailor additional vulnerability discovery
  • Have platform independence – the scanner must cover all commonly used systems
  • Be accurate – confirmed vulnerabilities must be reported, as well as suspected or potential vulnerabilities

    The biggest difference between internal vulnerability scanning and external vulnerability scanning is what the PCI DSS requires in terms of a “passing” scan

The biggest difference between internal vulnerability scanning and external vulnerability scanning is what the PCI DSS requires in terms of a “passing” scan. For external scans, all vulnerabilities rated “Medium” and higher must be remediated; for internal vulnerability scans, only “High” or “Critical” vulnerabilities must be remediated in order to adhere to the requirement.
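
As a small illustration of that difference, the sketch below reads a vulnerability scan export in CSV form and reports whether an internal scan “passes” under the High/Critical rule. The column names are assumptions about the export format, and your own risk rankings under requirement 6.1 (discussed next) would take precedence:

    import csv

    def internal_scan_passes(csv_path):
        """Return (passes, blockers); an internal scan fails only on High/Critical findings."""
        blockers = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("Risk", "").strip() in ("High", "Critical"):
                    blockers.append((row.get("Host", "?"), row.get("Name", "?")))
        return len(blockers) == 0, blockers

    if __name__ == "__main__":
        passes, blockers = internal_scan_passes("internal_scan.csv")
        print("PASS" if passes else "FAIL: %d high/critical findings to remediate" % len(blockers))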

    The risk ranking for internal vulnerability scanning is determined by you the customer

It is also worth noting that the risk ranking for internal vulnerability scanning is determined by you, the customer, in accordance with PCI DSS requirement 6.1, which requires you to “establish a process to identify security vulnerabilities, using reputable outside sources for security vulnerability information, and assign a risk ranking (for example, as 'high,' 'medium,' or 'low') to newly discovered security vulnerabilities.” The biggest advantage here is that you have the opportunity to take an actual risk-based approach to the overall security of your internal network, which means you can take into account limited access, internal segmentation, likelihood of attack from the Internet, and so forth.

There are also a handful of special “automatic failures” for external vulnerability scanning that in some cases do not apply at all to internal scan results (e.g. open access to databases from the Internet or unrestricted DNS zone transfers).

    Vulnerability scanning after significant changes (PCI DSS 11.2.3)

    The third type of vulnerability scanning ... is when significant changes occur to the network environment

    The third type of vulnerability scanning required by the PCI DSS is when significant changes occur to the network environment, particularly the cardholder data environment. The PCI DSS does not require that these scans be performed by an ASV (for external scanning), only that they are performed by qualified personnel.

    What constitutes a significant change? The PCI DSS offers this guidance:

    The determination of what constitutes a significant change is highly dependent on the configuration of a given environment. If an upgrade or modification could allow access to cardholder data or affect the security of the cardholder data environment, then it could be considered significant.

    Scanning an environment after any significant changes are made ensures that changes were completed appropriately such that the security of the environment was not compromised as a result of the change. All system components affected by the change will need to be scanned.

    There is certainly not universal agreement on what constitutes a “significant change” to your environment, but things such as adding systems or system components, updating OS/application versions, applying patches, changing access rules, or adding new logic or functionality are all worthy of consideration. Tenable would suggest erring on the side of caution if for no other reason than the fact that you are allowed to perform unlimited scanning (e.g. there is no additional cost for additional scans performed).

    How Tenable solutions meet these scanning requirements

    Any Tenable product may be used to execute vulnerability scans that meet both the internal and scan-after-significant-change requirements found in PCI DSS 11.2.

    All versions of Nessus (Professional, Manager, Cloud) come with a scan policy template that is acceptable for use in meeting the PCI DSS 11.2.1 Internal vulnerability scanning requirement.

    Internal PCI Network Scan

    This scan policy template provides the minimum functionality required for an internal vulnerability scan; namely, a comprehensive scan of all TCP ports and common UDP ports as well as omitting things like Denial-of-Service attacks or certain conditions that are exclusive to external vulnerability scanning (that our ASV Nessus Cloud solution provides). If you are a SecurityCenter or SecurityCenter Continuous View customer, there is a similar scan policy template available for use with your deployed Nessus scan engines.

    Customers who use Nessus Manager or Nessus Cloud have the added benefit of being able to deploy remote Nessus scan engines to geographically diverse locations such as all of your retail stores or regional offices. Effective vulnerability scanning of remote retail locations has always been a challenge for merchants subject to PCI adherence, but with Nessus Cloud there is no longer the need to attempt internal routing of scan engines and results – you simply manage it all from the cloud.

    Whether you are using Nessus Cloud for your external scanning for PCI today, or you are a long time user of Nessus or SecurityCenter for internal scanning, the good news is that you can use the Tenable solutions that you already love to meet all of your PCI vulnerability scanning requirements – inside, outside, and all over the world.

    For more information:

    Security Issues that Deserve a Logo, Part 1: Glimpse


Since April 2014, a new trend in security has experienced a meteoric rise, with headlines grabbed in both mainstream media and the tech press. Vulnerabilities, once the preserve of the researcher and defender, were suddenly thrust into the limelight when an OpenSSL bug was announced. Previously, the chances of a big security bug being discussed outside of our industry were low, the media being more concerned with outcomes and impacts than with methods. But with a catchy name and a simple vector logo, Heartbleed went viral—and not in the sense we’re used to.

This newfound fame certainly has some advantages; for example, it pushes the plight of plucky patchers onto the agenda of Boards everywhere. But for many, the fire drill caused by a new logo vulnerability can drive the wrong behaviour. A good example is POODLE, which was announced around the same time as MS14-062; due to the catchy name, POODLE got far more attention, even though MS14-062 was arguably the more troublesome flaw.

    The fire drill caused by a new logo vulnerability can be driving the wrong behaviour

    This knee-jerk reaction, driven by disclosure and popularisation of bugs by slapping a name and logo on them, is frustrating when bigger security issues garner far less attention. With that in mind, I decided to create my own security issues that I think actually deserve a cool logo and memorable names in the small hope that these are also propelled in front of the people who set business priorities.

    Over the coming weeks I’ll be discussing vulnerabilities such as Invader, Stutter, Bandit, EagerBeaver and Subversion. But for now, I give you Glimpse.

Like many others in the industry, one of my first jobs was in IT management. Tasked with building out the network to support the growing business and dealing with the far too frequent outages caused by over-utilised systems, I’d occasionally provision and release new platforms. This was some time ago, when the default server build was more physical in nature: virtualisation was something only a few were playing with on the desktop, and cloud services were just hosting providers. With only a handful of systems spun up and released per year, the volume was low enough to allow a naming convention based on famous characters from your favourite novel; I can still remember the IP addresses and names of many of my systems, which read like a who’s who of the literary world.

But that was 15 years ago, and boy have things changed since then. We’ve moved from a few physical servers running many services towards many virtual systems, each running one service or part of the application stack. When the number of IP addresses communicating on the network can burst by 50% per day, it’s no wonder traditional tools are failing to give clarity in complexity.

    Glimpse, the lack of an accurate understanding of systems and data owned by an organisation

    This has led to the security issue that almost every company faces today: Glimpse, the lack of an accurate understanding of the systems and data owned by an organisation. Glimpse impacts almost everyone and has ripple effects throughout the operational environment. Without visibility, data-driven decisions and answers to simple questions are almost impossible. When Heartbleed hit, the major issue wasn’t that OpenSSL 1.0.1b had a huge flaw in it; it was that many struggled to answer the simple question, “Where are we vulnerable to this?”

    Answering “Where are we vulnerable to a major flaw?” should take seconds and have a high level of accuracy. But unfortunately, due to Glimpse, the response is often a guesstimate rather than fact. The security team that answers with “We have 1254 systems affected by Heartbleed, ± 3%” will be trusted by the business far more than the team that simply says, “I don’t know, let me find out for you.”
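    To make the contrast concrete, here is a minimal sketch of the kind of query a complete, current inventory makes trivial. It assumes a hypothetical CSV export named inventory.csv with hostname, ip and openssl_version columns; the file and field names are illustrative only, not a Tenable export format.

```python
import csv

# OpenSSL releases affected by Heartbleed (CVE-2014-0160): 1.0.1 through 1.0.1f
VULNERABLE = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"}

def heartbleed_exposure(path):
    """Return (affected_rows, total_count) for hosts running a vulnerable OpenSSL build."""
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    affected = [r for r in rows if r.get("openssl_version") in VULNERABLE]
    return affected, len(rows)

if __name__ == "__main__":
    affected, total = heartbleed_exposure("inventory.csv")  # hypothetical export
    print(f"{len(affected)} of {total} inventoried systems are exposed to Heartbleed")
    for host in affected:
        print(f"  {host['hostname']} ({host['ip']}) - OpenSSL {host['openssl_version']}")
```

    The point is not the script; it is that the answer is only as good as the inventory feeding it, which is exactly what Glimpse takes away.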

    I recently spoke to the security director of a large retailer, and I asked what he felt his biggest issue was. He replied, “I can tell you exactly how many tins of beans I have in every store around the world and in every warehouse, but I can’t tell you how many servers are currently running.” This is a real issue that deserves a logo, a catchy name, and maybe even a theme song. Glimpse is a real problem that must be replaced with complete discovery and visibility.

    Glimpse is a real problem that must be replaced with complete discovery and visibility

    Find out about other security issues that also deserve the Heartbleed treatment; in my next blog, I’ll introduce you to Subversion.

    Grading Cybersecurity Around the Globe

    Tenable has released its inaugural Global Cybersecurity Assurance Report Card, with research conducted by the CyberEdge Group. The report is based on responses from over five hundred security professionals around the world, who were asked to grade their organizations’ ability to assess cybersecurity risks and to mitigate the threats that can exploit those risks. The results of the survey are pretty interesting and show a wide variety of answers around the world and across industries. Unfortunately, the overall global average grade is just 76%, or an unremarkable “C.”

    About the report

    There are a few things to note about this report. First, we surveyed a broad spectrum of security professionals around the world and did not specifically target our customer base. Second, we only looked at data from industry verticals and countries where we received a statistically significant number of responses; in this way, a small number of responses for a particular area or industry could not make things look worse or better than they actually were. And third, all of the people we included in the survey data came from organizations with more than five hundred employees, which means that the answers are really enterprise-level answers.

    Summary results

    When you look at the data, a few interesting things emerge regardless of which country or industry you examine. Most respondents ranked their ability to secure cloud infrastructure (IaaS, PaaS), cloud applications (SaaS), mobile devices and web apps as weaknesses. But almost everyone ranked securing endpoints and data centers, dealing with insider threats, and communicating with executives as strengths. This could be because endpoints and data centers have been around longer and we have more experience dealing with them, whereas mobile and cloud apps are comparatively much newer aspects of the cybersecurity landscape.

    Results by industry

    Financial services and telecom score the highest grades

    There are some interesting bits when you break things down by industry. Financial services and telecom score the highest grades, while government and education score the lowest. It’s not a huge surprise that financial services and telecom are at the top of the ranking, since these sectors have real money at risk, in some cases very large amounts of it.

    What is surprising is seeing education and government at the bottom of the list

    What is surprising is seeing education and government at the bottom of the list. Education has been a huge target for quite some time, with a rash of online break-ins at colleges dating back to the late ‘90s, targeting students’ personally identifiable information and research intellectual property. Educational institutions also have to deal with large transient populations and a heavy reliance on mobile devices, which may help explain their low ranking. It is also surprising to see government’s low rankings in the report. (Note that government here doesn’t just apply to the US federal government; this is a global report, and it also includes state and regional governments from around the world.) Considering the number of attacks suffered by government agencies, we hoped they would have scored higher.

    Technological challenges

    The number-one challenge facing cybersecurity ... was the overwhelming cyber threat environment

    The report also asked people to rank the challenges they face in attempting to secure their infrastructure. One very interesting result was that a lack of effective security products came in dead last on the list, meaning that people report being generally happy with the security products that are available to them. And lack of budget only scored in the middle of the list; perhaps people have been able to stretch what little budgets they have for so long that they have gotten used to it, or maybe companies are realizing the importance of spending on cybersecurity and have loosened up those budgets a little bit. But the number-one challenge facing cybersecurity as reported by the respondents was the overwhelming cyber threat environment. There is just too much happening too fast for people to keep up with.

    We also asked people how optimistic they were today versus a year ago. A full 90% of respondents said they were the same or more optimistic today. Considering that the number-one challenge was an overwhelming cyber threat environment, it is good to see people keeping a positive outlook.

    For more information

    I encourage you to download the 2016 Global Cybersecurity Assurance Report Card report and infographic yourself to examine the data in more detail. Or you can watch an on-demand webinar about the report findings for the US and Canada, EMEA, or APAC.

    We will be posting more detailed blogs exploring the results in specific industries and technologies.

    And keep an eye out next year as we do this research again; you will have the ability to start looking at year-over-year trends within your own industry.

    Establishing Relevant Security Metrics, Part 2: Why Keep Security Metrics?

    If “the language of security is metrics,” then establishing a sound security metrics program is a necessity. But why should you track metrics? It’s not just about justifying your security team’s work and budget. In Part 2 of Marcus’ metrics series, listen as he discusses problems and opportunities inherent to security metrics.

    Metrics are fun!

    In the next installment, Marcus will address “What are the top security metrics to track?”

    Visit our Security Metrics page to learn more about creating metrics with Tenable products.

    Tenable Announces Free On-Demand Training

    At Tenable, our goal is to help our customers truly secure their IT infrastructure and help them make the most of their investments. Today, we are excited to announce the availability of free on-demand training courses, available to our customers, partners, and the general public. We will also continue to offer instructor-led courses virtually, at Tenable classroom facilities, and at customer and partner locations.

    Complimentary on-demand training

    Tenable’s new on-demand training program includes a valuable sample of training courses pulled from our training catalog. These on-demand training courses are available free of charge to anyone with an Internet connection, and include technical content covering all of Tenable’s products, as well as a fun interactive game booth. The on-demand courses are targeted at individual security analysts, security teams, knowledge workers and other professionals evaluating Tenable’s products. Anyone can watch the available videos on their schedule, at their own pace, in a structured learning environment.

    You can access the complimentary courses on the On-Demand Training page.

    Tenable customers access all education and training via the Support Portal

    If you are an existing Tenable customer under active maintenance or subscription, you will now receive access to all of the available training courses, which include over 10 hours of technical material, free of charge! For access to all complimentary on-demand training, please go to the Tenable Support Portal.

    Tenable partners access all education and training via the Partner Portal

    To enable our valued partner community to gain the skills, knowledge and tools needed to effectively sell and demonstrate Tenable solutions to their clients, Tenable is proud to offer best-in-class on-demand training. In addition to all the existing onboarding and sales tools, the Tenable on-demand training courses help our partners become product experts from the comfort of their own desks. All partner training, including the complimentary on-demand training and the instructor-led training, can be accessed via the Partner Portal.

    Get started today

    Access to the On-Demand Training courses is also available to the general public. For customer and partner access to all complimentary on-demand training, please visit the Tenable Support Portal and the Tenable Partner Portal.


    CIS Updates the 20 Critical Security Controls

    The Center for Internet Security (CIS) has come forward with its most recent set of information security controls. The previous edition of the Critical Security Controls listed 20 controls for an organization to implement to protect its networks. The most recent edition (CIS Critical Security Controls v6.0) keeps the same number of controls, but replaces one control and adjusts the priority of others. The data used to formulate these controls comes from private companies and government entities across many sectors (power, defense, finance, transportation and others). Experts from various organizations combined their knowledge to create this consensus set of controls, and it is a great reference point for any organization looking to improve its information security posture.

    It is a great reference point for any organization looking to improve their information security posture

    The changes

    The CIS web site states:

    The new Controls include a new Control for “Email and Web Browser Protections,” a deleted Control on “Secure Network Engineering,” and a re-ordering to make “Controlled Use of Administrative Privileges” higher in priority.

    This makes sense, as the Secure Network Engineering control could be interpreted as encompassing several of the other controls on the list. Removing it provides more room for elaboration in other areas, such as the newly added Email and Web Browser Protections control and existing controls (Wireless Access Control, Malware Defenses, Boundary Defense, etc.).

    The top 4 controls

    A particular point of interest is the top four controls, as there has been no change in their order at all. CIS still identifies these four controls as the most important:

    • Inventory of Authorized and Unauthorized Devices
    • Inventory of Authorized and Unauthorized Software
    • Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
    • Continuous Vulnerability Assessment and Remediation

    Notably, the fourth control places emphasis on the term “Continuous,” which is now part of the standard of due care and is also emphasized in the NIST and PCI DSS frameworks, to name a few. The shift to a more continuous state of compliance is explored further in our blog “Continuous” Now Part of the Standard of Due Care.
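    As a rough illustration of what the first of these controls looks like in practice, the sketch below diffs the set of devices actually observed on the network against an authorized-device register and flags anything unknown. The two input files are hypothetical placeholders; in a real deployment the discovered set would come from continuous network discovery rather than a static file.

```python
# Minimal sketch of CSC 1 (Inventory of Authorized and Unauthorized Devices):
# compare what is actually on the network against what is supposed to be there.

def load_macs(path):
    """Read one MAC address per line, normalized to lowercase."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

authorized = load_macs("authorized_devices.txt")   # maintained asset register (hypothetical)
discovered = load_macs("discovered_devices.txt")   # output of network discovery (hypothetical)

unauthorized = discovered - authorized   # on the wire, but not in the register
missing = authorized - discovered        # in the register, but not seen on the network

print(f"{len(unauthorized)} unauthorized device(s): {sorted(unauthorized)}")
print(f"{len(missing)} registered device(s) not observed: {sorted(missing)}")
```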

    How Tenable can help

    This falls perfectly in line with Tenable’s family of products and services

    This falls perfectly in line with Tenable’s family of products and the services we provide our customers. The recent release of SecurityCenter™ 5.1 has inventory, continuous network monitoring, and configuration assessment capabilities to cover all four of these controls. To learn more, visit the SecurityCenter Continuous View page.

    Changes in priorities

    Another point of interest in the revised Controls is the lowering in priority of “Malware Defenses” from number 5 to number 8, with “Controlled Use of Administrative Privileges,” “Maintenance, Monitoring, and Analysis of Audit Logs,” and “Email and Web Browser Protections” all being moved ahead of it. This speaks to the trend in IT security of not attempting to chase down a defense for every new piece of malware that is created. Rather, assume that your organization has been compromised at some point, and prepare to identify, control, and respond to the breach. With that understanding, it follows naturally from the first four controls, which address proper inventory of devices and software and their configuration within an environment.

    Control 20 “Penetration Tests and Red Team Exercises” remains in the same position. However, the priority levels of Controls 9 through 19 have been modified from the last version of the Critical Security Controls.

    The 20 Critical Security Controls

    Here is a summary of the 20 Controls:

    • CSC 1: Inventory of Authorized and Unauthorized Devices
    • CSC 2: Inventory of Authorized and Unauthorized Software
    • CSC 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
    • CSC 4: Continuous Vulnerability Assessment and Remediation
    • CSC 5: Controlled Use of Administrative Privileges
    • CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs
    • CSC 7: Email and Web Browser Protections
    • CSC 8: Malware Defenses
    • CSC 9: Limitation and Control of Network Ports, Protocols, and Services
    • CSC 10: Data Recovery Capability
    • CSC 11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
    • CSC 12: Boundary Defense
    • CSC 13: Data Protection
    • CSC 14: Controlled Access Based on the Need to Know
    • CSC 15: Wireless Access Control
    • CSC 16: Account Monitoring and Control
    • CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps
    • CSC 18: Application Software Security
    • CSC 19: Incident Response and Management
    • CSC 20: Penetration Tests and Red Team Exercises

    For more information

    We invite you to read our whitepaper on leveraging these controls for your organization.

    Visit the CIS web site to download a copy of the 20 controls.

    “Continuous” Now Part of the Standard of Due Care

    Drifting Out of Compliance, Part 3

    This is the third installment in my Drifting Out of Compliance series, taking a closer look at organizational approaches indicative of a point-in-time compliance mentality and the challenges of shifting to a continuous compliance mentality. Although a security-first, compliance-second approach is best, many organizations still struggle to attain the baseline level of security outlined in compliance requirements.

    Thus far, we’ve looked at different approaches to compliance that are indicative of a point-in-time compliance mentality, as well as three myths, or false assumptions, that entrench organizations further in that mindset. In this discussion, I look at how organizations will increasingly be expected to adopt a continuous compliance mentality as part of the ever-rising standard of due care.

    Compliance frameworks weigh in

    Organizations will be expected to adopt a continuous compliance mentality as part of the ever-rising standard of due care

    Many of you are probably already familiar with the “business as usual” language introduced into the PCI DSS with v3.1, the idea being to incorporate compliance into everyday business processes as a matter of course. We see this sort of language in other frameworks as well, in a number of different contexts, including risk management, process improvement, continuous monitoring and continuous security improvements. It is clear to me that the concept of an “ongoing basis” is not a new one. Other frameworks have been endorsing this continuous approach for some time as well. In fact, NIST has devoted an entire publication to Information Security Continuous Monitoring. Consider the following contexts and framework references:

    • Risk Management - NIST 800-37 endorses “near real time risk management,” “ongoing information system authorization” and “continuous monitoring processes” as part of its risk management framework
    • Security Controls - NIST 800-53 addresses the need for “ongoing risk-based decisions,” “ongoing effectiveness,” “near real time information” and “continuous monitoring”
    • Card Data Security - PCI SSC encourages PCI entities to “monitor effectiveness of security controls on an ongoing basis” and “maintain their PCI DSS environment in between PCI DSS assessments”
    • Continuous Monitoring - NIST 800-137 recognizes the need for an “ongoing awareness” to be incorporated as part of an overall Information Security Continuous Monitoring program
    • Security Continuous Monitoring – The NIST Cybersecurity framework has devoted a section of its requirements to this concept
    • Continuous Reporting - The federal government’s Continuous Diagnostic and Monitoring (CDM) program was started to address the need for reporting on an “ongoing basis”
    • Critical Infrastructure Reliability – NIST 800-82 endorses the “continuous monitoring of selected security controls” and the need for “continuous security improvements”
    • Risk assessments – the CIS Top 20 speaks to the need to “continuously conduct risk assessments” and to “continuously look for vulnerabilities”

    There ... always will be a need for continual compliance

    I’m sure there are many more, but you get my point. There has always been and always will be a need for a continual, non-stop security program, and considering that compliance requirements represent a baseline level of security, there has always been and always will be a need for continual compliance as well. In this way, compliance activities will become more closely aligned with security programs. Per the PCI SSC:

    To ensure security controls continue to be properly implemented, PCI DSS should be implemented into business-as-usual (BAU) activities as part of an entity’s overall security strategy.

    The rising “baseline” calls out the need for efficiencies

    As the standard of due care increasingly incorporates the concepts of “ongoing awareness,” “continuous monitoring” and “business as usual,” compliance expectations rise as well. The challenge of attaining and sustaining even a baseline level of security, such as that laid out in the PCI DSS requirements, becomes even more daunting. When you pair the fact that 80% of organizations drift out of compliance in between annual assessments with the fact that compliance standards continue to become more demanding, it’s very easy to become overwhelmed by the thought of attaining a continuous state of compliance. However, consider a suggestion found in NIST 800-137:

    Automated processes, including the use of automated support tools (e.g., vulnerability scanning tools, network scanning devices), can make the process of continuous monitoring more cost-effective, consistent, and efficient.

    I couldn’t agree more. Sustainable compliance requires automation and newfound efficiencies. It has to. Security personnel are already “doing more with less.” An automated continuous network monitoring solution such as Tenable’s SecurityCenter Continuous View™ discovers, assesses and monitors your network environment on an ongoing basis. This provides the ongoing awareness and insights needed to conduct comprehensive risk-based assessments and to monitor the effectiveness of your organization’s security controls.
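    To illustrate the kind of automation NIST 800-137 is describing, here is a minimal sketch that compares each new set of scan findings against the baseline captured at the last passing assessment and reports anything that has drifted. The file names and record format are hypothetical; the point is the recurring compare-and-alert loop, not any particular scanner’s output.

```python
import json
import time

BASELINE = "baseline_findings.json"   # findings recorded at the last passing assessment (hypothetical)
LATEST = "latest_findings.json"       # findings from the most recent automated scan (hypothetical)
CHECK_INTERVAL = 24 * 60 * 60         # compare once a day, purely for illustration

def load_findings(path):
    """Each finding is identified by a (host, check_id) pair."""
    with open(path) as fh:
        return {(f["host"], f["check_id"]) for f in json.load(fh)}

while True:
    baseline = load_findings(BASELINE)
    latest = load_findings(LATEST)

    drift = latest - baseline        # new failures since the last assessment
    resolved = baseline - latest     # issues fixed since the last assessment

    if drift:
        print(f"Compliance drift detected: {len(drift)} new finding(s)")
        for host, check in sorted(drift):
            print(f"  {host}: {check}")
    else:
        print(f"No drift since last assessment ({len(resolved)} finding(s) resolved)")

    time.sleep(CHECK_INTERVAL)
```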

    Check back next week for the fourth and final installment of this Drifting out of Compliance series.

    Establishing Relevant Security Metrics, Part 3: What are the Top Security Metrics to Track?

    In Part 3 of Marcus Ranum’s video series on security metrics, he talks about the most important security metrics that you should track. What are the top 10 security metrics? Marcus’ answer may surprise you!

    Security metrics is part of the overall quest for meaning.

    In the next episode, Marcus will explain “How to establish security metrics.”

    Visit our Security Metrics page to learn more about creating metrics with Tenable products.

    The Security Model is Broken, Part 6: How To Fix It

    Over the past several months, I have been writing about how our security model is broken. This blog is the final in this series, and it focuses on four crucial root causes that must be addressed if we are going to fix our broken security model.

    Specifically, we need to invest more money, adhere to basic security hygiene, be cyber threat driven rather than compliance driven, and, last but not least, stop retrofitting cybersecurity and instead build it into systems and applications.

    1 - Board of directors and business executives need to allocate more money for cybersecurity

    The one major underlying root cause of why our security model is broken is the lack of resource allocation or, to be more precise, money spent on cybersecurity.

    Many indicators show that more money is being spent on cybersecurity, and that trend is expected to continue for some time. In my opinion, this spending trend is unfortunately compensation for years of underinvestment in and neglect of our cybersecurity infrastructure. Much like our nation’s bridges and roads, we delayed investing in cybersecurity until it came back to bite us, as demonstrated by all the cyber breaches we continue to experience.

    Security technologies and safeguards that could have been deployed years ago to address many of the threats being exploited today are just starting to get widespread traction. For example: strong authentication, continuous vulnerability monitoring, and pervasive encryption at rest. We must continue to spend more money to secure our cyber IT infrastructure.

    More troublesome is that technology (e.g., smart devices, cloud-based computing, big data) and the corresponding cyber threats are expanding at an exponential rate, while our cybersecurity spending continues to increase at only a linear rate.

    This means that even more money (and new security safeguards) must be spent for the foreseeable future, and this will require that directors and executives allocate a bigger percentage of the IT budget for cybersecurity.

    2 - Implement and adhere to basic cybersecurity hygiene

    Our cybersecurity infrastructure continues to be plagued by failures to implement or adhere to basic security practices and standards. Several, if not most, of the well-publicized cyber breaches of the last couple of years have been traced to an alert that went unanswered, servers that were not properly configured, a vulnerability that was never patched, or other basic lapses in security practice.

    This needs to change. Much like airline safety practices, basic cybersecurity practices and standards must be ingrained and unfailingly adhered to. For example, we must implement safeguards such as granular encryption of personal information at rest and in transit everywhere; second-factor authentication, including for system administrators; better privileged-access controls; and continuous vulnerability and event monitoring.
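    As one concrete illustration of the first item, here is a minimal sketch of granular, field-level encryption at rest using the Python cryptography package. Key generation is shown inline only to keep the example self-contained; in practice the key would live in a key management service or HSM, and key rotation and access control are where the real work is.

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key management service or HSM, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt only the sensitive field, not the whole record, so the rest stays usable.
record = {"customer_id": 1042, "email": "jane@example.com"}
record["email"] = cipher.encrypt(record["email"].encode()).decode()

# Decrypt only when an authorized process actually needs the plaintext.
plaintext_email = cipher.decrypt(record["email"].encode()).decode()
print(plaintext_email)
```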

    Basic cybersecurity diligence is a must! There is no room or reason for lapses anymore.

    3 - Organizations must be cyber threat driven not compliance driven

    Many organizations still treat compliance as the major driver for their security practices and safeguards. Often, organizations will do the minimum necessary to meet regulatory or other industry compliance requirements. For example, several of the financial institutions breached in the last couple of years were PCI compliant at the time, yet they were still breached.

    Regulations like HIPAA and industry-based security compliance standards like PCI are a baseline that addresses known or prevalent past cyber threats. Worse yet, lawyers often interpret regulations and opine on the security controls necessary to meet regulatory requirements, which is the equivalent of CISOs practicing law. Nevertheless, many organizations still fool themselves into thinking that if they are compliant, then they are secure enough, or that it is good enough.

    Regulations and standards lag behind today’s cyber breaches because regulations and standards are retrospective, not prospective. Regulations will never stay abreast of, let alone ahead of, new technologies and threats. For example, NIST recently issued draft mobile security standards for industry comment, yet millions of mobile devices are already deployed. Finally, lawyers should have no role in determining the cyber controls needed to meet regulatory compliance; that should be left to the subject matter experts.

    Organizations must be cyber threat driven, not compliance driven.

    4 - Build in and stop retro-fitting cybersecurity

    As I discussed in my Internet of Things blog, smart devices are being sold and deployed without the necessary security, such as patching capabilities, built into them. As a result, discovered vulnerabilities have been difficult to patch, or the patching has been delayed. This is just one example of deploying technology and products without building in the necessary cybersecurity safeguards.

    Too often, organizations are driven by competitive, internal cost, and schedule-driven project pressures that result in cutting corners or delaying the building of cybersecurity safeguards into the technology or the application.

    Much like building airplanes, where safety must be built in and tested prior to being used commercially, new technologies and applications must have the necessary cybersecurity built in and tested prior to being deployed.

    Build, ship, and fix later is no longer an acceptable business paradigm for cyber systems. Cyber threats are just too great and the result of breaches too severe to make this an acceptable risk.

    Conclusion

    Fixing our broken security model is not really difficult. We know what needs to be done and we know how to do it.

    Fixing our broken security model is not really difficult. We know what needs to be done and we know how to do it.

    It will, however, require more resources—money for people, solutions, etc. We must change our mindset: cybersecurity needs to be threat driven, not compliance driven. We must ensure that baseline security practices are non-negotiable. And we must start deploying new technologies and applications securely from day one.

    I hope that 2016 will be the year that we make strides to fix our broken security model.

    Happy New Year!

    Establishing Relevant Security Metrics, Part 4: How to Establish Security Metrics

    In this episode of Marcus’ video series on security metrics, he provides advice on starting a metrics program. Should you go bottom up with data, or top down from processes? Find out which approach Marcus advocates and why.

    Marcus also talks about SecurityCenter™, the hundreds of statistics it collects, and how to effectively use such a rich metrics repository.
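    To make “effectively using a rich metrics repository” a little more concrete, here is a minimal sketch of one derived metric, mean time to remediate critical findings, computed from a handful of hypothetical finding records. The field names are illustrative only and do not reflect a SecurityCenter schema.

```python
from datetime import date

# Hypothetical export: each record is a remediated critical finding.
findings = [
    {"host": "web01", "first_seen": date(2015, 11, 2), "fixed": date(2015, 11, 20)},
    {"host": "db03",  "first_seen": date(2015, 10, 15), "fixed": date(2015, 12, 1)},
    {"host": "app07", "first_seen": date(2015, 12, 4), "fixed": date(2015, 12, 9)},
]

days_to_fix = [(f["fixed"] - f["first_seen"]).days for f in findings]
mttr = sum(days_to_fix) / len(days_to_fix)
print(f"Mean time to remediate critical findings: {mttr:.1f} days")
```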

    Analyze how your practice works

    In his final installment, Marcus will leave you with his thoughts on “Keeping metrics relevant.”

    Visit our Security Metrics page to learn more about creating metrics with Tenable products.
