
Independence Day


In cybersecurity as in national security, remembrance and eternal vigilance are essential to maintaining our freedom.

Our nation has seen many changes since its founding 240 years ago, and it continues to function well even though our security and economy now depend largely on technology that did not exist little more than a generation ago. But the keys to protecting our liberty and promoting the general welfare remain the same in cyberspace as in the real world: remembrance and eternal vigilance.

Remembrance

Those who cannot remember the past are condemned to repeat it

During our Independence Day celebrations we remember the challenges our nation has faced and the lessons we have learned from them. George Santayana wrote, “Those who cannot remember the past are condemned to repeat it.” The threats we have faced in the real world are varied, but we have learned from them and meet them with a strong defense and national resolve. Unfortunately, this is a lesson we still are learning in cyberspace. Despite repeated high-profile breaches of government information systems, our cybersecurity remains incomplete.

A recent evaluation by the Government Accountability Office found that 18 Executive Branch agencies with high-impact systems—those holding sensitive information whose loss could cause catastrophic harm—rated cyberattacks from nation-states as the most dangerous and frequently occurring threat these systems face. Yet although there are regulatory requirements and technical guidance from the National Institute of Standards and Technology, a review of four agencies found cybersecurity gaps in their high-impact systems.

“Until the selected agencies address weaknesses in access and other controls, including fully implementing elements of their information security programs, the sensitive data maintained on selected systems will be at increased risk,” the report concluded.

Vigilance

Agencies should be constantly monitoring the status of and activity on their networks and attached systems

One of the basic principles for fully securing our information systems was recognized nearly 200 years ago: “Eternal vigilance is the price of liberty.” Thomas Jefferson and the others to whom this statement has been attributed were not talking about cybersecurity, but it applies just the same. The Homeland Security Department has made continuous monitoring the standard for federal cybersecurity. This means that instead of static, periodic assessments of IT systems every month, year, or three years, agencies should be constantly monitoring the status of and activity on their networks and attached systems. This level of visibility allows agencies to not only respond quickly to incidents, but to be proactive and eliminate or mitigate threats before they become incidents.

This standard is supported by the Continuous Diagnostics and Mitigation (CDM) program and by a Blanket Purchase Agreement through the General Services Administration, which make off-the-shelf technology available to give real-time visibility into government networks and systems. Products such as Tenable SecurityCenter Continuous View™ provide this vigilance, along with the intelligence needed to protect our most valuable assets against the most serious threats.

Security and liberty

In recent years attention has been paid to an apparent conflict between security and liberty. Yet while it is true that liberty can be threatened by abuses in the name of security, the two are not mutually exclusive. In fact, there is little liberty without security. As with so much else, this applies in cyberspace as well as the real world.

There is little liberty without security

Without adequate security, we cannot have the freedom to access and use information online with confidence, we cannot rely on the privacy of information that we use in online transactions, and we cannot depend on the delivery of critical services by our government. We achieve that security by remembering and learning from the lessons of the past, and by exercising eternal vigilance.


Four Reasons the EU General Data Protection Regulation is Important to Security


Brexit has catapulted the European Union (EU) into the news recently. However, from a security perspective, I think the EU General Data Protection Regulation (GDPR) is more important in terms of potential actions that need to be taken by organizations. The European Parliament passed the GDPR in April of this year, and it will become enforceable in May 2018. Once in force, the regulation will require every organization that offers products or services to EU citizens, as well as those handling data of EU citizens, to adhere to a strict set of data privacy and security measures.

The impact of GDPR measures is broader than information security

The impact of these measures is broader than information security, and it may well require significant changes to business processes and systems. If the GDPR applies to your organization, it is likely that your business leaders, privacy experts, and legal professionals are already discussing compliance measures. Security leaders should be included in these discussions to ensure that security is adequately prepared and funded to address the changes to people, process, and technology needed to meet the requirements of the regulation.

Reasons the GDPR is important to security professionals

  1. Penalties for violations are severe: Under Article 83(5) of the Regulation, serious infringements can result in fines of up to €20M or 4% of the offending company’s global annual revenue, whichever is higher.
  2. The “personal data” definition has expanded: Personal data means any information relating to an identified or identifiable natural person (“data subject”). An identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. This definition of personal data is important to information security professionals because it implicates data that may not seem, at first glance, to qualify as personal. IP addresses, application user IDs, Global Positioning System (GPS) data, cookies, media access control (MAC) addresses, unique mobile device identifiers (UDID), and International Mobile Equipment IDs (IMEI) are some examples.
  3. “Technical and organisational measures” require adequate general information security controls: The GDPR uses the phrase “technical and organisational measures” 21 times. In essence, the GDPR is asking controllers to employ information security frameworks, which enable professionals to create consistent, repeatable processes and implement controls that are generally accepted by the information security community.
  4. The jurisdictional reach includes organizations outside of the EU: The GDPR’s jurisdictional reach (called the “territorial scope”) is broad and includes most organizations. Organizations based outside of the EU that offer goods or services to EU data subjects are covered by the regulation.

If you don’t suffer from triskaidekaphobia, I invite you to join Scott Giordano, a data privacy expert, and me for an upcoming webcast on July 27th where we will discuss Thirteen Essential Steps for Meeting the Security Challenges of the New EU General Data Protection Regulation. Scott is an attorney with nearly 20 years of legal, technology and risk management consulting experience. He holds Certified Information Systems Security Professional (CISSP) and Certified Information Privacy Professional (CIPP) certifications. He is an expert on the intersection of law and technology as it applies to e-discovery, information governance, compliance and risk management issues.

Influence of the NIST Cybersecurity Framework on Hong Kong


Security is a common language across the globe; every person, organization, and country is concerned about security. From personal data privacy and the business impact of cyber threats to the protection of core infrastructure, our objective in security is very simple: we all want to prevent attackers from doing something catastrophic. But after so many years and so many generations of technology, is there a simple solution for us all?

NIST

I am fortunate to have the privilege of visiting different organizations to exchange viewpoints on information security. Through those discussions, I have discovered that the Cybersecurity Framework (CSF) from the U.S. National Institute of Standards and Technology (NIST) is influencing my hometown—Hong Kong.

HKMA

Hong Kong, one of the premier global financial centres, has a solid tradition of protecting the finance industry and managing risk. The Hong Kong Monetary Authority (HKMA), governed by the Exchange Fund Ordinance and the Banking Ordinance of Hong Kong, is responsible for maintaining monetary and banking stability. In response to the latest cyber threats, HKMA published a circular in September 2015 highlighting the growing importance of Cybersecurity Risk Management. This circular drew a great deal of attention from the banking industry in Hong Kong, as all of the banks were looking for solutions to meet the requirements listed in it. When I visit banks and discuss this circular, some of them tell me that their security consultants suggest implementing the NIST Cybersecurity Framework in response to it. More recently, in May 2016, HKMA launched the Cybersecurity Fortification Initiative (CFI), which includes the Cyber Resilience Assessment Framework (C-RAF) and Intelligence-led Cyber Attack Simulation Testing (iCAST).

Security is a common language across the globe

Similarities between initiatives

I found an interesting similarity between the C-RAF and the CSF. Step 2 of the Cyber Resilience Assessment Framework, the maturity assessment, and the NIST Cybersecurity Framework both include the same five functions: identify, protect, detect, respond and recover. HKMA has not yet released details of C-RAF, but I believe the detailed version will share similarities with the NIST CSF. Why is that?

The NIST Cybersecurity Framework provides a flexible, risk-based implementation approach. It is not a one-size-fits-all framework; instead, it enables different organizations to select the cybersecurity risk management process that fits their current situation. Why is that important? Because even in the same industry, each organization may have a different security posture. And the best way to improve your security posture is to understand yourself better, to set an achievable goal, and to review and refine your objectives each year.

Even in the same industry, each organization may have a different security posture

I believe HKMA will provide specific security controls and recommendations based on the uniqueness of the Hong Kong financial services industry (FSI) market. In the meantime, take a look at the NIST CSF now to better prepare for the HKMA recommendations.

SFC

And last but not least, the Securities and Futures Commission (SFC) of Hong Kong issued a Circular to all Licensed Corporations in Hong Kong on 23 March 2016. The circular enumerated the first Suggested Cybersecurity Controls, including:

  1. Establish a strong governance framework to supervise cybersecurity management

After reading this blog, which framework comes to mind first? The NIST CSF is a tried and true framework gaining popularity worldwide. It’s worth serious consideration at your organization.

Tenable solutions

Tenable can help you implement a framework with our unique SecurityCenter Continuous View™ (SecurityCenter CV™) solution. SecurityCenter CV first collects data using several methods; for example, it can actively scan the network while also capturing network packets and gathering data from hosts. All of this data is pulled together, and based on what you have and what is happening, SecurityCenter CV advises you on what to address and what you should be concerned about.

The SecurityCenter CV also provides dashboards for many frameworks. For example: 

The HKMA Cyber Security Risk Management dashboard maps to the cyber security controls prescribed by HKMA. It provides an initial view of your current security posture and what you have in your network, and it helps you identify significant risks so you can prepare better for different audit requirements.

HKMA Dashboard

 

Many customers go on to build their own security framework based on the CSF. SecurityCenter CV supports over 90% of the NIST CSF technical controls and includes 9 Assurance Report Cards (ARCs) and 20 dashboards built specifically to illustrate CSF conformance.

CSF Dashboard

 

With ARCs, you can report on your security posture in terms that match your business context.

CSF ARC

 

You can learn more about NIST CSF support in our solution story.

More information

Check out a few more blogs about the NIST Cybersecurity Framework:

Also read about what ARCs can provide:

Good Security Metrics Build Relationships and Trust

Using Security Metrics to Drive Action

Tenable recently sponsored the publication of an ebook, Using Security Metrics to Drive Action. This ebook is a compilation of thoughtful essays from 33 CISOs and other experts, who all share their strategies for communicating security program effectiveness to business executives and the board. In this article, excerpted from the ebook, Nikk Gilbert, Director of Global Information Protection and Assurance for ConocoPhillips, explains how metrics can strengthen team relationships.

For Nikk Gilbert, the secret sauce to success as a chief information security officer (CISO) is forging relationships. Metrics, he says, can be a great way to solidify those relationships.

Rather than advising readers to select a group of generalized metrics to monitor, Gilbert prefers to tell a story. Metrics, after all, are designed to tell the story of how well you’re succeeding at digitally securing your enterprise.

Metrics are designed to tell the story of how well you’re succeeding at digitally securing your enterprise

After starting work at a previous company, Gilbert avoided making aggressive changes to the way security was handled. Instead, he took co-workers out to lunch, one at a time. Some panicked—what does the CISO want? Did I do something wrong? It wasn’t about that, Gilbert says. “Quite frankly, I sat there and talked to them about everything but security,” he states. “It was creating the relationships.”

After establishing himself as an approachable leader, it was easier to talk about changes that needed to be made to protect customer data, intellectual property, and other proprietary information from malicious outsiders. During this process it was important to avoid drowning people in metrics.

“There are so many metrics out there that you can use to show different things,” he says. “What I’m trying to do from a strategic point of view is find those metrics that are really going to resonate with the business.”

From a strategic point of view, find those metrics that are really going to resonate with the business

Right around the same time, Gilbert’s team created a real-time online dashboard to monitor internal networking metrics. He used it to show key team members the value of monitoring several operations-level metrics, including:

  • Web proxies. This software allows authorized employees to surf authorized websites while blocking risky sites. “It’s a tool that helps us protect users from themselves,” Gilbert states.
  • Admin account accesses. Administrative accounts are extremely sensitive. “We have a real-time dashboard that watches access to admin accounts,” he says. If someone tries over and over to access an account unsuccessfully, the account gets flagged and additional actions can be taken as appropriate (a minimal sketch of this kind of flagging logic follows the list).
  • Data in/data out. The dashboard has a plug-in that reveals how much data is moving in and out of the network and through which ports—crucial information that can reveal whether, say, a denial-of-service attack is beginning.
  • Antivirus activity. If a computer is infected, the dashboard throws up an antivirus alert.
  • Firewall alerts. The dashboard monitors the network firewall’s sensors, which can detect a variety of network-based indicators.
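
As a rough illustration of the kind of threshold flagging Gilbert describes for admin accounts, here is a minimal sketch in Python. It is not from the ebook; the log fields and the three-failure threshold are assumptions for the example, and in practice this logic would live in your SIEM or dashboard feed.

from collections import Counter

# Hypothetical authentication events; real data would come from your log feed.
events = [
    {"account": "admin-db01", "result": "failure"},
    {"account": "admin-db01", "result": "failure"},
    {"account": "admin-db01", "result": "failure"},
    {"account": "admin-web02", "result": "success"},
]

FAILURE_THRESHOLD = 3  # assumed threshold; tune to your environment

def flag_admin_accounts(events, threshold=FAILURE_THRESHOLD):
    """Return admin accounts whose failed-login count meets the threshold."""
    failures = Counter(e["account"] for e in events if e["result"] == "failure")
    return [account for account, count in failures.items() if count >= threshold]

print(flag_admin_accounts(events))  # flags admin-db01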

Individually, Gilbert acknowledges, there’s nothing spectacular about these metrics, but holistically, they demonstrate how it’s possible to use resources to respond to the metrics and stop an attack from grinding the business to a halt. They also help reveal which resources are still lacking. “That’s when we became invaluable to the executive team,” he notes.

More information

About the author

Nikk Gilbert has 18 years of executive-level experience in the government and private sectors and is a respected information security leader. Currently the Director of Global Information Protection and Assurance for ConocoPhillips, he’s a Distinguished Fellow of the Ponemon Institute, a recipient of the US Navy Meritorious Civilian Service Medal, and a frequent speaker at technology events throughout the world.


Cyber Hygiene in Higher Education: Cybersecurity Projects during Summer Break


School is out for summer. This is a good time for schools to focus on cybersecurity projects for the coming year. Threat hunting, vulnerability management and the core value of continuous visibility are essential goals that school systems should commit to over the long, hot summer.

Finding time to improve is never easy. Information security professionals within the education sector in particular seldom have the time to reexamine their practices or to implement new procedures during the school year. Compounding these constraints is the complexity of IT systems in colleges and universities.

Educational campuses are unique in the breadth of their IT missions

Educational campuses are unique in the breadth of their IT missions. Not only must they serve large, mobile student populations, they also support the scientific and research needs of an academic staff while maintaining sensitive personal, academic, financial and medical records.

Cyber self-improvement is vital for schools, now more than ever. In April, a Washington State school district inadvertently released personal information after an outside party “spoofed” the superintendent’s email address. The email sought employee names, addresses, salary information and Social Security numbers.

Recently, certain Colorado schools experienced a security breach involving a proprietary platform called Infinite Campus, which stores personal and academic information; the breach may have exposed the personal information of over 2,000 students. Although several districts use the Infinite Campus platform, the compromised district had expanded the data it collected beyond grades, attendance and schedules to include highly confidential personal information, which may be why it was targeted.

Most recently in Maine, a data breach widened as more employees complained of ID theft. In late March, district payroll employees received a phishing email that successfully tricked users into responding. The email asked for employee W-2 information. When several employees attempted to file tax returns this year, they discovered that fraudulent returns had already been filed using their information.

Last summer, universities seemed to be more in the crosshairs as a flurry of cybersecurity incidents illustrated the growing threat facing higher education institutions.

  • The entire engineering school of a prominent Pennsylvania university had to be taken offline for an extensive investigation and clean-up of its network and systems.
  • Virginia universities were the target of a cyberattack against two officials whose work was connected with China.
  • Even one of our country’s oldest universities suffered a hack that compromised user credentials in eight schools.

This is the best time to beef up security measures, performing asset discovery to create a baseline inventory of assets

Now is the time for information security professionals to start summer security activities and to plan back-to-school projects that tackle cybersecurity challenges during the coming school year. Specifically, this is the best time to beef up security measures: perform asset discovery to create a baseline inventory of assets, prioritize those assets, and implement security best practices.

Best practices include using next-generation firewalls and security for web and email services, as well as system monitoring and advanced threat detection.

Summer threat hunting and back to school vulnerability management

Threat hunting and vulnerability management are core capabilities for achieving continuous monitoring of your network and detecting weaknesses before they are exploited. This can help you detect and mitigate outsider threats and attacks, as well as insider threats—both malicious and user error—that can compromise your systems and data.

Colleges and universities can use the relative quiet of summer break to establish a foundation of effective threat hunting and vulnerability management by implementing solutions such as Tenable SecurityCenter Continuous View™ (SecurityCenter CV™), which provides a platform to continuously monitor networks for critical vulnerabilities and threats.

SecurityCenter CV gives organizations the ability to monitor their networks 24x7 for new vulnerabilities, devices and incidents. The solution can alert administrators to threats and incidents, and produce reports for IT, security and administrative personnel.

Several large higher education institutions use Tenable SecurityCenter CV to protect their networks. For example, the Auckland University of Technology, the second largest university in New Zealand, selected Tenable SecurityCenter CV for greater visibility, multiple scanning mechanisms, and enhanced reporting.

The relative lull in educational activities over the summer provides an opportunity that should not be wasted

By employing sound cybersecurity practices such as threat hunting and vulnerability management through proven technologies such as Tenable SecurityCenter CV, schools, colleges and universities can take large strides toward improving their cybersecurity posture for the upcoming school year and beyond. The relative lull in educational activities over the summer provides an opportunity that should not be wasted.

Security Metrics are About Illustrating Criticality vs Risk

Using Security Metrics to Drive Action

Tenable recently sponsored the publication of an ebook, Using Security Metrics to Drive Action. This ebook is a compilation of thoughtful essays from 33 CISOs and other experts, who all share their strategies for communicating security program effectiveness to business executives and the board. In this article, excerpted from the ebook, Genady Vishnevetsky, CISO for Stewart Title Guaranty Company, talks about how the criticality of assets impacts risk factors.

“Your chief executive officer (CEO) isn’t interested in how many vulnerabilities you have,” says Genady Vishnevetsky, chief information security officer of Stewart Title Guaranty Company. That’s not to say that the number of vulnerabilities isn’t important, just that when you’re communicating the strength of the corporate security program to your CEO and other members of the C-suite, metrics like the number of vulnerabilities won’t provide useful information.

Your chief executive officer (CEO) isn’t interested in how many vulnerabilities you have

“The reality is, your program has to be risk driven,” says Vishnevetsky. “The same vulnerability can have different impacts on the asset, based on many factors.” Thus, the priority you give to addressing vulnerabilities has to be directly related to the risk the asset poses to the business. He explains with an example: assign each asset a level of criticality. If security is breached on a highly critical asset, even one with relatively few vulnerabilities, the breach can cause substantial loss of revenue and reputation, and can even bring the company down. Alternatively, you can have assets with hundreds of vulnerabilities but no associated critical data. The number of vulnerabilities may appear high, but because those assets are lower on the criticality scale, the risk is lower as well.

Vishnevetsky says the most effective way to determine which metrics are important is to use a computational method that factors in the value of an asset or set of assets to the business, the asset’s physical location, network segmentation, any additional compensating controls and the types of vulnerabilities that exist for those assets. That allows organizations to build a solid picture of the criticality of those assets. “These compile into metrics that convert these vulnerabilities or threats into a risk factor,” says Vishnevetsky. “So, this particular asset has a risk factor: assign it a number from 1 to 5, 1 to 10, or 1 to 100—it doesn’t really matter. It’s all comparative. You show your assets according to value as opposed to looking just at the number of vulnerabilities.”
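
Vishnevetsky does not spell out a formula, but a toy version of this kind of weighting might look like the following Python sketch. The factor names, weights and scale are illustrative assumptions, not his method; the point is only that the output is a single comparative score per asset.

def risk_factor(asset_value, exposure, compensating_controls, vuln_severity, scale=10):
    """Combine criticality inputs (each normalized to 0..1) into a 1..scale score."""
    raw = (0.4 * asset_value               # value of the asset to the business
           + 0.2 * exposure                # location/segmentation (internet-facing vs. isolated)
           + 0.3 * vuln_severity           # normalized aggregate vulnerability severity
           - 0.1 * compensating_controls)  # credit for additional compensating controls
    raw = min(max(raw, 0.0), 1.0)
    return max(1, round(raw * scale))

# A critical, internet-facing asset with a few severe vulnerabilities
print(risk_factor(asset_value=0.9, exposure=0.8, compensating_controls=0.2, vuln_severity=0.5))
# A low-value, well-segmented asset with many minor vulnerabilities
print(risk_factor(asset_value=0.2, exposure=0.1, compensating_controls=0.6, vuln_severity=0.7))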

You can select at most five metrics that are both qualitative and quantitative, and each individual will pick up something he or she understands

When communicating the strength of your security program to the C-suite, Vishnevetsky says that it’s important not to overwhelm them. “If I’m presenting to the executive team, it depends who’s on that team. Different executives will better understand metrics that are dear to their heart. You cannot tailor your metrics to every executive,” he says, “but you can select at most five metrics that are both qualitative and quantitative, and each individual will pick up something he or she understands.”

For example, Vishnevetsky says that the CEO will understand maturity level: our security program has a maturity of three out of five in this domain. In another domain, it has a maturity of one out of five, and another domain has a maturity level of four out of five. “That’s what they understand,” he explains. “It needs to be visual. It needs to be concise. It needs to be simple. Remember, they are not technologists who understand what the vulnerability is. They understand the risk to the business, and they understand the capability of your security program as far as how well it defends the business, how it helps to protect the business. That’s what they understand.”

They are not technologists who understand what the vulnerability is. They understand the risk to the business.

“The CEO probably needs to feel comfortable that you ‘get it,’ that you know what you’re doing. You can present him or her one or two simple metrics, usually a maturity and capability level of your security program,” he adds. “One or two and no more than that. That’s about all the metrics a CEO needs to know. Anything that deals with the number of viruses, number of vulnerabilities, number of penetration tests, number of scans—any massive numbers are going to blow their minds.”

More information

About the author

Genady Vishnevetsky is the CISO for the Stewart Title Guaranty Company. An established leader with experience in building successful security programs to protect enterprises against emerging threats, Vishnevetsky leads the security, governance, and compliance programs for a major real estate financial services company. In his past role as the vice president of security and information security officer at Paymetric, Vishnevetsky built the cybersecurity, governance, and compliance programs for the United States’ fifth largest processor of card-not-present electronic payments.


Threat Hunting with YARA and Nessus


In Nessus 6.7, file system scanning functionality was introduced that could look for specific hashes of files on disk. This was in addition to the running process detection that has been supported for quite some time. Now, as part of the Nessus 6.8 release, we’ve introduced YARA to our Windows malware file scanning subsystem. This provides an alternate method of defining search criteria for files based on textual or binary patterns.

What is YARA?

YARA is an open source tool, originally developed by Victor Alvarez, that helps malware researchers identify and classify malware. YARA works by ingesting “rules” and applying the logic in those rules to identify malicious files or processes.

Writing a rule

For the purpose of this blog, we will write a couple of very simple rules. However, the YARA rule syntax is quite rich (consult the Writing YARA Rules guide). Nessus supports all of the YARA 3.4 built-in keywords including those defined in the PE and ELF sub-modules.

I wrote a simple IRC bot to write rules against. To write our first rule, first examine some of the strings located in the IRC bot binary (output shortened for readability):

IRC bot

As you can see, the binary contains some fairly unique strings. We’ll write our first rule just using those strings:

Writing rules

The above rule will only be “triggered” if all five of the unique strings are located in the file.
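
Because the rule itself appears only in the screenshot above, here is a hedged sketch of what a five-string rule of this shape might look like; the string values below are placeholders rather than the actual strings from the IRC bot binary. The snippet also shows how you might test such a rule locally with the yara-python bindings (pip install yara-python) before handing it to Nessus.

import yara

# Placeholder strings; substitute the ones you actually observe in the binary.
RULE_SOURCE = r'''
rule Tenablebot_sketch
{
    strings:
        $s1 = "JOIN #"          // IRC channel join command
        $s2 = "PRIVMSG"         // IRC message verb
        $s3 = "tenable_bot"     // placeholder bot nickname
        $s4 = "!shell"          // placeholder bot command keyword
        $s5 = "USER backdoor"   // placeholder registration string
    condition:
        all of them             // fire only when all five strings are present
}
'''

rules = yara.compile(source=RULE_SOURCE)
matches = rules.match("ircbot_v1.exe")  # path to the sample you want to test
print([m.rule for m in matches])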

Next, we will use the rules file with Nessus.

Configuring Nessus

After selecting your scan target and naming the scan, the first step is to configure the Windows credentials. The Nessus malware file system scanner runs over WMI so we must have this configured. This is how it looks in my scanner:

Windows credentials

Next, we’ll enable the YARA plugin. To do that, you must enable plugins 59275 (Malicious Process Detection) and 91990 (Malicious File Detection Using YARA). You can easily find the plugins by using the Advanced Search feature and setting Malware is equal to true. The plugins will be located in the Windows plugin family.

Enable the plugin

Finally, we will upload our YARA rule and select the directories to scan. We can do this by going to the Malware settings in the Assessment menu. If the Scan file system setting is enabled, you can add a YARA rules file by clicking the Add File link. In the image below, I’ve uploaded the newly created rule in the file tenable_bot_rules.yar. I’ve selected Scan %ProgramFiles(x86) as the directory to run the scan against.

Upload rules

With those steps completed, we can start the scan.

Getting results

Four minutes later, the scan finishes. We can see that something was found on the target:

Runtime results

Drilling down, we can see that ircbot_v1.exe matched our rule Tenablebot in C:\Program Files (x86)\dontlookhere\. The offending binary is even attached to the scan.

Results #1

Using more rules

Nessus only accepts one rule file per scan, so all rules must be included in a single file. Rules should be listed one after another.

For example, let’s say we heard a rumor that our example IRC bot was being packed with UPX to defeat static analysis. We can add a new rule to the rules file that checks for signs of UPX. Specifically, this rule will be triggered if the first two PE section names are “UPX0” and “UPX1”.

Add UPX rule
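
For readers who cannot see the screenshot, a rule with that condition might look like the sketch below, again compiled through yara-python. This assumes your YARA build includes the PE module; the rule and file names are illustrative.

import yara

UPX_RULE = r'''
import "pe"

rule Probable_UPX_Packed
{
    condition:
        // Triggered when the first two PE section names are UPX0 and UPX1
        pe.number_of_sections >= 2 and
        pe.sections[0].name == "UPX0" and
        pe.sections[1].name == "UPX1"
}
'''

rules = yara.compile(source=UPX_RULE)
print([m.rule for m in rules.match("ircbot_v1_packed.exe")])  # hypothetical packed sample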

If we upload this updated YARA rule file and rescan the target, we find a UPX packed binary!

Results #2

Additional resources

Writing your own rules can be a difficult task. Luckily, there are a number of resources available to help:

Transforming Security: The Thousand Mile Journey Begins with a Single Step


Information security professionals are all on the same journey: to protect customer and employee data, safeguard our company’s intellectual property and trade secrets, and strengthen our company’s brand and reputation. It’s a long, difficult journey, filled with many twists and turns. But it’s our passion for security that keeps us going.

Waves of change surround us. It’s human nature to evolve; in business we strive to adapt quickly for improvements that can yield better results. But with these changes come new challenges, requiring new approaches and new solutions. This includes security.

To understand the impact of these changes, we need to understand our history. Security has always been an afterthought. We layer security controls around existing IT infrastructure. Therefore, if we don’t understand the future of IT, we won’t understand the future of security. Let’s review a few of these IT trends and changes that will continue to impact our industry.

IT infrastructure migrates to the cloud

The most significant shift in the security market is the move to the cloud

The most significant shift is the move to the cloud. I’m not just talking about the use of cloud application or cloud services, but the outright migration of IT infrastructure from corporate data centers to the cloud. This trend will affect security—drastically.

  • Traditional network and infrastructure controls will no longer be an afterthought in designing and deploying systems. They will be embedded into the cloud infrastructure.
  • System hardening and patching will be accomplished simply with a new image that is created and destroyed as needed.
  • The perimeter is now the cloud—and the need for perimeter controls will vanish.

Last year GE announced they would move all but four data centers to the cloud. That was 30 data centers and 9,000 workloads around the world. Why? A 52% reduction in total cost of ownership. In Australia, companies are blazing ahead with cloud deployments to offer new services, in both the public and private sectors. With the rapid growth of cloud services in Australia, AWS is looking to open a second regional zone in Melbourne just to keep up with the demand.

But that’s not the only trend impacting security.

Applications are the future

The future of security is the application. It’s the critical component of digital business and the digital economy, storing your most sensitive data and intellectual property. Think about it… Every day we use applications on our smart devices to play games, check email, talk to friends, pay bills and book travel by communicating with other cloud based applications.

The future of security is the application

As a society, we are already in the future, but security is trailing behind. We are focused on securing the infrastructure that will soon be outsourced, and ignoring the application. And it’s only getting harder, as new application technologies continually emerge. The DevOps and container movements are having a dramatic impact on how we secure applications, and few are paying attention.

The traditional endpoint is dead

The endpoint as we currently know it is dead

BYOD (bring your own device) policies are moving beyond the mobile device to all end-user devices. Once all of your IT infrastructure is in the cloud, why would you continue to issue desktops or laptops to employees? You won’t. You may provide a stipend for employees to purchase their device of choice or simply require them to use their own. In either case, the endpoint as we currently know it is dead, and so is the need to deploy and manage endpoint security controls, including:

  • Anti-virus, anti-malware and other preventive controls
  • File integrity, event logging and other monitoring controls
  • Forensics, incident investigation and other response controls

These endpoint security controls will either be provided as part of the base operating system, embedded in the hosts by the cloud providers, or will no longer provide value as stand-alone security controls.

Layered security creates gaps

Defense in depth leaves gaps in our security defenses

It’s obvious our approach to security is outdated. Defense in depth has created a culture of deploying isolated point solutions that leave gaps in our security defenses. It’s these gaps that allow attackers to hide from detection, thus maximizing potential exposure and loss to the organization. Not only is the approach outdated; the technologies we’re deploying today won’t address the needs of the future. In our future world, these gaps will get bigger before they get smaller. We need a new approach to resolve these gaps today and in the future.

Transforming security

It’s time to transform our approach to security. We must look at our security defenses as an integrated, holistic set of capabilities, working together to continuously improve our security defenses. We need to focus on capabilities, not tools.

Car

Bridging the gaps

On every journey, we’ll encounter gaps along the route. We need to bridge these gaps to continue on.

Before we begin our new security journey, let’s define the elements needed to build the bridge.

Security bridge

A bridge requires a strong foundation. Use the Visibility domains of Discover and Assess to build the foundation of the bridge; Discover to inventory your infrastructure and to eliminate blind spots; Assess to evaluate the security state of each asset and address any weaknesses. A strong foundation provides a resilient security roadmap.

With a foundation in place, you need spans to support the road. Use the Context domains of Monitor and Analyze to span the bridge. Monitoring the activities of your assets creates the long spans across the gaps, while analysis supports the future road.

With the supports in place, it’s time for the road, enabling action. Use the Action domains of Respond and Protect to build your road. Rapid response minimizes exposure and loss, while protection allows for future automation.

Map

The journey

The journey to comprehensive security is fraught with gaps and delays. Take each leg of the journey one step at a time, being careful to check back with your roadmap plan for guidance.

  1. Inventory all your assets – You can’t protect what you aren’t aware of. This is the first step on your journey, and the one step that is commonly skipped.
  2. Continuously assess device vulnerabilities – Regardless of your route, device vulnerabilities will always be landmines along the way.
  3. Assess application vulnerabilities – On your journey to the future, your main focus will shift to application vulnerabilities rather than device vulnerabilities. Watch for new landmarks such as containers and microservices.
  4. Audit security configurations – Misconfigurations in device settings, cloud applications and cloud services can veer you off course. These are the core visibility areas that clear the way for your journey.
  5. Monitor and analyze all logs – Your map details expand as you drive into the future. Check network and device logs, but also be sure to monitor application, cloud service and cloud infrastructure logs.
  6. Monitor and analyze all user accounts – There are lots of other vehicles on the road with you. Drive defensively; check user controls and their access to all applications.
  7. Monitor and analyze all network traffic – As you leave the confines of your perimeter, you need to constantly check the traffic and behavior in the cloud.
  8. Respond to all incidents – Incidents are now prioritized, based on the intelligence you have gathered along the way. Stop and respond to critical incidents that impact your progress.
  9. Automate remediation – The final step requires trust in the data and analytics you have gathered. Just as it takes trust in technology to ride in a self-driving car, it will take time on your journey into the future of security to build trust in cloud deployments.

The journey from defense in depth security to comprehensive security requires planning, a detailed map, and accurate directions. The route may be long, but your end goal is readily achieved if you take it one step at a time.

Reaching your destination

The security journey may start with a small step, but it is a challenging adventure through threats, attacks, and vulnerabilities. Plan your trip wisely and map out your priorities carefully to move toward your destination of comprehensive security for a brighter, more resilient future.

This blog is based on my talk at the RSA Conference 2016 in Singapore. See the complete presentation on the RSA website.


Mr. Robot Season 2 Unmasked


In the season 2 premiere of Mr. Robot, our protagonist, Elliot, struggles with an internal battle involving the memory of his father and the ramifications of the last hack in season 1. In this episode we see several challenges that all information security professionals face on a daily basis, from the hacking tools our adversaries use to the internal struggle we often experience assessing technology-based risks at our organizations.

Tracking actions on the network

In one of the early scenes, Gideon talks to Elliot about how the FBI thinks Gideon is behind the hack or is somehow complicit, and about handing over all evidence. Gideon believes the FBI is hacking his email; he knows someone is looking at it. SecurityCenter Continuous View™ (SecurityCenter CV™) provides a full solution for monitoring events such as password changes, login and logoff events, and many of the other activities Gideon is worried about. The strength of SecurityCenter CV is its combination of data from several sources: Nessus® actively scans and interrogates systems for vulnerabilities and configuration issues, passive monitoring watches the same systems for vulnerabilities and malicious behavior, and log collection gathers events from network devices and host operating systems. To further support the security team in identifying risk, SecurityCenter CV has a feed of reports, Assurance Report Cards (ARCs), and dashboards to monitor such changes. For example:

CSF Dashboard Main User

The exploit and post-exploitation

Later in the episode, Darlene uses the Social-Engineer Toolkit (SET), written by Dave Kennedy at TrustedSec, to run a third-party module that creates an executable file called “cryptowall.exe” on a USB thumb drive. She gives this drive to Mobley, her right-hand man, to plug into a workstation at E-Corp. CryptoWall belongs to a relatively new family of malware known as ransomware, the most famous example being CryptoLocker. The malware encrypts files on the operating system and requires payment to decrypt them.

Ransomware is quickly becoming the most prevalent malware grabbing headlines. While not typically used by penetration testers, the malware infection method used in this episode is a real and serious threat to our security. Training users on the authorized use of systems and monitoring the activity of privileged users are paramount to an organization's security. A USB drive and an unhappy employee can often circumvent the best security solutions.

However, tracking USB insertions is not the only issue plaguing E-Corp. They are also failing to properly secure the edge of the network with proxy servers and egress filters. If you follow the commands Darlene uses in the SET program, she sets the call-home server to 192.251.68.254. This is actually a publicly accessible IP address owned by NBC, but it appears to be a C2 address serving some interesting base64-encoded text. The SET tool also points to port 80, which is used for HTTP. While HTTP is a difficult port to filter or block, by using a proxy server the egress points in the network can be configured to block port 80 from all systems except the proxy server. Had this been configured, and had Tenable NetFlow Monitor and Tenable Network Monitor collectors been in place, the call home might have been blocked or, at the very least, tracked more efficiently. SecurityCenter CV has the ability to identify traffic flows using port and IP combinations, and with the advanced correlation abilities found in Log Correlation Engine™ (LCE®), anomalies and first-time events can easily be identified. Here are a few SecurityCenter CV dashboards that are helpful when analyzing traffic patterns:

CSF Dashboard Continuous Monitoring
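
To make the egress point concrete, here is a minimal Python sketch of the kind of check described above, run against exported flow records. The record format, proxy address and watchlist are assumptions for illustration; in a SecurityCenter CV deployment, components such as the NetFlow Monitor, Network Monitor and LCE perform this kind of correlation for you.

PROXY = "10.0.0.5"                  # assumed address of the authorized web proxy
WATCHLIST = {"192.251.68.254"}      # the call-home address used in the episode

# Hypothetical flow records exported from a NetFlow collector
flows = [
    {"src": "10.0.3.17", "dst": "192.251.68.254", "dport": 80},
    {"src": "10.0.0.5",  "dst": "93.184.216.34",  "dport": 80},
]

def suspicious_egress(flows, proxy=PROXY, watchlist=WATCHLIST):
    """Flag port 80 egress that bypasses the proxy, plus any traffic to watchlisted IPs."""
    findings = []
    for f in flows:
        if f["dport"] == 80 and f["src"] != proxy:
            findings.append(("port 80 egress bypassing the proxy", f))
        if f["dst"] in watchlist:
            findings.append(("traffic to a watchlisted address", f))
    return findings

for reason, flow in suspicious_egress(flows):
    print(reason, flow)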

The regime

This first episode focuses on Elliot’s regime: maintaining control of his mind by maintaining control of information. Many organizations hold a similar illusion about their security posture, as they may do the same thing day after day. They only scan the safe IPs and never truly scrutinize the network. By practicing continuous monitoring, and looking into the far reaches of your network, you can begin to see the true environment and assess the true risk. Tenable SecurityCenter CV offers five sensors - active scanning, intelligent connectors, agent scanning, passive listening, and host data - to deliver comprehensive intelligence that protects your organization. SecurityCenter CV can help security professionals expand their daily regime to include continuous visibility and context that facilitates decisive action.

Thanks to Matt Hand for his contributions to this article.

Post-Hunt Survival Skills: Scope and Triage


Inevitably, when you threat hunt, you will find something. What happens next? A barrage of questions ensues:

  • Is it an incident, administrative activity, or an external attacker in your environment?
  • How did the attackers get in, what did they touch, and what systems and services are impacted?
  • Is it simply a misconfiguration of a service, uncovered by the hunt?

Answering these questions can be the most unexpectedly challenging aspect of threat hunting, depending on the size and maturity of your organization. How do you determine when to escalate and call in responders?

Tenable provides many features to find and scope the breadth of an incident prior to the fire drill

The time to deploy new security technologies for response and forensics is not during an incident. Leveraging simple tools that you already have in your arsenal to follow up the hunt is essential to ensuring success. Tenable provides many features that help you find and scope the breadth of an incident before the fire drill begins, making it less frantic and arming incident responders with the critical information they need for quick, decisive action.

Continuous endpoint data collection

Tenable captures ongoing information about hosts in several categories:

  • AutoRuns and Scheduled Tasks
  • File Downloads
  • Host Changes
  • Network Traffic
  • Processes Launched
  • Software/Services Installed
  • Threat Intelligence and Malware Indicators
  • User Creation and Modification

This data—already captured in your environment through regular monitoring—can provide the essential information to scope an incident and to identify what happened during an event.

Host persistence - autoruns and scheduled tasks

We’ve blogged before about leveraging the power of Nessus® with built-in threat intelligence to detect malicious or unique autoruns, scheduled tasks, and other registry entries that are signs of attacker persistence.

Autoruns and Scheduled Tasks

Output

Every scan that has the Windows plugin family enabled, whether scheduled or ad hoc, includes a host of autorun and other detections that capture this invaluable information for responders to work with.

Monitoring host changes

The attackers will have touched something to further their objective: running reconnaissance commands, installing persistent backdoors or credentials, or accessing and changing files. All of these things leave digital footprints that the responder can follow to recreate what happened. Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) host data sensors continuously monitor activity at the system level, capturing authentication, system changes, files in monitored directories, programs and processes launched, as well as anomalous system activity such as unique executables and binaries or unusual commands launched by a user.

ASD Top 4 Mitigation Strategies

Lateral movement and exfiltration

Lateral movement between hosts in your environment is almost as critical to an attacker’s success as the initial entry point. When attackers remain in one place for too long, they get caught. The upside is that as attackers traverse hosts, they leave a spider web of network activity tracking back to the initial entry point. SecurityCenter CV continually monitors network traffic at multiple layers, from flow information through protocol inspection, and stores all of the findings without the storage overhead of deep packet inspection tools.

Unexpected connections between hosts are flagged as anomalies, particularly when using an administrative protocol such as SSH or VNC. Traffic is also flagged for abnormally large sizes and abnormally long durations, useful for detecting attempted data exfiltration.

Passive Network Forensics Anomalies
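
As a rough sketch of this kind of flagging, the Python below applies fixed thresholds to hypothetical flow records. The field names and thresholds are assumptions for illustration only; SecurityCenter CV derives its own baselines rather than using static cutoffs.

ADMIN_PORTS = {22: "SSH", 5900: "VNC"}   # administrative protocols called out above
MAX_BYTES = 500 * 1024 * 1024            # assumed "abnormally large" transfer threshold
MAX_DURATION_SECS = 4 * 3600             # assumed "abnormally long" session threshold

def flag_flows(flows):
    """Yield (reason, flow) pairs for internal flows that look anomalous."""
    for f in flows:
        if f["dport"] in ADMIN_PORTS:
            yield (f"unexpected {ADMIN_PORTS[f['dport']]} connection between hosts", f)
        if f["bytes"] > MAX_BYTES:
            yield ("abnormally large transfer (possible exfiltration)", f)
        if f["duration"] > MAX_DURATION_SECS:
            yield ("abnormally long-lived session", f)

flows = [
    {"src": "10.0.2.14", "dst": "10.0.9.2", "dport": 22,  "bytes": 12000, "duration": 90},
    {"src": "10.0.2.14", "dst": "10.0.9.2", "dport": 443, "bytes": 900000000, "duration": 20000},
]
for reason, flow in flag_flows(flows):
    print(reason, flow)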

Finally, threat intelligence plays a role here as well. As discussed in Threat Hunting 201, all inspected traffic is automatically matched against intelligence sources to immediately flag attempts to communicate with known malicious destinations.

Passive Network Forensics Suspicious Activity

Targeted triage audits

Once an initial scope has been put together, you need a targeted data capture of the hosts involved to grab artifacts and to build an incident timeline. Nessus can facilitate data capture on hosts using customized audit files to run commands, pull information and organize the data.

For Windows hosts, running PowerShell Cmdlets from within Nessus policies opens up some very innovative use cases that rapidly perform targeted forensic searches across your environment and output the results as compliance findings in Nessus and SecurityCenter.

Compliance findings

PowerShell forensics is a rapidly emerging method for dealing with Windows incidents, and Nessus can facilitate calls to other tools like Invoke-IR PowerForensics by wrapping it in logic-based automation using an audit file.

Audit files are not limited to Windows-only environments; in Linux/Unix environments they can also run CMD_EXEC checks, which execute a shell command and then compare the resulting output against an expected pattern. The possibilities here are limited only by your team’s imagination and command-line familiarity.

# Example CMD_EXEC check: run a shell command on the target host and compare
# its output against the regular expression in "expect".
<custom_item>
type: CMD_EXEC
description: "Make sure that we are running FreeBSD 4.9 or higher"
cmd: "uname -a"
timeout: 7200
expect: "FreeBSD (4\.(9|[1-9][0-9])|[5-9]\.)"
dont_echo_cmd: YES
</custom_item>

As with all forensic info-gathering, use with caution and understand what commands you are automating with Nessus, as well as their expected output. In an ideal scenario, your security team would preemptively build files with frequently needed triage commands and just leverage these audits when incidents occur to save time and ensure that scripts work as intended. More mature organizations can automate the whole process using SecurityCenter CV, and launch targeted Nessus audits against hosts triggered by real-time detections like high-level indicator events to provide a complete package of data before you even know you need it.

Using your existing security tools in unexpected ways not only makes your team more agile in their response but it also helps to reduce the crisis mindset

Hunting forces many organizations to deal with security incidents that they have never prepared for. Using your existing security tools in unexpected ways not only makes your team more agile in their response but it also helps to reduce the crisis mindset by leveraging familiar technologies that the team is already comfortable with. Tenable Nessus and SecurityCenter CV are extremely useful and versatile tools in your arsenal to thwart the bad guys—scoping incidents and quickly gathering artifacts, while continuously uncovering weaknesses in your environment and managing your vulnerabilities.

More information

See my previous blogs on threat hunting techniques:

And visit our website to learn more about Tenable’s Threat Hunting solution.

Metric-Based Security


Gavin Millard, Tenable’s Technical Director for EMEA, is a popular speaker and expert on information security. His presentations on security metrics always draw a crowd and impart insights and practical tips on implementing a security assurance program in any organization.

 

We recently caught up with Gavin at Infosec Europe to pick his brain about security metrics. What is metric-based security? How do you select metrics that are relevant to your business? What frameworks and controls are the best to start with? Listen as Gavin addresses these issues to help you implement an effective metrics program.

Continuous Visibility of Vulnerabilities is More Critical than Ever for CISOs and Security Posture


Traditionally, vulnerability scanning—credentialed and/or non-credentialed—has been predominantly a security compliance exercise driven by regulations (e.g. HIPAA, FFIEC) or industry standards (e.g. PCI). The underlying rationale is that periodic scanning is a basic security hygiene process to identify and prioritize vulnerabilities and to demonstrate that security vulnerabilities are regularly patched or fixed. Most organizations perform quarterly scans, some monthly, some less often than that.

Much like housekeeping, vulnerability scanning is a continuous process: you are never done. As vulnerabilities are patched or remediated, new ones are discovered even in a relatively static IT environment. 

As cyber threats increase at an accelerating pace, staying abreast of vulnerabilities and having near or real-time visibility into the vulnerabilities of the network is even more critical.

Why continuous scanning and visibility of vulnerabilities is critical

There are two major reasons why continuous visibility of vulnerabilities is critical in today’s cybersecurity environment:

  • The accelerating pace at which new devices are continuously on-boarded into the IT environment
  • The increasing need to respond rapidly when a security breach or event occurs

Virtual computing and mobile devices are also increasing the rate of change within the IT environment. Virtual computing, whether on-premises or in the cloud, means that new system hosts are continuously being discovered on the network. Whereas it used to take weeks for a new host to be provisioned or deployed, hosts are now deployed within minutes. So every hour or day a new host may appear on the network which, if it is not scanned, introduces new vulnerabilities or re-introduces previously eradicated ones.

BYOD, driven by the rapid adoption of mobile computing within the workplace, is also accelerating the rate of change within the network. Mobile devices are constantly logging into the network and introducing potential new vulnerabilities, again reinforcing the need to scan these devices when they appear on the network.

Finally, in today’s cybersecurity environment, it is not a question of if you will be breached or if a security event will occur, but when. Therefore, the most critical factor is to respond rapidly when a breach or event occurs. This means knowing your IT assets and security state. It means knowing which assets are vulnerable and need to be patched immediately. It means continuous scanning to ensure a known exploited vulnerability is not re-introduced into the network.

Conclusion

Periodic vulnerability scanning is no longer sufficient; continuous scanning is now required.

The accelerating rate of change within the network—driven predominantly by virtual computing, whether on-premises or in the cloud, and by mobile computing—also demands accelerated vulnerability scanning capabilities and processes.

Finally, continuous vulnerability scanning is a critical and necessary component of responding to a cyber breach or event caused by exploited vulnerabilities.

Find out more about Continuous Monitoring in Tenable’s Solution Story.

Regional Cloud Adoption


As Tenable’s EMEA Technical Director, Gavin Millard has his finger on the pulse of information security internationally. He stays in touch with customers and practitioners via presentations and visits to organizations throughout England, Europe and the Middle East.

 

During InfoSec Europe 2016, Gavin took time to share his thoughts on cloud adoption and security. Listen as he summarizes the state of cloud security throughout the world, identifies early adopters and points out the benefits of a cloud-first approach.

Mr. Robot Asks: Are You Hacker Proof?


In Mr. Robot Season 2 Episode 4, as Darlene and Elliot watch a horror movie, Darlene suggests calling a dinner delivery service called Postmates - a service she hacked using code injection. As they chat, Elliot recalls his previous job. He remembers the details of his pen testing projects, and says it was his job to keep hacking until the company’s systems were “hacker proof.” I asked myself, what is “hacker proof” and is such a thing even possible?

What is “hacker proof”?

The answer I came up with is that “hacker proof” must be like “waterproof.” When you buy a waterproof watch or phone case, the “proof” is really “resistance,” meaning the device is only waterproof to a certain depth. When you learn to scuba dive, you are taught that our atmosphere exerts 14.7 psi (pounds per square inch) at sea level, so for every 33 feet under water, your body has another atmosphere of pressure exerted on it. The watch or camera case may be waterproof to 33 feet before the pressure overcomes the seals. So the device is water resistant to 33 feet, or 1 atmosphere. The more expensive the watch or case, the more atmospheres of pressure it can withstand, but the device is not truly waterproof - not at all. A similar case can be made for making a system or network “hacker proof.”

Postmates was obviously not “hacker proof,” as Darlene was able to hack one of their proxies and get it to send users to her affiliate link so she could take credit for referring customers to the Postmates website. The net result is that Darlene receives a $10 coupon any time anyone places an order. Code injection is a common goal of malicious users, who look to compromise a popular website and modify its code using an IFrame or injected JavaScript. The malicious code is then executed each time the web page is loaded in a browser. Nessus® has several plugins used to identify malicious URLs on web pages, such as Web Site Links to Malicious Content (52670). Malicious users may also post a comment in a blog or other feedback system to inject the malicious code. Users of SecurityCenter™ and Nessus can use this plugin - and many others - to review website content for malicious activity. SecurityCenter users can also download the Web Services Indicator dashboard to easily monitor for security issues related to web services.
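
To illustrate the general idea behind such checks (this is only a minimal sketch, not how the Nessus plugin works internally; the target URL is hypothetical), the snippet below parses a page and reports iframes and scripts that load content from domains other than the site itself, so they can be reviewed against a reputation feed or blocklist.

# injected_content_check.py - minimal sketch: list iframe/script sources on a
# page that point to third-party domains. Not a substitute for a real
# malicious-URL scanner; the page URL below is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ExternalSourceFinder(HTMLParser):
    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host
        self.external = []

    def handle_starttag(self, tag, attrs):
        # Record iframe/script tags whose src lives on another host
        if tag in ("iframe", "script"):
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host and host != self.site_host:
                self.external.append((tag, src))

page_url = "http://www.example.com/"   # hypothetical page to review
html = urlopen(page_url).read().decode("utf-8", errors="replace")
finder = ExternalSourceFinder(urlparse(page_url).netloc)
finder.feed(html)
for tag, src in finder.external:
    print(f"review external {tag} source: {src}")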

Web Services Indicator Dashboard

Multiple defenses

The security industry has talked about defense-in-depth for several years. While some professionals say the concept is no longer effective, others say the practice is still alive and proves its worth every day. However, just like the “waterproof” watch, you need to consider to what depth your system is hacker proof. Can your system resist one level of compromise, two levels, or three? Securing a network is not a single solution; your security plan must include several layers, such as firewalls, IDS, anti-virus, and more, which may protect you up to one or two compromises deep. However, some of the most effective security measures are often not implemented because they can be hard to manage and administer. Implementing hardening standards suggested by organizations such as the Center for Internet Security (CIS) or the National Institute of Standards and Technology (NIST) can provide some of the most effective security controls.

Customers using SecurityCenter and Nessus have the ability to scan systems for hardened configurations using over 400 audit files provided by Tenable. The audit files are created using guidelines from CIS, NIST, PCI DSS, and many other governing bodies. When system hardening is combined with good patch management and other security solutions, an organization has a more “hacker resistant” system; should one layer fail, other layers remain to protect the system.
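
As a simple illustration of what a single hardening check looks like (a sketch only; the Tenable .audit files are far more comprehensive), the snippet below verifies one common CIS-style Linux setting, the maximum password age in /etc/login.defs.

# check_pass_max_days.py - sketch of one CIS-style hardening check: verify
# that /etc/login.defs caps password age at 90 days or less. The threshold
# follows common benchmark guidance; adjust it to your own policy.
import re

MAX_ALLOWED = 90
value = None
with open("/etc/login.defs") as f:
    for line in f:
        m = re.match(r"^\s*PASS_MAX_DAYS\s+(\d+)", line)
        if m:
            value = int(m.group(1))

if value is None:
    print("FAIL: PASS_MAX_DAYS is not set")
elif value > MAX_ALLOWED:
    print(f"FAIL: PASS_MAX_DAYS is {value}, expected {MAX_ALLOWED} or less")
else:
    print(f"PASS: PASS_MAX_DAYS is {value}")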

CSF Compliance and Device Hardening

Continuous visibility

Later in the episode, Elliot says that accepting help is not Darlene’s strong suit. He then suggests that everyone has a problem accepting help, and says that the only way to deal with a vulnerability is to expose it first - meaning you can’t solve a security problem you don’t know about. Tenable SecurityCenter Continuous View™ constantly analyzes information from five sensors - Active Scanning, Intelligent Connectors, Agent Scanning, Passive Listening, and Host Data - giving organizations the help they need to gain a more complete and honest view of the operational risks related to information security. Tenable’s continuous visibility and multiple sensors go a long way toward helping you answer the question, “Are we hacker proof?”

Network Monitoring: A Vital Component of Your Security Program

Despite the many advances in information security, organizations are still experiencing breaches. Whether the root of an attack is human error or system weakness, network monitoring can help detect suspicious behavior and measure security configurations.

Network monitoring for information security has a long history. In a recent series in CSO, Dick Bussiere details the history of cybersecurity countermeasures, the current state of affairs, and trends in network monitoring. Read Network Monitoring: Past, Present and Future for a fascinating perspective.

Read the full article


Black Hat 2016: Resilience and Community

The Black Hat conference is one of the most fun and dynamic security conferences of the year. Infosec analysts, hackers, researchers, pen testers, practitioners and vendors all come together to challenge each other, learn new techniques, and renew friendships. If you missed the 2016 conference, held in Las Vegas July 30-August 4, Tenable’s Cris Thomas (aka Space Rogue) shares his observations on hot topics, trends, and news.

Detecting Mr. Robot Malware

Season 2 Episode 5 of Mr. Robot starts with Elliot preparing a malicious payload delivery system that we later find out is a femtocell. The goal is to create a Man in the Middle (MitM) attack vector using the femtocell. A femtocell “is a small, low-power cellular base station, typically designed for use in a home or small business” (Wikipedia). The idea is to create a small device that can be battery operated or powered using Power over Ethernet (PoE) and placed as a malicious cell tower in E Corp, on the floor where the FBI agents are investigating the breach. Mobile devices, such as the FBI Android phones, periodically check for updates from a centralized update service, and this service delivers Android Application Package (APK) files. Elliot’s goal is to deploy a MitM device that will send a malware package instead. For example, Elliot could use a tool like Evilgrade to perform such an attack (we aren’t sure which tool Elliot is using). From this point, the malware would dial home and request a list of post-exploitation activities. Once Elliot compromises several FBI or E Corp Android devices, he will have access to stored credentials and other data. With stored credentials, Elliot will have root, admin, or God-level access, which he describes as the thrill of pwning a system.

Do you think you could detect the malicious activity and malware in the five steps described in this episode?

Step 1 - Identify the target and its flaws

As Elliot describes his plan to infiltrate the FBI, he says the first step is to identify the targets and flaws or vulnerabilities in the target system. While Elliot does not go into details about how he identified the target, he has identified the most common mobile device supposedly used by the FBI. Many adversaries use software to sniff the airwaves and detect user-agent strings, or may use network scanning methods to identify systems based on open ports and other behaviors.

By using SecurityCenter Continuous View (SecurityCenter CV) and sending it all the logs from firewalls, IDS, and other systems, the Tenable Log Correlation Engine (LCE®) can easily identify scanning activity. LCE uses advanced correlation technology to identify network probing activity across all available log sources.
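
The underlying idea can be shown in a few lines of Python (a sketch only, nothing like LCE’s correlation logic; the log export format is an assumption): count how many distinct destination ports each source address touches and flag anything that looks like a probe.

# probe_detector.py - sketch of port-scan detection from firewall logs.
# Assumes a CSV export with "src_ip" and "dst_port" columns; real products
# such as LCE correlate far more context than this.
import csv
from collections import defaultdict

THRESHOLD = 100  # distinct destination ports from one source before alerting

ports_by_source = defaultdict(set)
with open("firewall_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        ports_by_source[row["src_ip"]].add(row["dst_port"])

for src, ports in sorted(ports_by_source.items()):
    if len(ports) >= THRESHOLD:
        print(f"possible scan: {src} touched {len(ports)} distinct ports")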

Step 2 - Build malware and prepare the attack

Custom-built malware can be difficult to identify using traditional methods; Tenable uses more advanced, behavior-based methods. For example, Nessus® supports several plugins that monitor AutoRuns, running processes, and other forensic indicators of malware. When used with SecurityCenter CV, the Passive Vulnerability Scanner (PVS) and LCE can also detect anomalous traffic patterns and Never Before Seen (NBS) events. Gaining an understanding of your network is key to recognizing when anomalies occur, and the Verizon 2016 DBIR - Incident Pattern Monitoring dashboard in SecurityCenter CV provides a series of indicators when this activity is prevalent.
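
The behavioral idea is straightforward to sketch (hypothetical file names and formats, and nothing like the full plugin logic): hash the binaries referenced by autorun locations and compare them to a known-good baseline, flagging anything never seen before.

# autorun_baseline.py - sketch of a "never before seen" check: hash the
# executables listed in an autoruns export and flag any hash that is not in
# a previously recorded baseline. File names and formats are hypothetical.
import hashlib
import json
import os

BASELINE = "autorun_hashes.json"   # known-good SHA-256 hashes from prior runs
EXPORT = "autorun_entries.txt"     # one executable path per line

def sha256(path):
    """Return the SHA-256 hash of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

known = set()
if os.path.exists(BASELINE):
    with open(BASELINE) as f:
        known = set(json.load(f))

current = {}
with open(EXPORT) as f:
    for line in f:
        path = line.strip()
        if path and os.path.isfile(path):
            current[path] = sha256(path)

for path, digest in current.items():
    if digest not in known:
        print(f"never-before-seen autorun binary: {path} ({digest[:16]}...)")

# Update the baseline so the next run compares against everything seen so far
with open(BASELINE, "w") as f:
    json.dump(sorted(known | set(current.values())), f)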

Verizon 2016 DBIR dashboard

Step 3 - Load malware into the delivery system

While there are several steps within Elliot’s step 3 (a reverse shell, a two-stage exploit, the delivery system), the key point is to deliver the femtocell, exploit the targeted phones, and then pivot from the phones to attack the network. This third step can be the hardest for an attacker to pull off. In Season 1, delivery systems included USB flash drives and a CD disguised as a music disc - much easier to deliver. But this season, the 23rd floor is locked down and physical access is tight. How can they gain access? Elliot and Darlene must use an insider: Angela.

Using the Insider Threat Dashboard, SecurityCenter users can draw on all input sources to identify malicious insider activity. Additionally, detecting new devices on the network is a key defense against this threat vector; the CIS CSC: Devices and Ports (CSC 1,9) dashboard provides users with a great view into new device activity. These dashboards and many others give your security operations team clear insights for identifying new threats to the network.

Steps 4 & 5 - Write the script and launch the attack

Elliot understands that to truly gain access to systems, his malware must launch a stable and reliable script to send credentials and other critical information to allow further penetration. Episode 5 does not provide enough information about the success of the attack, but we can be sure the scripts are ready to cause havoc for the FBI and E Corp.

SecurityCenter CV provides a highly effective solution to help identify and track malicious activity. Customers have the ability to identify threats using data collected from several sensors: Active Scanning, Intelligent Connectors, Agent Scanning, Passive Listening, and Host Data. By looking into all available data, your organization can go a long way towards detecting each of Elliot’s steps to a penetration. Can your security program detect all 5 steps?

Enabling the Risk Management Framework

Moving beyond periodic certification of information systems to the Risk Management Framework requires standardizing and automating the assessment process.

Making decisions based on outdated information is a recipe for failure. No matter how good the data at the time it is collected, if it cannot be used quickly it loses its value for making critical cybersecurity decisions.

This has been demonstrated by federal agencies struggling to secure their information systems using Certification and Accreditation (C&A) schemes that call for periodic certification of static security controls. Government is now moving beyond C&A to continuous assessment of security status under a Risk Management Framework (RMF). But challenges remain in automating the processes needed to use the framework.

The Risk Management Framework

The Risk Management Framework is a risk-based approach to cybersecurity intended to enable continuous response to threats.

Under the Federal Information Security Management Act (FISMA), now the Federal Information Security Modernization Act, IT systems were certified and accredited for operation every three years. Each agency or department had its own C&A process. Information was maintained in data silos, and there was no standardization and no reciprocity. The result was a time-consuming and labor-intensive process in which security controls were rarely up to date and each agency had to duplicate basic work.

The Risk Management Framework is a risk-based approach to cybersecurity intended to enable continuous response to threats

RMF is a unified framework for assessing organizational risk posed by IT systems, and managing that risk by selecting the appropriate security controls. The framework supports continuous assessment of security as the security status changes throughout the system lifecycle.

RMF includes six steps:

  • Step 1: Categorize the system and the information using impact analysis.
  • Step 2: Select an appropriate set of baseline security controls based on the potential impact and tailored to the assessment of risk.
  • Step 3: Implement those controls and document their deployment.
  • Step 4: Assess whether security controls are implemented correctly, operating as intended, and producing the desired outcome.
  • Step 5: Authorize the system’s operation based on a determination that its risk is acceptable.
  • Step 6: Monitor security controls on an ongoing basis to assess effectiveness, document changes to the system and environment, conduct security impact analyses of the changes, and report the security state.

Where the process falls short

Carrying out these steps requires access to near-real-time data on which to base decisions. Strides have been made in automating data collection to support the RMF. The Security Content Automation Protocol (SCAP) is a set of specifications assembled by the National Institute of Standards and Technology that lets commercial security tools interoperate and automate information gathering.
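
For example, SCAP-validated scanners emit XCCDF result files that downstream tools can consume. The sketch below (deliberately namespace-agnostic, with a hypothetical file name) pulls out the failed rules so they could be fed into an authorization workflow.

# xccdf_failures.py - sketch: extract failing rules from an XCCDF result file
# produced by a SCAP-validated scanner. Matches tags by local name so it works
# regardless of the XCCDF namespace version; the file name is hypothetical.
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the XML namespace from a tag name."""
    return tag.split("}")[-1]

tree = ET.parse("scan_results_xccdf.xml")
failures = []
for elem in tree.iter():
    if local(elem.tag) == "rule-result":
        rule_id = elem.get("idref", "unknown-rule")
        result = next((c.text for c in elem if local(c.tag) == "result"), None)
        if result == "fail":
            failures.append(rule_id)

print(f"{len(failures)} failed rules")
for rule_id in failures:
    print("  " + rule_id)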

The process of accessing and using the data remains time consuming and labor intensive

But the process of accessing and using the data remains time consuming and labor intensive, with many manual steps. Data must be reviewed, standardized, moved to an authorization database, analyzed, and reports generated to support selection of controls and authorization to operate.

This can take months to complete. While this is an improvement over the static three-year certification timeline under the original C&A scheme, it remains inadequate in today’s rapidly evolving cyberthreat landscape. John Chirhart, federal technology director for Tenable, likens this to going to a dentist and waiting three months to get your X-rays back. No matter how good the data when it was gathered, it is outdated by then.

Automating the process

The key to effective risk management is automation and open standards to provide visibility and flexibility

The key to effective risk management is automation and open standards to provide visibility and flexibility to not only see data but to put it into context. Automating workflows gives stakeholders access to the information they need when they need it.

ASSESS (Automated Scalable Solution for Enterprise Systems Security), a cloud-based big data enterprise solution from Secure Innovations, provides access to and visualization of data across disparate systems, so IT and security officials can be confident in the security posture of their IT systems. ASSESS is product agnostic, leveraging industry standards to work with tools from industry-leading vendors, including SecurityCenter Continuous View™ from Tenable Network Security.

It coordinates and standardizes data from assessment and reporting tools, automates workflow, and supports role-based access so that stakeholders see only what they need. ASSESS is customizable to support compliance with any government or industry regulations. The result is a flexible and scalable solution that shortens the time for system authorization under the RMF from months to days, reducing costs while improving situational awareness and security.

Learn more

Vulnerability Management with Nessus in the Cloud

Regardless of whether you’re running applications and storing data in a physical, virtual or cloud environment (or a hybrid mix), a key responsibility for you as a security professional is to keep that environment free from vulnerabilities that attackers could use to get at your organization’s applications and data.

In some ways, managing vulnerabilities in the cloud is the same as managing vulnerabilities in other IT environments

In some ways, managing vulnerabilities in the cloud is the same as managing vulnerabilities in other IT environments. As we’ve highlighted in a previous blog, successful vulnerability management (VM) programs involve much more than simply putting a tool in place to scan IPs for flaws. Successful VM programs in any environment have stated goals, involve multiple stakeholders as active participants in the program, and address both security and compliance requirements. These core components of a successful program don’t change in any IT environment.

However, cloud technologies can introduce unique security and compliance challenges, many of which aren’t easily addressed with traditional vulnerability management tools and techniques. These challenges include:

Shared security responsibility

When an organization moves applications and data to the cloud, it shifts some - but not all - security responsibility to the cloud provider. Most cloud providers are responsible for securing their cloud infrastructure (such as physical data center security), while the cloud user is responsible for the applications and data running on the cloud platform. As a cloud user, it’s critical to know how responsibility is shared, since it can vary between cloud providers, and how to securely deploy and configure the applications and operating systems running on that platform.

As a cloud user, it’s critical to know how responsibility is shared

Dynamic environments

It’s so quick and easy to spin up new server instances in the cloud that organizations often create large numbers of them, increasing the number of servers to monitor and manage. But these cloud instances may only exist for a few hours or minutes, so tracking and updating what you have can be a challenge.

More players, different skillsets

Because it’s easy for anyone to spin up their own cloud resources, individuals and teams outside of IT can now deploy and manage their own IT environments. However, some of these individuals won’t have the expertise to set up their cloud environment securely. And even with cloud security expertise, a small configuration oversight can have a negative or even devastating effect. For example, Code Spaces might still exist today if there had been tighter role-based access controls in its cloud environment.

How Nessus helps

Nessus® is designed to help organizations with vulnerability management in any IT environment. With Nessus you get the benefit of using the same product in cloud, virtual and physical environments, so you don’t need to license and learn new tools for each place you’ve deployed your IT.

For vulnerability management in the cloud, Nessus delivers specific capabilities to address the challenges outlined above. For example:

  • Scan for vulnerabilities in cloud instances. Because of shared security responsibility models, it’s important that you scan for vulnerabilities in cloud instances. You can run Nessus natively in the cloud to scan for software flaws and Tenable makes it easy to access and launch Nessus from popular cloud providers like Amazon Web Services and Microsoft Azure.
  • Use agents to scan dynamic assets. Both Nessus Cloud and Nessus Manager include the ability to use agents. You can script the deployment of Nessus Agents so they install automatically with new cloud instances and use them to track and monitor for vulnerabilities as new instances are spun up.
  • Audit for configuration issues. Securely configuring your cloud environment is your responsibility. For example, are you enforcing a strong password policy, and do you flag accounts that haven’t been used in more than 90 days (see the sketch after this list)? To help, Nessus comes with pre-built templates for auditing the configuration of popular cloud providers. We have an on-demand webcast that covers what Nessus provides for Amazon, Microsoft and Rackspace clouds in more detail. Our director of research Mehul Revankar has also written a number of detailed blog articles on what’s available in Nessus.
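
As a minimal illustration of that kind of account check (a sketch, not a Nessus audit; it assumes the boto3 library is installed and AWS credentials and region are already configured in the environment), the snippet below lists IAM users whose console passwords have not been used recently.

# stale_iam_passwords.py - sketch: flag AWS IAM users whose console password
# has not been used in more than 90 days. Assumes boto3 is installed and AWS
# credentials/region are already configured.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for user in iam.list_users()["Users"]:
    last_used = user.get("PasswordLastUsed")   # absent if the password was never used
    if last_used is None:
        print(f"{user['UserName']}: console password never used (or none set)")
    elif (now - last_used).days > MAX_AGE_DAYS:
        print(f"{user['UserName']}: password last used {(now - last_used).days} days ago")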

Summary

While in some ways vulnerability management in the cloud is the same as in any other environment, there are also some key differences. By understanding these differences and addressing them in your program, you can ensure vulnerability management success wherever you have IT assets.

While in some ways vulnerability management in the cloud is the same as in any other environment, there are also some key differences

Is Mr. Robot in Your Network?

In Season 2 Episode 6 of Mr. Robot, Darlene and Angela continue the infiltration of the FBI and E Corp while Elliot is otherwise detained. Because Angela must plant the femtocell in E Corp, members of F-Society help her learn the commands she needs. They also offer an alternative called a “Rubber Ducky,” a USB device that registers itself as a Human Interface Device (HID), or keyboard. Because systems trust HIDs, such a device can bypass policies that block USB storage devices. In the end, however, the infiltration is carried out as planned with the femtocell.

Angela makes her way to the 23rd floor, where the FBI team is working, and sets up the femtocell in the restroom. As she exits the restroom, a young FBI agent notices her unfamiliar face. Instead of checking her ID or other credentials, he takes the opportunity to ask for a date. Knowing she has nearly been caught, Angela plays along and sets up a lunch date. Angela then makes her way to a cube where a small switch is located and puts the femtocell on the network. Darlene sees the femtocell collecting data, but then the data stops. As Angela re-enables the wireless network interfaces on the femtocell, Agent DiPierro interrupts her.

USB device tracking

Angela did not use the Rubber Ducky, but the threat of such a device is very real. Tenable SecurityCenter Continuous View™ (SecurityCenter CV™) supports two methods of detecting USB device installation.

The first method uses active scanning of a Windows host and provides a full history of USB devices. There are three plugins that provide a record of USB devices attached to a system:

  • USB Drives Enumeration (24274)
  • Microsoft Windows USB Device Usage Report (35730)
  • Microsoft Windows Portable Devices (65791)

The plugins analyze the registry and native commands to report on USB devices such as storage devices, multimedia devices and human interface devices.
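
Conceptually, the registry portion of that reporting looks something like the sketch below (run locally on a Windows host with sufficient privileges; the plugins themselves do considerably more, including remote collection during authenticated scans).

# usb_history.py - sketch: enumerate USB storage devices recorded in the
# Windows registry under HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR.
# Real plugins collect this remotely and cover more device classes.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
    device_count, _, _ = winreg.QueryInfoKey(root)
    for i in range(device_count):
        device_class = winreg.EnumKey(root, i)        # e.g. Disk&Ven_...&Prod_...
        with winreg.OpenKey(root, device_class) as dev:
            instance_count, _, _ = winreg.QueryInfoKey(dev)
            for j in range(instance_count):
                serial = winreg.EnumKey(dev, j)       # device instance / serial
                print(f"{device_class} -> {serial}")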

SecurityCenter CV also includes USB analysis with the Log Correlation Engine™ (LCE®). For Windows hosts running the LCE Agent, the installation or removal of a USB device is detected within 15 minutes of the event. Each time a USB device is connected to a system, the LCE agent sends a log notification to LCE.

USB Events

Detecting new devices on the network

There are many solutions in the marketplace, called Network Access Control (NAC) systems, that detect when a new device joins the network. Many of these NAC systems run some type of RADIUS server and interact with wired and wireless network infrastructure. The RADIUS server then performs authentication against Active Directory or another user repository. While these systems are great for detecting obvious attempts to gain access, what about attempts that are not so obvious? Do stealthier attempts get tracked in the same manner?

SecurityCenter CV uses several methods to detect systems on the network and can alert both the help desk and security operations when such an event occurs. The LCE gives SecurityCenter users the ability to track when a new account is detected on a computer, when a device first connects to the network, and when a user runs a command for the first time. These three events are only a small sampling of the Never Before Seen (NBS) events that can be used when creating alerts within SecurityCenter. When SecurityCenter and LCE are fully implemented to monitor DHCP servers and network devices, other events - such as switch port state changes, new MAC addresses, and DHCP lease assignments - can also be used to track network access.
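
The core of a never-before-seen check is easy to sketch (the lease export format and file names are assumptions, and this is nothing like LCE’s full correlation): keep a persistent set of MAC addresses that have been seen before and alert the first time a new one requests a DHCP lease.

# new_mac_alert.py - sketch of a "never before seen" device check: compare MAC
# addresses from a DHCP lease export against those seen on previous runs and
# report newcomers. The export format and file names are hypothetical.
import json
import os

SEEN_FILE = "seen_macs.json"
LEASES = "dhcp_leases.txt"      # one "mac-address ip-address hostname" per line

seen = set()
if os.path.exists(SEEN_FILE):
    with open(SEEN_FILE) as f:
        seen = set(json.load(f))

new_devices = []
with open(LEASES) as f:
    for line in f:
        parts = line.split()
        if parts:
            mac = parts[0].lower()
            if mac not in seen:
                new_devices.append(line.strip())
                seen.add(mac)

for entry in new_devices:
    print("never-before-seen device: " + entry)

with open(SEEN_FILE, "w") as f:
    json.dump(sorted(seen), f)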

Host Discovery Dashboard

The Daily Host Alerts and Host Discovery dashboards provide valuable information about new host activity from many sources. The filters in these dashboards can be converted into alerts to help the security operations team identify new devices on the network. By monitoring the switch and DHCP server used by the femtocell, SecurityCenter could send email alerts when these new devices appear, and security ops could begin to investigate whether the system is authorized.

Additionally, SecurityCenter alerts can launch scans of the new systems to provide more information about them. In this episode, as Angela boots her computer off the thumb drive, the Passive Vulnerability Scanner™ (PVS™) would have detected a new OS that had not been scanned before. If no Linux systems were authorized in that subnet, SecurityCenter could have launched a scan and emailed a further alert to the security ops team.

When a security operations team fully embraces SecurityCenter CV, there are many layers of detection available to identify unauthorized systems on the network. Whether the attack vector is an unauthorized computer, a network probe, or a USB device, Tenable SecurityCenter CV helps combat the attack and protect your organization.

But what about Angela? Is she caught red-handed, or has she eluded capture by the FBI yet again?
