Check out the NSA’s 10 key best practices for securing cloud environments. Plus, learn how cloud native computing could help streamline your AI deployments. Meanwhile, don’t miss the latest about cyberthreats against water treatment plants and critical infrastructure in general. And much more!
Dive into six things that are top of mind for the week ending March 22.
1 - Ten best practices for beefing up cloud security
Looking for advice on boosting the security of your cloud environment? Check out the U.S. National Security Agency’s new “Top Ten Cloud Security Mitigation Strategies” for improving an organization’s cloud security posture.
“As organizations shift their data to the cloud for ease of processing, storing, and sharing, they must take precautions to maintain parity with on-premises security and mitigate additional cloud-specific threats,” reads the NSA document.
Here are the 10 best practices:
- Understand your cloud service providers’ shared responsibility model, so that you know which security tasks fall on your shoulders and which ones are handled by your CSPs.
- Adopt secure practices for identity and access management (IAM), such as using multi-factor authentication and properly managing temporary credentials.
- Employ secure cloud key-management practices.
- Implement network micro-segmentation and end-to-end encryption.
- Protect cloud data by, for example, enforcing least privilege; creating immutable backups; and using object versioning.
- Secure continuous integration and continuous delivery (CI/CD) pipelines with, for example, strong IAM, log audits and secrets management.
- Use infrastructure-as-code to automate deployment of cloud resources.
- Prevent security gaps in hybrid and multi-cloud environments by, for example, using vendor-agnostic tools to manage and monitor multiple environments from a single location.
- Ensure that your managed service providers (MSPs) employ strong security standards and practices.
- Monitor and analyze cloud logs to detect anomalous events and potential compromises.
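Several of these practices lend themselves to automation. As a minimal illustration of the last item, monitoring cloud logs for anomalous events, here is a hedged Python sketch; the log format, field names, actions and thresholds below are assumptions for illustration, not any cloud provider's real schema:

```python
import json
from collections import Counter

# Illustrative audit-log entries. Real cloud log formats (e.g. AWS CloudTrail)
# differ; the field names and action names below are made up for this sketch.
SENSITIVE_ACTIONS = {"DeleteBucket", "PutUserPolicy"}

events = [
    {"user": "alice", "action": "ConsoleLogin", "source_ip": "203.0.113.10"},
    {"user": "alice", "action": "ConsoleLogin", "source_ip": "203.0.113.10"},
    {"user": "alice", "action": "ConsoleLogin", "source_ip": "198.51.100.7"},
    {"user": "bob", "action": "DeleteBucket", "source_ip": "203.0.113.10"},
]

def flag_anomalies(events):
    """Flag sensitive actions, plus logins from an IP a user has used only once."""
    ip_counts = Counter((e["user"], e["source_ip"]) for e in events)
    flagged = []
    for e in events:
        if e["action"] in SENSITIVE_ACTIONS:
            flagged.append(("sensitive-action", e))
        elif e["action"] == "ConsoleLogin" and ip_counts[(e["user"], e["source_ip"])] == 1:
            flagged.append(("new-ip-login", e))
    return flagged

for reason, event in flag_anomalies(events):
    print(reason, json.dumps(event))
```

A real deployment would feed this kind of rule from a SIEM or the provider's native log service rather than a hard-coded list, but the core idea, baselining normal behavior per user and flagging deviations and sensitive operations, is the same.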
2 - CNCF: How cloud native can support AI deployments
While organizations have gone ga-ga over artificial intelligence’s potential to revolutionize their operations, it’s no secret that AI systems need lots of computing power to work their magic. This can be a roadblock for organizations otherwise eager to deploy AI and machine learning tools.
If your business is grappling with this issue, you might want to check out a new white paper published this week by the Cloud Native Computing Foundation which looks at how cloud native (CN) computing could help facilitate the adoption of AI and ML systems.
“While CN technologies readily support certain aspects of AI/ML workloads, challenges and gaps remain, presenting opportunities to innovate and better accommodate,” reads the document titled “Cloud Native Artificial Intelligence.”
The paper provides an overview of AI and ML techniques; explains what CN technologies offer; discusses existing technical challenges in areas such as data preparation, model training and user experience; and looks at ways to overcome these gaps.
“The paper will equip engineers and business personnel with the knowledge to understand the changing Cloud Native Artificial Intelligence (CNAI) ecosystem and its opportunities,” the document reads.
For more information about AI’s computing power needs:
- “AI and the Data Center: Challenges and Investment Strategies” (Information Week)
- “Compare GPUs vs. CPUs for AI workloads” (TechTarget)
- “Rising Data Center Costs Linked to AI Demands” (WSJ)
- “The U.S. Just Took a Crucial Step Toward Democratizing AI Access” (Time)
- “Navigating the High Cost of AI Compute” (Andreessen Horowitz)
3 - Biden administration sounds alarm on water plant cyberattacks
Highlighting the U.S. government’s concern with the cybersecurity of water and wastewater treatment plants, the White House invited representatives from all 50 states to discuss the issue.
The virtual meeting, held this week, focused on outlining gaps in cyber defenses; fostering collaboration between federal, state and water-plant leaders; and triggering immediate action.
“Disabling cyberattacks are striking water and wastewater systems throughout the United States,” reads the meeting-invitation letter sent to all 50 governors by the White House.
Although water treatment plants offer a critical service, they tend to have weak cybersecurity, due to a lack of resources and technical know-how, according to the letter, penned by Environmental Protection Agency Administrator Michael Regan and by Jake Sullivan, Assistant to the President for National Security Affairs.
“In many cases, even basic cybersecurity precautions – such as resetting default passwords or updating software to address known vulnerabilities – are not in place,” Regan and Sullivan wrote.
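The first precaution the letter singles out can be partially automated. As a hedged sketch, assuming a made-up asset-inventory format and a made-up list of vendor defaults, a check for unchanged default credentials might look like:

```python
# Hypothetical hygiene check: flag devices in an inventory that still use
# vendor default credentials. The device records and the default-credential
# list below are illustrative examples, not real data.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "1234")}

devices = [
    {"name": "plc-01", "username": "admin", "password": "admin"},
    {"name": "hmi-02", "username": "operator", "password": "S7r0ng!pass"},
]

def find_default_credentials(devices):
    """Return names of devices whose login pair matches a known vendor default."""
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in KNOWN_DEFAULTS]

print(find_default_credentials(devices))  # prints ['plc-01']
```

In practice, utilities would run this kind of check through a vulnerability scanner or configuration-audit tool against their OT asset inventory, rather than a hand-maintained list.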
For more information about protecting water and wastewater systems from cyberattacks, check out these Tenable resources:
- “Shoring Up Water Security: Industry Leaders Testify Before Congress” (blog)
- “City of Raleigh Ensures Safety and Sustainability of Public Water” (case study)
- “Navigating Federal Cybersecurity Recommendations for Public Water Utilities” (blog)
- “The Constant Drip: EPA Water Regulations, Funding Sources, And How Tenable Can Help” (on-demand webinar)
- “Keep the Water Flowing for the DoD: Securing Operational Technology from Cyberattacks” (blog)
Video: Marty Edwards, Tenable Deputy CTO for OT and IoT, testifies during the congressional hearing “Securing Operational Technology: A Deep Dive into the Water Sector”
4 - Critical infrastructure leaders warned about Volt Typhoon
Cybersecurity agencies from the U.S. and other countries want critical infrastructure leaders to take concrete steps to protect their organizations from Volt Typhoon, a hacking group backed by the Chinese government.
In the joint fact sheet “PRC State-Sponsored Cyber Activity: Actions for Critical Infrastructure Leaders,” published this week, the agencies urge leaders of critical infrastructure organizations to take specific steps immediately, including:
- Apply detection and hardening best practices
- Involve representatives from across the business, including executive leaders, in developing comprehensive cybersecurity plans
- Conduct regular tabletop exercises
- Implement stringent vendor-risk management processes to reduce third-party risk
- Align cybersecurity measures among IT, OT, cloud, supply chain and business teams
“The authoring agencies urge leaders to recognize cyber risk as a core business risk. This recognition is both necessary for good governance and fundamental to national security,” the fact sheet reads.
The guidance, jointly issued by cyber agencies from the U.S., Australia, Canada, the U.K. and New Zealand, comes about a month after these same agencies published a joint advisory about Volt Typhoon aimed at IT and OT security teams.
That joint advisory, titled “PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure,” warned that Volt Typhoon has quietly infiltrated the IT and OT environments of multiple critical infrastructure organizations, and could strike at a moment’s notice.
5 - CSA compares and contrasts AI safety and AI security
If you’re involved with ensuring your organization uses AI both securely and responsibly, check out a new blog post published this week by the Cloud Security Alliance that delves into how AI security and AI safety intersect and diverge.
AI security refers to the protection of AI systems from cyberattacks, while AI safety encompasses issues like ethics and fairness.
"While AI safety and AI security have distinct priorities and areas of focus, they are inextricably linked and must be addressed in tandem to create responsible, trustworthy and secure AI systems,” reads the article, titled “AI Safety vs. AI Security: Navigating the Commonality and Differences."
AI security topics addressed include:
- Data privacy, availability and integrity
- Model security and integrity
- System availability
Among the AI safety issues addressed are:
- Lack of transparency
- System bias
- Facial recognition misidentification
“Effective AI governance and risk management strategies should encompass both domains throughout the entire AI lifecycle, from design and development to deployment and monitoring,” reads the article.
For more information about AI security and AI safety:
- “Evaluate the risks and benefits of AI in cybersecurity” (TechTarget)
- “Assessing the pros and cons of AI for cybersecurity” (Security Magazine)
- “8 Questions About Using AI Responsibly, Answered” (Harvard Business Review)
- “Guidelines for secure AI system development” (U.K. National Cyber Security Centre)
- “Envisioning Cyber Futures with AI” (Aspen Institute)
Video: “Building Safe and Reliable Autonomous Systems” (Stanford University)
6 - McKinsey: Four steps to manage GenAI risks
As the generative AI train keeps gathering speed and enterprises everywhere rush to adopt this technology, it’s imperative to properly manage its risks.
If your organization is looking for guidance, check out the most recent advice dispensed by McKinsey in its article “Implementing generative AI with speed and safety.”
Specifically, the management consulting firm recommends that enterprises take these four steps:
- Grasp and respond to inbound risks such as security threats; third-party risk; malicious use; and intellectual property infringement.
- List your generative AI use cases, identify the potential risks of each (such as bias in a customer-service chatbot) and outline mitigation and governance strategies.
- Adapt and expand existing governance by creating a cross-functional generative AI steering group; crafting responsible AI guidelines and policies; and cultivating staff AI skills.
- Develop an operating model for how four critical roles will interact throughout the generative AI lifecycle: designers, engineers, governors and end users.
For more information about managing generative AI risks:
- “Top 10 Critical Vulnerabilities for Large Language Model Applications” (OWASP)
- “A CISO’s Guide: Generative AI and ChatGPT Enterprise Risks” (Team8)
- “Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications” (Stanford University and Georgetown University)
- “Security Implications of ChatGPT” (Cloud Security Alliance)
- “Considerations for Implementing a Generative Artificial Intelligence Policy” (ISACA)