Industry Insights with Charmaine Valmonte

Responsible Use of Artificial Intelligence

A Practical Guide for Cybersecurity Professionals

The democratization of artificial intelligence technologies has made the threat landscape more dangerous by giving hackers the ability to design effective social engineering campaigns, write malware code and create deepfakes to inflict reputational damage. But organizations are realizing AI's potential to strengthen their threat hunting and incident response capabilities.


AI, machine learning, automation and orchestration tools can augment or replace manual threat detection and investigation. IBM reported that organizations using AI and automation extensively for security saved $1.76 million on average in breach remediation costs and identified and contained breaches more than 100 days faster.

Though AI has the potential to revolutionize cybersecurity, the effectiveness of an AI system depends on the quality of the datasets used to train it: they must be free of bias, and the system must be tested and refined to respond to varied threats. Here's how organizations can build and deploy safe, responsible and accountable AI systems that safeguard data privacy and enhance our ability to defeat cybercrime.

AI in Cybersecurity

AI in cybersecurity is powerful, but we need to use it wisely. Years of machine learning and deep learning have built strong defenses, but new threats demand more. We must balance AI's potential with responsible use to stay ahead. It's not just about tech; it's about working together to make the future more secure.

AI is a tool; it cannot replace human intelligence or capabilities. AI can analyze and correlate large amounts of data, identify patterns and produce natural language output, but it cannot understand context. It requires human guidance to make ethical judgments and interpret potential threats in context.
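To make that division of labor concrete, here is a minimal, hypothetical sketch of a human-in-the-loop triage gate: the model's verdict is acted on automatically only when its confidence is high, and anything ambiguous is routed to an analyst. The scoring, threshold and queue names are illustrative assumptions, not a prescribed implementation.

```python
def triage(alert: dict, model_score: float, threshold: float = 0.9) -> str:
    """Route an alert based on model confidence (illustrative values only).

    Act automatically only when the model is confident in either direction;
    otherwise, queue the alert for a human analyst to add context.
    """
    if model_score >= threshold:
        return "escalate"        # high-confidence malicious verdict
    if model_score <= 1 - threshold:
        return "close"           # high-confidence benign verdict
    return "human_review"        # ambiguous: judgment and context needed


if __name__ == "__main__":
    print(triage({"id": "a-123"}, 0.97))  # escalate
    print(triage({"id": "a-124"}, 0.55))  # human_review
```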

Building AI Responsibly

Building AI responsibly requires:

  • Privacy and security in the DNA: Prioritize privacy and security in every step of AI development. Minimize data collection, collect respectfully and use the highest security practices to prevent breaches and harm.
  • Limited and ethical use: Design algorithms to protect personal data. Only share personal information for specific functions, and get approval from a dedicated ethics officer, following specific industry standards.
  • Fortresslike defenses: Provide robust security for data and algorithms. Implement best practices and industry standards to create an unshakable digital shield.
  • Data protection by design: Plan data architecture with privacy in mind, especially when handling sensitive personal information, and use anonymization techniques when possible (a minimal sketch follows this list).
  • Tailored protections: Customize privacy, security and legal frameworks for each AI application within an organization.
  • Strict controls and access: Grant access to sensitive data sets selectively and responsibly, and set up clear rules about use and disposal of data.
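As one illustration of data minimization and data protection by design, the following sketch drops the fields a model doesn't need and replaces the one retained identifier with a keyed hash (pseudonymization, one such technique). The field names and key handling are assumptions made for illustration; a real deployment would manage the key in a secrets store and choose techniques appropriate to its data.

```python
import hashlib
import hmac

# Secret key held by the data owner; hypothetical value for illustration.
PSEUDONYM_KEY = b"rotate-me-regularly"


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash can't be reversed by brute-forcing
    common values such as emails or names without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def minimize(record: dict) -> dict:
    """Keep only the fields the model actually needs (data minimization)
    and pseudonymize the identifier retained for joining records."""
    return {
        "user_id": pseudonymize(record["email"]),  # identifier -> pseudonym
        "event_type": record["event_type"],        # needed for detection
        "timestamp": record["timestamp"],          # needed for sequencing
        # Name, address, phone number and similar fields are dropped entirely.
    }


if __name__ == "__main__":
    raw = {
        "email": "analyst@example.com",
        "name": "Jane Doe",
        "event_type": "failed_login",
        "timestamp": "2024-01-15T09:30:00Z",
    }
    print(minimize(raw))
```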

Safety and Reliability of AI

Making AI safe and reliable requires:

  • Consistency: Ensure that the AI system produces consistent and predictable results under normal operating conditions. Think of it as a car that always starts and gets you to your destination without hiccups.
  • Data integrity: Ensure that the data used to train and run the AI is accurate, complete and free of errors or biases (a basic validation sketch follows this list). Imagine building a house with sturdy, reliable materials to create a stable, sound structure.
  • Dependability: Ensure that the AI system is available when needed and performs its tasks reliably, even in the face of minor disruptions or unexpected inputs. Picture a robot assistant that always cleans your house, even if you move furniture around occasionally.
  • Prevention of harm: Ensure that the AI system does not cause physical or psychological harm to users, the environment or society. Think of safeguards installed in machinery to prevent injuries.
  • Minimization of risk: Identify potential risks, such as biased decisions or unintended consequences, and minimize them through proactive measures, such as testing and simulations. Think of it as stress-testing a bridge to ensure it won't collapse under normal traffic.
  • Predictive modeling: Use algorithms to anticipate and mitigate potential problems before they occur. Consider how smoke detectors predict and alert us to fire risks.
  • Technical robustness: Build the AI system with security in mind. Make it resistant to hacking, manipulation or misuse. Imagine a bank vault that relies on multiple layers of protection to keep valuables safe.
  • Prevention of misuse: Put measures in place to prevent data exploitation that could harm users, individuals or communities. Think of guardrails on a racetrack that keep speeding cars from crashing.
  • Continuous development: Continuously assess and improve reliability and safety throughout the life cycle of the AI system. Think of it as regularly servicing your car to ensure it stays reliable and safe on the road.
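The data integrity point above can be made concrete with a small validation pass run before training. This sketch checks for missing labels, duplicate records and heavy class imbalance as a rough bias signal; the field names and thresholds are illustrative assumptions, not standards.

```python
from collections import Counter


def validate_training_data(rows: list[dict], label_field: str = "label") -> list[str]:
    """Run basic integrity checks on a training set before it reaches the model.

    Returns human-readable findings; an empty list means the checks passed.
    """
    findings = []

    # Completeness: every record should carry a label.
    incomplete = [r for r in rows if r.get(label_field) is None]
    if incomplete:
        findings.append(f"{len(incomplete)} records missing '{label_field}'")

    # Duplicates: repeated records silently overweight one pattern.
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    if dupes:
        findings.append(f"{dupes} duplicate records")

    # Class balance: a heavily skewed label distribution is a bias red flag.
    counts = Counter(r.get(label_field) for r in rows if r.get(label_field) is not None)
    if counts:
        majority_share = max(counts.values()) / sum(counts.values())
        if majority_share > 0.95:  # illustrative threshold
            findings.append(f"label imbalance: {majority_share:.0%} in one class")

    return findings


if __name__ == "__main__":
    sample = [
        {"src_ip": "10.0.0.1", "label": "benign"},
        {"src_ip": "10.0.0.1", "label": "benign"},  # duplicate
        {"src_ip": "10.0.0.2", "label": None},      # missing label
    ]
    for finding in validate_training_data(sample):
        print("WARN:", finding)
```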

Responsible and Accountable AI

Everyone involved in creating and using AI has a shared ethical responsibility. Designers, vendors, developers, owners and even those who evaluate AI systems must all be accountable for the potential consequences of their actions. This requires:

  • Human oversight: Provide human guidance and supervision at every step of the AI journey, from design to deployment. Think of it as having a responsible adult watch over a child learning to ride a bike.
  • Clear ownership: Assign a system owner who is accountable for the system's behavior. Imagine a ship's captain, ensuring it navigates safely and ethically.
  • Fairness: AI shouldn't discriminate based on unfair criteria. Provide continuous monitoring and control mechanisms to maintain fairness throughout the system's lifespan (a simple monitoring sketch follows this list).
  • Values in action: Be guided by ethical principles and a commitment to minimizing harm when making any decision or taking any action throughout the AI life cycle.
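As one concrete example of continuous fairness monitoring, the sketch below computes the model's flag rate per group and the gap between the highest and lowest rates, a demographic-parity style check. The "group" and "flagged" field names are hypothetical; which attribute to monitor depends on the fairness dimension that matters for the application.

```python
from collections import defaultdict


def flag_rate_by_group(decisions: list[dict]) -> dict[str, float]:
    """Compute the share of records the model flagged, per group.

    'group' and 'flagged' are hypothetical field names used for illustration.
    """
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        flagged[d["group"]] += int(d["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}


def parity_gap(rates: dict[str, float]) -> float:
    """Demographic-parity gap: spread between the highest and lowest flag
    rates. A large gap is a signal for human review, not proof of bias."""
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    log = [
        {"group": "region_a", "flagged": True},
        {"group": "region_a", "flagged": False},
        {"group": "region_b", "flagged": True},
        {"group": "region_b", "flagged": True},
    ]
    rates = flag_rate_by_group(log)
    print(rates)                       # {'region_a': 0.5, 'region_b': 1.0}
    print("gap:", parity_gap(rates))   # gap: 0.5
```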

Making AI a Force for Good

Embrace a framework such as OWASP's LLM AI Security & Governance Checklist to ensure that AI can become a force for good.

The OWASP AI Security and Privacy Guide working group highlights the following areas:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, nondiscrimination and fairness
  • Societal and environmental well-being
  • Accountability

The OWASP Top 10 for Large Language Model Applications includes the following:

  • Prompt injection
  • Insecure output handling
  • Training data poisoning
  • Model denial of service
  • Supply chain vulnerabilities
  • Sensitive information disclosure
  • Insecure plugin design
  • Excessive agency
  • Overreliance
  • Model theft
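As a small illustration of one of these risks, insecure output handling, the sketch below treats model output as untrusted input and escapes it before it can reach a browser. It is a minimal example under that single assumption, not a complete defense; real applications also validate structure and constrain what downstream components will accept.

```python
import html


def render_llm_output(raw_output: str) -> str:
    """Treat model output as untrusted (insecure output handling):
    escape it before rendering in a browser, and never pass it to
    eval/exec or a shell. A minimal illustration, not a full defense."""
    return html.escape(raw_output)


if __name__ == "__main__":
    malicious = '<script>document.location="https://evil.example"</script>'
    print(render_llm_output(malicious))
    # &lt;script&gt;document.location=&quot;https://evil.example&quot;&lt;/script&gt;
```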

Only through proactive stewardship and continuous learning can we unlock the full potential of AI while safeguarding our digital future from evolving cyberthreats.



About the Author

Charmaine Valmonte

CISO, Aboitiz Group

Valmonte is the chief information security officer at Aboitiz Group. Her specialties include information security, government liaison, IT program management, disaster recovery, IT security, risk and compliance, IT governance, security operations, and threat intelligence.



