Detail kurzu

Certified AI Security Professional (CAISP)

EDU Trainings s.r.o.

Popis kurzu

The Certified AI Security Professional course offers an in-depth exploration of the risks associated with the AI supply chain, equipping you with the knowledge and skills to identify, assess, and mitigate these risks.
Through hands-on exercises in our labs, you will tackle various AI security challenges. You will work through scenarios involving model inversion, evasion attacks, and the risks of using publicly available datasets and models. The course also covers securing data pipelines, ensuring model integrity, and protecting AI infrastructure.
We start with an overview of the unique security risks in AI systems, including adversarial machine learning, data poisoning, and the misuse of AI technologies. Then, we delve into security concerns specific to different AI applications, such as natural language processing, computer vision, and autonomous systems.
In the final sections, you’ll map AI security risks against frameworks like the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) and explore best practices for managing these risks. The course also covers secure AI development techniques, including differential privacy, federated learning, and robust AI model deployment.
By the end of this course, you will have a thorough understanding of the threats facing AI systems and strategies to secure them, ensuring the safe and ethical deployment of AI technologies in various industries.
Course Inclusions:

Course Manual
Course Videos and Checklists
30+ Guided Exercises
60 days Online Lab Access
Access to a dedicated Mattermost channel
One exam attempt Upon successful completion of this course, students will be able to:

Understand the critical role of AI security in protecting organizations from various threats.
Identify the types of attacks targeting AI systems, including adversarial attacks, data poisoning, and model inversions.
Develop strategies for assessing and mitigating security risks in AI models, data pipelines, and infrastructure.
Apply best practices for securing AI systems, leveraging guidance from frameworks like MITRE ATLAS and other industry standards.

Obsah kurzu

Chapter 1: Introduction to AI Security


Course Introduction (About the course, syllabus, and how to approach it)
About Certification and how to approach it
Course Lab Environment
Lifetime course support (Mattermost)
An overview of AI Security
Basics of AI and ML

What is AI?
History and evolution of AI
Key concepts in AI


Types of AI

Narrow AI vs. General AI
Supervised Learning
Unsupervised Learning
Reinforcement Learning
Natural Language Processing (NLP)
Computer Vision


Core Components of AI Systems

Algorithms and Models
Data
Computing Power


Introduction to Machine Learning

What is Machine Learning?
Differences between AI and ML
Key ML concepts


Retrieval Augmented Generation
Basics of Deep Learning

What is Deep Learning?
Introduction to Neural Networks
Brief overview of Convolutional Neural Networks (CNNs)


Hands-on Exercise:

Learn how to use our browser-based lab environment
Setup Invoke Ai a creative visual AI tool
Create a chatbot with Python and Machine learning
Text classification with TensorFlow
Implementing Duckling for converting Text into Structured Data



Chapter 2: Attacking and Defending Large Language Models


Introduction to Large Language Models

Definition of Large Language Models
How LLMs work
Importance and impact of LLMs in AI


Understanding LLM’s

GPT (Generative Pre-trained Transformer)
BERT (Bidirectional Encoder Representations from Transformers)


Training and Augmenting LLMs

Foundational model and fine tuned model
Retrieval augmented generation


Use Cases of LLMs

Text Generation
Text Understanding
Conversational AI


Attack Tactics and Techniques

Mitre ATT&CK
Mitre ATLAS matrix
Reconnaissance tactic
Resource development tactic
Initial access tactic
ML model access tactic
Execution tactic
Persistence tactic
Privilege escalation tactic
Defense evasion tactic
Credential access tactic
Discovery tactic
Collection tactic
ML attack staging
Exfiltration tactic
Impact tactic


Real-World LLM attack tools on the internet

XXXGPT
WormGPT
FraudGPT


Hands-on Exercises:

Scanning an LLM for agent based vulnerabilities
Attacking AI Chat Bots
Perform adversarial attacks using text attack
Perform Webscraping using PyScrap
Hide data in images using StegnoGAN
Adversarial Robustness Toolbox
Bias Auditing & “Correction” using aequitas



Chapter 3: LLM Top 10 Vulnerabilities


Introduction to the OWASP Top 10 LLM attacks
Prompt Injection

System prompts versus user prompts
Direct and Indirect prompt injection
Prompt injection techniques
Mitigating prompt injection


Insecure Output Handling

Consequences of insecure output handling
Mitigating insecure output handling


Training Data Poisoning

LLM’s core learning approaches
Mitigating training data poisoning


Model Denial of Service

DoS on networks, applications, and models
Context windows and exhaustions
Mitigating denial of service


Supply Chain Vulnerabilities

Components or Stages in an LLM
Compromising LLM supply chain
Mitigating supply chain vulnerabilities


Sensitive Information Disclosure

Exploring data leaks in various incidents
Mitigating sensitive information disclosure


Insecure Plugin Design

Plugin/Connected software attack scenarios
Mitigating insecure plugin design


Excessive Agency

Excessive permissions and autonomy
Mitigating excessive agency


Overreliance

Understanding hallucinations
Overreliance examples
Mitigating overreliance


Model Theft

Stealing models
Mitigating model theft


Hands-on Exercises:

Prompt Injection
Training Data Poisoning
Excessive agency attack
Adversarial attacks using foolbox
Overreliance attack
Insecure plugins
Insecure output handling attack
Exploiting Data Leakage
Permission Issues in LLM



Chapter 4: AI Attacks on DevOps Teams


Introduction to AI in DevOps

Definition and principles of DevOps
The role of AI in enhancing DevOps practices


Types of AI attacks on DevOps

Data Poisoning in CI/CD Pipelines
Model Poisoning
Adversarial Attacks
Dependency Attacks
Insider Attacks


Real-World case of AI Attacks on DevOps

Hugging Face artificial intelligence (AI) platform
NotPetya attack
SAP AI Core vulnerabilities


Hands-on Exercises:

Poisoned pipeline attack
Dependency confusion attacks
Exploitation of Automated Decision-Making Systems
Compromising CI/CD infrastructure



Chapter 5: AI Threat Modelling


Introduction to AI Threat Modelling

Definition and purpose of threat modeling
Importance in the context of AI security


Key Concepts in AI Threat Modelling

Assets
Threats
Vulnerabilities
Attack Vectors


AI Threat Modeling Methodologies

STRIDE framework
STRIDE GPT
LINDDUN Framework
MITRE ATLAS


Tools for AI Threat Modelling

Automated Threat Modelling Tools
Manual Techniques


Best Practices for AI Threat Modelling
Hands-on Exercises:

OWASP Threat Dragon
Iruis Risk lab for threat modeling
StrideGPT



Chapter 6: Supply Chain Attacks in AI


An overview of the Supply Chain Security
Introduction to AI Supply Chain Attacks
Data, model, and infrastructure based attacks
Abusing GenerativeAI for package masquerading
Vetting Software frameworks

Creating a vetting process
Automating vetting of third party code
Scanning for vulnerabilities
Mitigating dependency confusion
Dependency pinning


Supply chain frameworks

SLSA
Software Component Verification Standard (SCVS)


Transparency and Integrity in AI Supply Chain

Generate a Software Bill of Materials
SBOMs, Provenance, and Attestations
Model Cards and MLBOMs
Model Signing


Hands-on Exercises:

Supply Chain Dependency Attack
Flagging vulnerable dependencies using flag
Backdoor attacks using BackdoorBox
Model editing
Generating SBOMs
Attestations
Model Signing



Chapter 7: Emerging Trends in AI Security


Explainable AI (XAI)

Importance of Explainability
Techniques for Explainability


AI Governance and Compliance

Regulatory Requirements
Best Practices for AI Governance


Future Trends in AI Security

Emerging Threats
Innovations in AI Security


Hands-on Exercises:

Explainable AI basics
AuditNLG to audit generative AI
Scan malicious python packages using aura.



 
Certifikát Na dotaz.
Hodnocení




Organizátor