PhiloCyber logo
PhiloCyberby Richie Prieto
AI Security

Introduction to AI Security Course by Lakera AI

Introduction to AI Security Course by Lakera AI
0 views
17 min read
#AI Security

Introduction

In today's rapidly evolving digital landscape, understanding the complexity of AI security is more crucial than ever. Whether you're an IT professional, a cybersecurity enthusiast, a techi or just keen on keeping up with the latest in tech, the Lakera's "AI Security in 10 Days" email course offers a comprehensive dive into the world of AI security.

Don't get me wrong, this is a beginner-friendly course that is designed to equip us with the knowledge and tools necessary to understand and address AI security challenges effectively and to kickstart our journey in this great and challenging topic.

This course was originally created by Lakera AI. You can find it here.

Day 1 - GenAI Security Threat Landscape

An in-depth exploration into the AI threat landscape, highlighting instances of large language model (LLM) breaches, and the potential risks they pose to organizations. This day aims to set the stage by underscoring the importance of staying ahead in the AI security game.

Understanding AI Security Risks

AI security encompasses a broad spectrum of risks, each presenting unique challenges and requiring specialized strategies for mitigation.

Some key areas of concern include:

  • Model-Based Attacks: These sophisticated attacks aim to manipulate AI models, leading to undesirable outputs. Techniques such as data poisoning and prompt injection attacks compromise the integrity of AI systems.
  • Data Security Breaches: The cornerstone of AI's functionality, data, becomes a prime target, with breaches leading to severe repercussions, including identity theft and significant financial losses.
  • AI Supply Chain Attacks: By targeting the development phases of AI models, attackers can introduce vulnerabilities or backdoors, compromising the entire AI ecosystem.
  • Denial-of-Service (DoS) Attacks: Overloading AI systems with excessive traffic, these attacks disrupt service availability, impacting businesses and users alike.
  • **Social Engineering Attacks: **The human element remains a weak link, with attackers exploiting psychological tactics to gain unauthorized access to sensitive information.

Real-World LLM Security Breaches

Lakera AI's internal Red Team has identified several notable exploits, offering a glimpse into the practical implications of LLM vulnerabilities:

  1. Prompt Injection in Google's Bard Extension: A seemingly benign prompt led to unexpected behavior, illustrating the ease with which LLMs can be manipulated.
  2. XSS in a Hosted Agent UI: This exploit demonstrated the consequences of inadequate sanitization, leading to a successful Cross Site Scripting attack.
  3. Data Poisoning an OpenAI Assistant: By manipulating the underlying system, the Red Team was able to bypass intended behaviors, highlighting the risks of data poisoning in real-world applications.
Image

For more information about this please visit the following links:

  1. Navigating AI Security: Risks, Strategies, and Tools
  2. Real World LLM Exploits, Lakera Red Team
  3. Lakera LLM Security Playbook

Day 2 - Eploring Security Frameworks for LLM Applications

In day two, you are going see the OWASP Top 10 for LLMs and the ATLAS™ framework. It provides actionable insights and a solid foundation for understanding the standards and practices that protect AI systems, really good explanation of the different vulnerabilities associated with LLMs and the MITRE framework (attacks tactics and techniques). Basically, two crucial frameworks that are reshaping our understanding and approach to AI security.

Introduction to OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLMs is specifically tailored to address the unique vulnerabilities encountered in applications that leverage large language models. This initiative aims to raise awareness among various stakeholders including developers, security professionals, and organizational leaders, about the critical security risks that could potentially undermine the integrity and safety of LLM deployments.

Exploring the Top Vulnerabilities:

  1. Prompt Injection: This vulnerability occurs when an attacker feeds a crafted input into an LLM, causing it to execute unintended actions. This can result from both direct manipulation of the input prompt or indirect manipulation through compromised data sources.
  2. Insecure Output Handling: Trusting LLM outputs without proper validation can lead to severe security breaches, such as Cross-Site Scripting (XSS) or even remote code execution if the output is dynamically executed by the receiving system.
  3. Training Data Poisoning: If the data used to train an LLM is tainted, the model's outputs can be biased or manipulated, leading to unreliable and potentially harmful results.
  4. Model Denial of Service (DoS): Overloading an LLM with requests or complex data that consume excessive computational resources can render the service slow or entirely unresponsive, affecting availability and incurring high costs.
  5. Supply Chain Vulnerabilities: This involves risks introduced through third-party services and components used in building and deploying LLMs, which can lead to compromised models that perform suboptimally or maliciously.
  6. Sensitive Information Disclosure: Poorly configured LLM applications may inadvertently expose sensitive data, violating privacy and compliance requirements.
  7. Insecure Plugin Design: Plugins extending LLM functionalities without adequate security controls can be exploited to perform unauthorized actions, compromising the host system.
  8. Excessive Agency: Allowing LLMs too much functional autonomy without sufficient oversight can lead to unintended actions that may be difficult to predict or control.
  9. Overreliance on LLMs: Heavy dependency on LLMs for critical decision-making without understanding their limitations can lead to significant operational risks and misinformation.
  10. Model Theft: Unauthorized access and duplication of proprietary LLMs can lead to intellectual property theft, competitive disadvantage, and economic losses.
OWASP TOP 10 - Large Language Models Attacks

For those interested in a more detailed exploration of each vulnerability, the OWASP LLM Top 10 page provides extensive resources and mitigation strategies.

Introduction to the MITRE ATLAS™ Framework

The MITRE ATLAS™ framework, developed by MITRE, serves as a comprehensive guide for cybersecurity professionals to understand and combat cyber threats against AI systems. It outlines a broad spectrum of adversarial tactics and techniques, offering a granular view of potential attack vectors.

Framework Components:

  • Reconnaissance: Techniques involve probing for information that can be used in planning future cyberattacks.
  • Resource Development: Establishing or acquiring tools, data, and other resources necessary for mounting an attack.
  • Initial Access: Gaining entry into systems, often through vulnerabilities in public-facing applications or through social engineering tactics like phishing.
  • Execution: Execution of malicious code or strategies within the AI system.
  • Persistence: Techniques designed to maintain a foothold within the system undetected.
  • Defense Evasion: Employing methods to avoid detection, including the use of stealth techniques and encryption.
  • Discovery: Mapping out the AI environment to understand its operations and find further exploitable vulnerabilities.
  • Collection: Gathering valuable data from the compromised system for future use or exfiltration.
  • Exfiltration: Stealing sensitive data or intellectual property from the target system.
  • Impact: Actions aimed at disrupting, degrading, or permanently damaging the AI system or the data it handles.
MITRE ATTACK, adjusted by LAKERA

For cybersecurity teams looking to employ MITRE ATLAS™ in their defensive strategies, MITRE's official website offers comprehensive resources and detailed descriptions of each category.

Conclusion

The second day of the course provides a foundational understanding of the OWASP Top 10 for LLMs and MITRE ATLAS™ frameworks, giving us the knowledge to better secure AI applications against the evolving landscape of cyber threats and vulnerabilities. These frameworks not only offer insights into the vulnerabilities but also guide the development of robust security measures to protect against and mitigate these risks effectively (really good examples in the owasp page about how mitigate them and controls that we should implement in our models).

For me it was the best day of the course for the content, and because is really focus on the things I like the most. But of course the curse continue with great and more information to cover.


Day 3 - Prompt Injections Deep Dive

Another good day full of content about the number one OWASP Top 10 vulnerability, Prompt Injection! So for sure you already know something about this vulnerability, but if you skiped the information above, prompt injection allows an attacker to manipulate LLMs by crafting prompts that make the model deviate from its intended function or perform undesired actions. Understanding this attack vector is something fundamental and crucial for anyone developing or deploying LLMs in their organisations.

Types of Prompt Injections

  1. Direct Prompt Injections: These occur when attackers override the system’s own prompts to direct the model to execute specific, often malicious, instructions.

  2. Indirect Prompt Injections: These involve manipulating the model through altered inputs from external sources, tricking the model into performing actions it’s not supposed to.

A notorious instance of prompt injection was observed with Bing Chat (researcher: Cristiano Giardina), where a crafted prompt coerced the AI into revealing its underlying operational commands.

The Gandalf Game

To highlight the risks and teach the community, Lakera introduced 'Gandalf', an educational game where players challenge an LLM to reveal a password using crafted prompts. This game has not only been a fun and engaging way to learn about AI prompt injection but has also provided me with a wealth of data on potential attack vectors and methods used in real-world scenarios (is great to solve the different level and then search for what other users did, is funny to see how the different approaches works and how the creativity fly in order to solve this game).

Just try it by yourself!

Gandalf Prompt Injection challenge

Types of Prompt Injection Attacks

Lakera's Red Team has identified several key types of prompt injection attacks so far:

  • Jailbreaks: These involve embedding malicious queries within prompts to provoke unintended or inappropriate responses from the AI.

  • Sidestepping Attacks: Here, the attack circumvents direct instructions by crafting prompts that indirectly lead to the desired outcome.

  • Multi-language Attacks: These use non-English languages to evade standard security measures implemented in English.

  • Role-playing or Persuasion: Attackers ask the AI to adopt a persona, which can lead to actions that bypass predefined restrictions.

  • Multi-prompt Attacks: These involve a series of seemingly innocuous prompts that collectively serve to extract sensitive information.

  • Obfuscation (Token Smuggling): This strategy alters the presentation of data to evade detection by automated systems but remains understandable to humans.

  • Accidental Context Leakage: Sometimes, the model inadvertently reveals sensitive data due to its programming to be overly helpful.

  • Code Injection: This dangerous form of attack manipulates the model to execute arbitrary code.

  • Prompt Leaking/Extraction: This involves extracting the model's internal prompts or sensitive data.

Prompt Injection Attacks Taxonomy

And to end this day, they show us how to protect against these kind of attacks.

For more information about this please visit the following links:

  1. ELI5 Guide to Prompt Injections
  2. Prompt Injection Attacks Handbook
  3. Lessons Learned from Crowdsourced LLM Threat Intelligence (youtube video)
  4. Lakera's Prompt Injection Datasets on HuggingFace
  5. A Step-by-step Guide to Prompt Engineering

Day 4 - Traditional vs. AI Cyber Security

Here, it compare and contrast the approaches and methodologies between traditional cybersecurity and AI-driven security, highlighting the unique challenges and opportunities AI presents. Unfortunately, today's content was somewhat extremely foundational and less interactive compared to previous days (a few paragraph of information)

Traditional Cybersecurity Essentials

Traditional cybersecurity aims to protect information integrity, confidentiality, and availability, evolving from basic malware defenses in the 1980s to complex strategies against sophisticated threats like nation-state attacks. Key areas include:

  1. Critical Infrastructure Security
  2. Network and Application Security
  3. Cloud and IoT Security

This foundation emphasizes the essential role of coordinating people, processes, and technology to fortify defenses.

AI in Cybersecurity: Advancements and Benefits

AI transforms cybersecurity by automating threat detection and response, offering significant advantages over traditional methods, like adaptability (LLMs can quickly adjust to new threats) and efficiency (processes large data volumes faster, reducing human error).

AI tools like Intrusion Detection Systems (IDS), Data Loss Prevention (DLP), and Security Information and Event Management (SIEM) exemplify these improvements, enhancing flexibility and improving response times.

Securing AI Systems

As AI integrates deeper into critical services, securing AI systems themselves becomes crucial. The industry address vulnerabilities and threats such as adversarial attacks and data breaches with protective strategies that may include:

Some of the best practices for protecting AI systems include:
  1. Implement a Robust AI Security Program: Develop and maintain a comprehensive security strategy, complete with updated AI asset records and clearly designated risk management responsibilities.

  2. Involve Stakeholders Actively: Engage AI experts for security insights and provide specialized training to AI teams to enhance threat identification and prevention.

  3. Establish Advanced Technical Safeguards: Protect data integrity through encryption, enforce strict access controls, and utilize advanced monitoring tools to detect potential threats promptly.

  4. Conduct Regular Security Assessments: Actively perform penetration testing and vulnerability scanning to proactively identify and mitigate security risks.

  5. Adhere to Legal and Regulatory Standards: Stay updated with and comply with regulations like GDPR and CCPA, as well as upcoming AI regulations to ensure data privacy and user trust.

  6. Develop an Incident Response Protocol: Create a detailed plan for immediate action in response to security breaches, including communication strategies and remediation steps.

Securing AI Systems Best Practices

Day 5 - AI Application Security

Day five focuses mostly on integrating security measures within AI applications. It covers guidelines for developing secure AI solutions and maintaining them against emerging threats. Basically, this chapter is focusing on all the AI application security crucial elements that are involved in safeguarding the entire AI system.

Exploring AI Application Security

AI security is broadly divided into three levels:

  1. Application security
  2. Stack security
  3. Infrastructure security.

Today's session reintroduced relevant content and context about that, offering deep insights into each layer.

With every day new technological advancements, LLMs are now integrated into more complex systems, facing new security challenges, especially since they can be exploited using simple English prompts (or pretty much any other language accepted by the LLM).

Thread Model on LLM Applications

Reactive vs. Proactive Security Approaches

A major part was the differentiation between reactive and proactive security approaches in AI application security.

  1. Reactive security addresses threats as they occur, crucial for the accessible and vulnerable LLM applications.
  2. Proactive security, on the other hand, anticipates risks and includes measures like penetration testing and red teaming to mitigate vulnerabilities before they can be exploited.

You can watch the video below to learn more about tools and strategies for securing AI applications (made by Lakera of course).

 How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs

Securing AI Applications: Best Practices

The course outlined essential practices for securing AI applications effectively:

  1. Before Deployment: Assess applications against OWASP risks for LLMs, conduct red team exercises, and secure the supply chain by evaluating data sources and suppliers.
  2. In-Operation: Implement reactive measures such as limiting actions of LLMs on downstream systems and ensuring robust input validation. Also, integrate AI security tools for real-time monitoring and keep the team updated on the latest in AI security risks.

And fortunately, you will have access to some great resources once again (lucky you, you already have these resources below)!

  1. AI Security with Lakera: Aligning with OWASP Top 10 for LLM Applications
  2. How Enterprises Can Secure AI Applications (youtube video)
  3. OWASP Top 10 for LLM Applications on LinkedIn

Day 6 - AI/LLM Red Teaming

Insights into AI/LLM red teaming processes and best practices (no extra magic here, is really close to the term that we already may know for the rest of the industry). So, this day emphasizes the importance of proactive security measures and simulating potential attacks to strengthen AI systems (adding offensive security in our processes).

Exploring and Executing AI/LLM Red Teaming

Red teaming, historically a military strategy for simulating enemy tactics, has found crucial relevance in the realm of AI. For LLMs, red teaming involves rigorous testing to unearth vulnerabilities and biases, and to assess areas where performance or ethical responses may be inadequate. This practice not only helps in fortifying AI against misuse but also ensures they adhere to ethical standards.

Nothing new in general terms, but as I mentioned the big picture may be the same, the techniques changed a bit and the creativity to work with LLMs comes into play when we need to trigger unexpected behaviours while dealing with this sophisticated piece of software (from creative prompt injections until supply-chain poisoning attempts).

A real, high quality and effective red teaming exercise in AI does not follow a one-size-fits-all approach due to the unique vulnerabilities and deployment environments of AI models. Instead, it combines creativity with systematic analysis to tailor strategies that best fit specific AI models. Here’s how it’s generally structured:

 How Enterprises Can Secure AI Applications: Lessons from OWASP's Top 10 for LLMs
  1. Objective Setting: Start by defining clear goals like assessing risk levels and identifying potential harmful behaviors—bias, toxicity, privacy breaches, etc (like some sort of threat modeling with steroids, LLMs are really funny).

  2. Developing Attack Strategies: This involves a mix of manual and automated attacks, employing multiple techniques like code injection, hypothetical scenarios, and role-playing to challenge the AI.

  3. Scenario Development and Targeted Prompting: Crafting realistic and extreme situations to test AI responses, and developing prompts that specifically aim to reveal biases or unethical behaviors.

  4. Feedback Analysis: Carefully analyzing AI responses for inconsistencies or problematic outputs to refine strategies and improve AI behavior.

To ensure a responsible and high quality AI red teaming, it's really important to utilize diverse teams that can explore different vulnerabilities effectively. This approach should be supported by comprehensive planning that outlines a detailed strategy for systematic testing. As testing progresses, strategies should be refined based on initial findings in an iterative process that adapts and improves with each cycle. Ethical considerations must remain at the forefront throughout the testing phases to uphold high standards. Additionally, maintaining meticulous records of testing strategies and outcomes is essential for informing future practices and ensuring accountability.

Who Should Conduct Red Teaming?

The choice between using internal or external red teams often depends on specific needs:

  • Internal Red Teams: Offer deep insights into the company’s AI systems and facilitate continuous improvements but may face limitations due to potential biases.
  • External Red Teams: Provide a fresh perspective and specialized expertise, helping to minimize bias and validate due diligence, though they might be less familiar with the specific system nuances and may be more costly.

Day 7 - AI Tech Stack & Evaluating AI Security Solutions

During this day we will understand the components that make up the AI security tech stack and how to critically evaluate AI security solutions for your organization. The architecture of modern AI stack is multi-layered and complex, including several components from applications to infrastructure such us:

  1. AI Applications (aka Gen AI apps): These encompass a wide range of applications, from consumer-facing (such as ChatGPT) to enterprise-level (like BlackBot, which plays around different stock markets at high frequency, receiving a huge volume of data in real time—it might sound like cheating, I know!). They are tailored to specific industries like healthcare or construction, or specific departments such as accounting or sales. This layer interacts directly with end-users and typically integrates AI with traditional software components.

  2. Autonomous Agents: This layer consists of AI systems that operate independently, making decisions based on inputs from users or other systems, thus involving a more complex relationship between several actors. These agents range from open source, which are freely accessible and modifiable, to proprietary systems controlled by specific entities and managed through specialized agent management systems.

  3. AI Models/Foundational Models: At this tier, we find the core AI models that drive the applications and agents. These models can be proprietary, developed by specific companies, or open-source, available for public use and modification (examples include LLaMA 2, Mistral 7B, Gemini or Claude).

  4. AI Infrastructure: Serving as the backbone, this layer includes everything from cloud computing services and storage solutions to hardware such as GPUs and specialized AI processors. It also encompasses the physical infrastructure like data centers and the necessary energy to power and cool these systems (have you read or heard something about NVIDIA in these last couple of months?).

  5. Data: Often described as the fuel for AI, data can be categorized into public, proprietary, or synthetically generated. Each type feeds into AI models to enable their functionality. Unfortunately, this is still in its infancy, and the rest of the industry is moving much faster and is more determined to achieve more ambitious goals.

Understanding Generative AI: A Tech Stack Breakdown by Orion Innovation

Strategies for Evaluating AI Security Solutions

A large portion of the lesson was dedicated and really specific about addressing the need for strong AI security solutions (it seems like the early days of internet and the reported vulnerabilities at that time), highlighted by the spike in AI-powered attacks over the past year. You are going to explore a strategic checklist designed to guide us in selecting AI security tools that align with both personal expectations and organizational needs like the following image:

Security Solutions

Concluding Insights and Additional Resources

The day concluded with practical tips on how to effectively utilize the evaluation checklist—prioritizing requirements and matching them with the most suitable security solutions we can apply.

They also shared valuable resources to keep looking for new and free security solutions for protecting these complex systems:

  1. LLM Security Solution Evaluation Checklist
  2. 12 Top LLM Security Tools: Paid & Free (Overview)

Day 8 - Navigating AI Governance

An exploration of AI governance and its implications, including a look at the EU AI Act and US regulations (the two major legislations so far). This day aims to provide a clear understanding of the legal and regulatory landscape surrounding AI security and the different approach between the two of them.

The EU AI Act

The EU AI Act, is a legislative proposal from the European Commission designed to regulate AI use across all sectors, with a single item out of scope, the military applications. This Act introduces a risk-based classification system for AI tools, categorizing them from minimal to unacceptable risks, with stringent obligations for high-risk applications such as those used in law enforcement and critical infrastructures. Furthermore, the Act bans certain uses of AI considered to pose unacceptable risks, including AI for social scoring leading to rights denial, manipulative AI targeting vulnerable populations, mass surveillance with biometric identification in public spaces, and harm-inducing AI like dangerous toys.

Security Solutions

The US AI Bill of Rights Principles

In contrast, the White House’s AI Bill of Rights offers a non-binding blueprint focused on guiding ethical AI use in the United States. This document emphasizes protecting civil rights and ensuring democratic values are upheld in AI deployments. It highlights principles like safety and effectiveness of AI systems, protections against algorithmic discrimination, and ensuring robust data privacy.

The key principles are:

  1. Human Alternatives, Consideration, and Fallback: Ensuring options to opt out of automated systems in favor of human alternatives and providing means to address system failures or disputes.

  2. Notice and Explanation: Providing clear, accessible information about the use and impact of automated systems.

  3. Data Privacy: Protection from abusive data practices, ensuring privacy and user control over personal data.

  4. Algorithmic Discrimination Protections: Prevention of discrimination by algorithms and promotion of equitable system design and use.

  5. Safe and Effective Systems: Protection from unsafe or ineffective automated systems, ensuring safety and effectiveness in their design and deployment.

Security Solutions
Security Solutions

Comparative Insights and Learning Tools

The session provided valuable comparative insights into how these major legislative frameworks aim to shape responsible AI development and usage. We discussed the specifics of each approach, from the structured, legally binding measures of the EU AI Act to the advisory, principle-based guidelines of the AI Bill of Rights.

To have a more sustancial understanding about it, additional resources were provided, including detailed analyses of the EU AI Act and discussions on its implications for businesses. Today’s content was particularly informative, offering a clear view of how different regions are addressing the challenges and opportunities presented by AI technologies.

If you want more information please refer to the following resources:

  1. The EU AI Act: A Stepping Stone Towards Safe and Secure AI.
  2. Navigating the EU AI Act: What It Means for Businesses?

Day 9 - The Evolving Role of the CISO

Another filler day in the course, during this day you are going to have insights into how the role of Chief Information Security Officers (CISOs) and cybersecurity teams is adapting in the age of AI. It covers the skills, knowledge, and strategies required to effectively lead AI security initiatives (with no external resources or deep information about it).

Understanding the CISO’s Evolving Role

The session began with an exploration of how the traditional role of a CISO, once predominantly focused on technical IT security tasks like managing cybersecurity teams and ensuring compliance, is undergoing significant changes. With the advent of generative AI and other advanced technologies, the scope of the CISO’s responsibilities is expanding dramatically.

Strategic Shifts for CISOs

Today’s CISOs are stepping beyond the confines of mere technical oversight to embrace a more strategic, holistic approach to security. They are pivotal in fostering an AI-aware organizational culture, viewing AI not only as a productivity booster but also as a potential security risk. This shift reflects a broader understanding that cybersecurity is not just a technical issue but a critical business function that intersects with every aspect of a company's operations.

Security Solutions

Incorporating of AI in Cybersecurity Practices

A key highlight from today’s lesson was data from a recent Splunk survey, which showed a growing trend among CISOs incorporating AI into their security strategies. Approximately 35% of CISOs already utilize AI to enhance cybersecurity measures, with an additional 61% planning or interested in integrating AI tools within the next year. This statistic underscores the increasing reliance on AI to bolster cybersecurity defenses against more sophisticated threats.

Security Solutions

Day 10 - AI & LLM Security Resources

The final day wraps up the course by providing a treasure trove of resources, trends, and ongoing developments in the field of AI safety and security. Today was all about arming ourselves with resources to continue our own learning journey.

Lakera’s Resources

AI/LLM Safety & Security Frameworks

AI Regulations (Proposed)

Guidelines

  • Adopting AI Responsibly (World Economics Forum’s guidelines for procurement of AI solutions by the private sector).

Reports

Databases

Resource Collections

  • AI Safety Fundamentals (resources a large and growing collection of resources useful to people in the AI safety space).

Conclusion

The "AI Security in 10 Days" email course is structured to take you from a curious beginner to a bit more knowledgeable practitioner, ready to keep exploring AI security challenges with more confidence and resources to enjoy the learning process in this fast and exciting field. Through this course, you'll gain not only theoretical knowledge but also practical tools and insights that are immediately applicable in your professional or personal projects (the Gandalf game is a really good example of this).

Welcome aboard the path to securing the future of AI within organizations! (it has that vibes and I like it). As happened with the internet first steps, I reckon that all companies at some point are going to rush to surf the new "tech wave," rushing the implementation process and, because of that, creating new and more amusing ways to attack LLMs. The future looks great, funny, and challenging!

So I think it's really worth it, if you like the subject, to dive into it as soon as possible and try to learn about this amazing, profound world we are all entering together! Bugcrowd (Bug Bounty Platform) has already created "The Ultimate Guide - AI Security", and HackerOne has already implemented AI in their platform + created another "The Ultimate Guide to Managing Ethical and Security Risks in AI"). So, we are seeing changes really fast, in different industries, and even in our own security industry. Stay tuned because there is more AI content to come!

Next, there will be course reviews from companies like Deep Learning AI, Nvidia and Cohere AI, plus academic research summarizations about LLM/AI Security, readable and presented in a funny and casual style.

So, after all, if you like certs or something to show on LinkedIn, they offer the following certificate of completion after those 10 days:

Image

The current structure lacks a way for tracking your progress during the 10-day course. Currently, all participants get the same content and information during this time, but there is no opportunity for challenges or tests to determine understanding or retention (There is no way to truly evaluate the students before awarding the certificate). One might anticipate that future iterations will include a more planned course-oriented platform that not only displays and analyses participant progress but also extends the framework to allow for a more immersive learning environment. However, it is important to keep in mind that this is a foundational course aimed to spark the curiosity and enthusiasm of beginners to the industry, free and open... so thank you for that.

What's next?

I have this following free courses in mind:

  1. Deep Learning AI (Red Teaming LLM Applications)
  2. LLM University by Cohere
  3. Keep reading technical papers about LLM Security, like Can Large Language Models Find And Fix Vulnerable Software? by David Noever
  4. NVIDIA free courses
Image