Verified Information • Last Updated Mar 2026

Securing Generative AI

This course offers a comprehensive exploration of the security measures necessary for developing and deploying AI implementations, including large language models (LLMs) and Retrieval-Augmented Generation (RAG). It addresses critical considerations and mitigations that reduce overall risk in organizational AI system development. Experienced author and trainer Omar Santos emphasizes “secure by design” principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. You will be introduced to AI threats, LLM security, prompt injection, insecure output handling, and red-teaming AI models. The course concludes by teaching you how to protect RAG implementations: you will learn about orchestration libraries such as LangChain and LlamaIndex, as well as securing vector databases, selecting embedding models, and more.
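Two of the topics named above, insecure output handling and indirect prompt injection via RAG documents, can be illustrated with a minimal sketch. This example is not from the course; the function names and the keyword patterns are illustrative assumptions, showing only the general idea of treating model output and retrieved content as untrusted:

```python
import html
import re

def render_llm_output(raw: str) -> str:
    """Mitigate insecure output handling: escape model output
    before embedding it in an HTML page, since the model may
    emit attacker-influenced markup or script tags."""
    return html.escape(raw)

def strip_injection_markers(retrieved_doc: str) -> str:
    """Illustrative (not production-grade) filter for indirect
    prompt injection: redact phrases commonly planted in RAG
    documents to override the system prompt."""
    patterns = [
        r"ignore (all )?previous instructions",
        r"disregard the system prompt",
    ]
    cleaned = retrieved_doc
    for p in patterns:
        cleaned = re.sub(p, "[REDACTED]", cleaned, flags=re.IGNORECASE)
    return cleaned
```

A real deployment would pair such filtering with privilege separation and output validation rather than rely on pattern matching alone.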
Duration 5 Months
Institution Pearson
Format Online

Eligibility Criteria


Academic Foundation

A recognized Bachelor’s degree or equivalent high school qualification is required for admission into Pearson.


Language Proficiency

English proficiency is required. IELTS, TOEFL, or a standard medium-of-instruction certificate is accepted.

Detailed Fees Breakdown

Base Tuition Fee $112
Total Est. Investment $112

Scholarships and early-bird waivers may apply. Contact admissions for exact institutional fees.

Academic Trajectory

Program Outcome

Graduates of the Securing Generative AI program at Pearson are equipped with global perspectives, ready to excel in international markets and pursue top-tier career opportunities.

Get In Touch