Developing Custom AI Tools
Custom AI tools introduce non-determinism, external dependencies, and the risk of excessive agency. Traditional IT controls are necessary but not sufficient. You will learn to set trust boundaries, constrain tool scopes, and enforce latency budgets that match business SLAs.

Course Content
You will use NIST AI RMF to decide what to map, measure, and manage; CISA Secure by Design to implement least privilege, secure defaults, and strong input validation; MITRE ATLAS to anticipate adversary tactics such as prompt injection and data poisoning; and OWASP Top 10 for LLMs to address issues like LLM01 Prompt Injection and LLM06 Excessive Agency early.
The course links governance to practice. You will connect design choices to SOC runbooks, ITSM workflows, and platform telemetry. Examples use Microsoft function calling and Prompt Flow evaluation, Google Gemini function calling and Agent Builder, Splunk AI Assistant for SPL, and Elastic AI Assistants for Security and Observability with connectors.
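To make the schema work concrete, here is a minimal, vendor-neutral sketch of declaring a tool and validating its arguments before anything executes. The tool name, fields, and jsonschema-based check are illustrative assumptions, not the exact wire format of the Microsoft or Google function-calling SDKs.

```python
# Minimal, vendor-neutral sketch of a tool declaration plus input validation.
# The tool name, fields, and schema are illustrative assumptions; vendor SDKs
# use similar JSON Schema-style declarations but their exact wrappers differ.
from jsonschema import validate, ValidationError  # pip install jsonschema

CREATE_TICKET_TOOL = {
    "name": "create_ticket",                      # hypothetical ITSM action
    "description": "Open an incident ticket in the ITSM system.",
    "parameters": {                               # JSON Schema for the model's arguments
        "type": "object",
        "properties": {
            "summary":  {"type": "string", "maxLength": 200},
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["summary", "severity"],
        "additionalProperties": False,            # reject unexpected fields (secure default)
    },
}

def call_tool(arguments: dict) -> dict:
    """Validate model-produced arguments before any side effect runs."""
    try:
        validate(instance=arguments, schema=CREATE_TICKET_TOOL["parameters"])
    except ValidationError as err:
        return {"status": "rejected", "reason": err.message}  # fail closed; log, do not execute
    # ... call the real ticketing API here, inside the tool's trust boundary ...
    return {"status": "created", "summary": arguments["summary"]}
```

Rejecting unknown fields and failing closed on invalid input is one small example of the least-privilege, secure-default posture the frameworks above call for.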
Your outputs will be operational. Each lesson produces an artifact that engineering and GRC can review together, from a pattern decision record to an evaluation plan and a deployment runbook.
Course Objectives
By the end of this course, you will be able to:
- Translate business workflows into implementable AI tool patterns. You will be able to select sidecars, form fillers, classifiers or routers, and agent workflows for a given task. Why it matters: clear patterns cut delivery time and reduce rework.
- Design function and tool schemas with strong validation and idempotency. You will be able to define inputs, outputs, and failure modes that behave predictably. Why it matters: predictable tools reduce failures and improve reliability.
- Apply safety and security controls mapped to leading frameworks. You will be able to align controls to NIST AI RMF, CISA Secure by Design, MITRE ATLAS, and OWASP Top 10 for LLMs. Why it matters: mapped controls lower breach risk and improve audit readiness.
- Evaluate AI tools with measurable metrics and golden datasets. You will be able to set pass/fail thresholds and run repeatable tests (a minimal evaluation sketch follows this list). Why it matters: consistent evaluation improves accuracy and reduces MTTR.
- Instrument cost, latency, and quality telemetry for operations. You will be able to capture P50 and P95 latency, token cost, accuracy, and audit logs (a telemetry sketch also follows this list). Why it matters: telemetry keeps the solution within budgets and SLAs.
- Deploy and operate AI tools with DevSecOps practices. You will be able to create release gates, rollback paths, and incident playbooks. Why it matters: disciplined releases reduce incidents and speed recovery.
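A minimal sketch of the golden-dataset gate mentioned above, assuming a tiny illustrative dataset, a stub classify() function, and a 0.9 accuracy threshold; all three are placeholders you would replace with your own tool and acceptance criteria.

```python
# Sketch of a golden-dataset check with a pass/fail gate.
# The dataset, classify() stub, and 0.9 threshold are illustrative assumptions.
GOLDEN_SET = [
    {"input": "Reset my VPN token", "expected": "access_request"},
    {"input": "Server CPU at 98% for an hour", "expected": "incident"},
]

def classify(text: str) -> str:
    """Stand-in for the AI tool under test; replace with a real call."""
    return "incident" if "CPU" in text else "access_request"

def evaluate(threshold: float = 0.9) -> bool:
    hits = sum(classify(case["input"]) == case["expected"] for case in GOLDEN_SET)
    accuracy = hits / len(GOLDEN_SET)
    print(f"accuracy={accuracy:.2f} threshold={threshold}")
    return accuracy >= threshold   # gate a release on this boolean

if __name__ == "__main__":
    evaluate()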
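```

And a sketch of the telemetry objective: computing P50/P95 latency and token cost from captured call records. The field names and per-token price are assumptions; substitute the values from your own logging pipeline and contract.

```python
# Sketch of computing P50/P95 latency and token cost from captured telemetry.
# Field names and the per-1k-token price are illustrative assumptions.
from statistics import quantiles

calls = [  # one record per tool invocation, e.g. pulled from your logging pipeline
    {"latency_ms": 420, "tokens": 1300},
    {"latency_ms": 610, "tokens": 900},
    {"latency_ms": 1980, "tokens": 2100},
    {"latency_ms": 505, "tokens": 1100},
]

latencies = sorted(c["latency_ms"] for c in calls)
cuts = quantiles(latencies, n=100)   # 1st..99th percentile cut points
p50, p95 = cuts[49], cuts[94]

price_per_1k_tokens = 0.002          # assumed unit price; use your contract rate
cost = sum(c["tokens"] for c in calls) / 1000 * price_per_1k_tokens

print(f"P50={p50:.0f} ms  P95={p95:.0f} ms  token cost=${cost:.4f}")
```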
