Prisma AIRS Cursor Hooks intercepts prompts and AI responses in the Cursor IDE, scanning them in real-time via the Prisma AI Runtime Security (AIRS) Sync API. Detects prompt injections, malicious code, sensitive data leakage, toxic content, and policy violations before they reach the LLM or the developer.
Install¶
How It Works¶
flowchart LR
A[Developer Prompt] --> B[beforeSubmitPrompt Hook]
B -->|AIRS Scan| C{Verdict}
C -->|Allow| D[Cursor AI Agent]
C -->|Block| E[Block Message]
D --> F[AI Response]
F --> G[afterAgentResponse Hook]
G -->|AIRS Scan| H{Verdict}
H -->|Allow| I[Display to Developer]
H -->|Block| J[Block Message]
Capabilities¶
-
Prompt Scanning
Scans every prompt before it reaches the AI agent. Detects prompt injection, DLP violations, toxicity, and custom topic policy violations.
-
Response & Code Scanning
Parses AI responses to extract code blocks separately. Natural language and code are scanned independently, enabling malicious code detection via WildFire/ATP.
-
Enforce or Observe
Three modes:
observe(log only),enforce(block on detection),bypass(skip). Start in observe mode to audit, switch to enforce when ready. -
Fail-Open Design
Never blocks the developer on infrastructure failures. Circuit breaker pattern bypasses scanning after consecutive API failures with automatic recovery.
Get Started¶
-
Install
Install from npm, set environment variables, and register hooks in Cursor.
-
Quick Start
Get scanning in under 5 minutes.
-
Configure
Modes, enforcement actions, profiles, circuit breaker, and logging.
-
Architecture
Scanning flow, module design, and key decisions.