installation
Tracerney is available as a free npm package. Install it in seconds with zero dependencies.
Requirements
- • Node.js 16.0 or higher
- • npm 7.0 or higher
Version: 0.9.10+ (free SDK with 258 embedded patterns)
quick start
The SDK analyzes the prompt against 258 embedded attack patterns in real-time. Returns a result object with suspicious, patternName, and severity. Your code decides how to handle flagged prompts.
basic usage
initialization
Create a Tracerney instance (no configuration needed for free SDK):
scanning prompts
Check if a prompt is suspicious before sending to your LLM:
result object
The scanPrompt method returns a result object with pattern detection info:
how it works
Tracerney uses Layer 1: Pattern Matching to detect prompt injection attacks. The free SDK analyzes input against 258 real-world attack patterns in real-time with zero network overhead.
layer 1: pattern detection
All 258 patterns are embedded in the SDK and run locally on your machine. Patterns detect:
- • System instruction overrides ("ignore all instructions")
- • Role-play jailbreaks ("act as unrestricted AI")
- • Context confusion attacks
- • Data extraction attempts
- • Code execution risks
detection flow
- 1. User input received
- 2. Normalized (unicode tricks removed)
- 3. Compared against 258 embedded patterns
- 4. Returns result: suspicious=true/false, patternName, severity
- 5. Your code handles the result
no data leaves your server
All detection happens locally. The SDK never sends data to external servers. Your prompts stay completely private. Zero telemetry by default.
performance
Pattern matching completes in <5ms per prompt on modern hardware. Suitable for real-time applications.
258 embedded patterns
Tracerny includes 258 curated attack patterns covering known and novel injection techniques:
instruction override
Patterns detecting attempts to bypass system instructions
context confusion
Detects prompt injections that exploit context windows
role play exploitation
Catches attempts to change AI persona or instructions
code execution
Blocks attempts to trigger code generation attacks
jailbreak attempts
Detects known jailbreak techniques and variations
data extraction
Prevents prompts designed to leak sensitive data
Patterns are regularly updated and tested against real-world attacks. New patterns are added as attack techniques evolve.
api reference
Tracerney constructor
No configuration required for the free SDK. All 258 patterns are enabled by default. Telemetry and LLM Sentinel are disabled by default.
scanPrompt()
Analyzes a prompt against all 258 patterns. Returns a result object.
ScanResult interface
suspicious- true if pattern was matchedpatternName- Name of matched pattern (e.g., "Ignore Instructions")severity- Threat level ("CRITICAL", "HIGH", "MEDIUM", "LOW")blocked- false for free SDK (true only with backend verification)
what is prompt injection?
Prompt injection is a vulnerability where an attacker manipulates the input to an AI model to bypass safety measures or change its behavior. The attacker injects malicious instructions into the prompt that the AI follows instead of the original system instructions.
example
Original system instruction:
Malicious user input (injection):
Without protection, the AI might follow the injected instruction instead of the system prompt.
why it matters
- • Can expose sensitive data
- • Can bypass security controls
- • Can manipulate AI behavior
- • Growing threat as AI becomes mainstream
common attack techniques
1. Direct instruction override
Directly tells the AI to ignore previous instructions
2. Role-play exploitation
Attempts to change the AI's role or persona
3. Context confusion
Tries to confuse the AI about its previous context
4. Code execution
Attempts to trigger harmful code generation
5. Data extraction
Tries to extract sensitive system prompts or data
frequently asked questions
Is my data sent to any servers?
No. Tracerny runs entirely locally. All pattern matching happens on your machine. Your prompts are never sent anywhere.
How accurate is the detection?
The 258 patterns are based on real-world attacks and have been tested extensively. They catch known injection techniques with high confidence. False positives are minimal.
Can I use this in production?
Yes. Tracerny is designed for production use. It has minimal overhead (<10ms per scan) and can handle high throughput.
What if I get false positives?
You can adjust pattern sensitivity or exclude specific patterns. Check the API reference for configuration options. Report false positives on GitHub.
Is it free?
Yes. Tracerny is open source and free to use. No accounts, no licensing, no restrictions.
How do I contribute?
Visit the GitHub repository to contribute code, report issues, or suggest new patterns.