While building browser automation tools with Azure OpenAI, I discovered something frustrating: the content filter blocks perfectly safe instructions based on word choice rather than actual risk.
This isn’t about bypassing legitimate safety measures. It’s about a filter that can’t distinguish between malicious intent and standard developer terminology.
The Problem
When defining tools for function calling, certain terms trigger Azure’s content filter even when the context is completely benign:
- run script → Blocked
- click element → Blocked
- fill form field → Blocked
These are standard operations for any browser automation tool. Playwright, Puppeteer, Selenium—they all use this exact terminology. But Azure’s filter treats them as threats.
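To make the failure concrete, here is a minimal sketch of the kind of request that gets rejected. It assumes the openai Python SDK (v1+) with an AzureOpenAI client; the endpoint, key, API version, and deployment name are placeholders, and the exact error shape can vary with your filter configuration.

```python
# Minimal sketch: an Azure OpenAI function-calling request whose tool
# description uses the "natural" browser-automation wording.
# Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-KEY",
    api_version="2024-06-01",
)

tools = [{
    "type": "function",
    "function": {
        "name": "clickElement",
        "description": "Click element on the page, fill form field values, run script if needed",
        "parameters": {
            "type": "object",
            "properties": {
                "selector": {"type": "string", "description": "CSS selector of the target"}
            },
            "required": ["selector"],
        },
    },
}]

try:
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT",  # your chat model deployment name
        messages=[{"role": "user", "content": "Submit the login form."}],
        tools=tools,
    )
    print(response.choices[0].message)
except BadRequestError as exc:
    # A filtered prompt typically surfaces as an HTTP 400 that references
    # the content filter; inspect the error body to confirm.
    print("Rejected:", exc)
```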
The Workaround
The solution is embarrassingly simple: use neutral synonyms.
| Blocked Term | Accepted Alternative |
|---|---|
| run script | process dynamic content |
| click element | activate page item |
| fill form field | update an input area |
| execute code | evaluate expression |
| inject | insert |
The identical intent, expressed in neutral language, passes instantly.
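One way to apply these substitutions consistently is a small lookup pass over tool names and descriptions before registering them. A rough sketch in Python; the SAFE_TERMS map and sanitize helper are illustrative names, not part of any SDK:

```python
# Illustrative substitution map built from the table above; extend as needed.
SAFE_TERMS = {
    "run script": "process dynamic content",
    "click element": "activate page item",
    "fill form field": "update an input area",
    "execute code": "evaluate expression",
    "inject": "insert",
}

def sanitize(text: str) -> str:
    """Replace filter-triggering phrases with neutral equivalents."""
    for blocked, neutral in SAFE_TERMS.items():
        text = text.replace(blocked, neutral)
    return text

description = "click element at the given coordinates and fill form field values"
print(sanitize(description))
# -> "activate page item at the given coordinates and update an input area values"
```

Plain string replacement is case-sensitive; if your descriptions use mixed case, a case-insensitive regex pass is the safer choice.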
Why This Matters
This reveals something important about how the filter works: it screens tool names and descriptions as part of the prompt itself. It’s pattern matching on keywords, not analyzing actual risk.
A tool called clickElement that automates form submissions is blocked. The same tool called activatePageItem doing the exact same thing passes. The filter provides no additional safety—it just forces developers to use euphemisms.
Comparison with Google Gemini
I tested the same tool definitions with Google’s Gemini models. No friction whatsoever with procedural phrasing. The tools worked exactly as expected without needing to sanitize the vocabulary.
This isn’t about one provider being “less safe.” It’s about Azure implementing safety theater that inconveniences legitimate developers while providing minimal actual protection.
The Deeper Issue
Anyone with malicious intent will simply use the euphemisms. The filter doesn’t stop bad actors—it just adds friction for legitimate use cases.
Real safety comes from:
- Understanding context and intent
- Rate limiting and monitoring
- User authentication and audit trails
- Clear terms of service with enforcement
Keyword blocking is the security equivalent of banning the word “knife” from cooking websites.
Practical Advice
If you’re building tools with Azure OpenAI function calling:
- Audit your tool names and descriptions for trigger words before deployment (see the sketch after this list)
- Use neutral, abstract terminology in descriptions
- Test with actual API calls early—the playground may behave differently
- Document the translations so your team understands the mapping
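For the first item, a pre-deployment check can be as simple as scanning every tool definition for phrases the filter has rejected before and failing the build if any appear. A sketch under those assumptions; the TRIGGER_TERMS list and audit helper are my own illustrative names:

```python
# Illustrative pre-deployment check: flag tool definitions that contain
# phrases the filter has rejected before.
import sys

TRIGGER_TERMS = ["run script", "click element", "fill form field", "execute code", "inject"]

def audit(tools: list[dict]) -> list[str]:
    """Return warnings for tool names/descriptions containing trigger terms."""
    warnings = []
    for tool in tools:
        fn = tool.get("function", tool)  # handle both flat and OpenAI-style nested definitions
        text = f"{fn.get('name', '')} {fn.get('description', '')}".lower()
        for term in TRIGGER_TERMS:
            if term in text:
                warnings.append(f"{fn.get('name')}: contains '{term}'")
    return warnings

if __name__ == "__main__":
    tools = [{"name": "clickElement", "description": "Clicks an element and may run script"}]
    problems = audit(tools)
    if problems:
        print("\n".join(problems))
        sys.exit(1)
```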
Here’s an example of a sanitized tool definition:
```json
{
  "name": "activatePageItem",
  "description": "Activates an interactive item on the page at the specified coordinates",
  "parameters": {
    "type": "object",
    "properties": {
      "x": { "type": "number", "description": "Horizontal position" },
      "y": { "type": "number", "description": "Vertical position" }
    }
  }
}
```
Instead of the more natural:
```json
{
  "name": "clickElement",
  "description": "Clicks an element on the page at the specified coordinates",
  "parameters": { ... }
}
```
Conclusion
Azure’s content filter for function calling needs refinement. Pattern matching on keywords without context analysis creates friction for developers while providing minimal security benefit.
Until that changes, the workaround is simple: speak in euphemisms. Your browser automation tool doesn’t “click buttons”—it “activates interactive page items.”
Building AI tools that need browser automation? I’ve navigated these restrictions extensively. Get in touch if you’re facing similar challenges.