TL;DR
- Plugins are the policy layer in NucleusIQ.
- Start with call limits, then add retry, tool guard, and PII guard based on risk.
- Use HumanApprovalPlugin for sensitive operations.
- Compose plugins by workload instead of enabling everything by default.
Why Plugins Matter in NucleusIQ
In NucleusIQ, plugins are where policy and reliability become enforceable. They help you control:
- cost,
- risk,
- safety,
- tool behavior,
- context behavior.
Without plugins, your agent may still work. With plugins, your agent can be governed.
Plugin Categories You Should Use First
Cost Control
- ModelCallLimitPlugin
- ToolCallLimitPlugin
Reliability
- ToolRetryPlugin
- ModelFallbackPlugin
Security and Governance
- ToolGuardPlugin
- PIIGuardPlugin
- HumanApprovalPlugin
Context Management
ContextWindowPlugin
Start simple, then layer only what your workload needs.
Basic Plugin Setup Example
import asyncio
from nucleusiq.agents import Agent
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.plugins.builtin import ModelCallLimitPlugin, ToolCallLimitPlugin
from nucleusiq_openai import BaseOpenAI
agent = Agent(
name="governed_agent",
llm=BaseOpenAI(model_name="gpt-4o-mini"),
config=AgentConfig(execution_mode=ExecutionMode.STANDARD),
plugins=[
ModelCallLimitPlugin(max_calls=10),
ToolCallLimitPlugin(max_calls=8),
],
)
async def main():
await agent.initialize()
result = await agent.execute({"id": "p1", "objective": "Answer this task within limits."})
print(result)
asyncio.run(main())
This gives immediate guardrails for budget and loop control.
Tool Safety Example: ToolGuardPlugin
Use allowlist or blocklist behavior for tool access.
from nucleusiq.plugins.builtin import ToolGuardPlugin
guard = ToolGuardPlugin(
blocked=["delete_file"],
on_deny=lambda name, args: f"SECURITY: Tool '{name}' is blocked by policy."
)
Apply it in your agent:
agent = Agent(
name="safe_tool_agent",
llm=BaseOpenAI(model_name="gpt-4o-mini"),
tools=[...],
config=AgentConfig(execution_mode=ExecutionMode.STANDARD),
plugins=[guard],
)
This pattern is essential in enterprise environments.
Data Protection Example: PIIGuardPlugin
Protect sensitive fields in user input and model output.
from nucleusiq.plugins.builtin import PIIGuardPlugin
pii_guard = PIIGuardPlugin(
pii_types=["email", "phone", "ssn"],
strategy="redact",
apply_to_output=True,
)
Attach it to your agent plugin list for data-safe workflows.
Reliability Example: Retry + Fallback
Use retries for transient issues and fallback for model/provider resilience.
from nucleusiq.plugins.builtin import ToolRetryPlugin, ModelFallbackPlugin
plugins = [
ToolRetryPlugin(max_retries=2, base_delay=0.5, max_delay=5.0),
ModelFallbackPlugin(fallbacks=["gpt-4o-mini"], retry_on=(Exception,)),
]
This protects production traffic from temporary failures.
Approval Workflow Example: HumanApprovalPlugin
For risky tools (deletion, external writes, high-impact actions), require approval.
from nucleusiq.plugins.builtin import HumanApprovalPlugin
async def approval_policy(tool_name: str, tool_args: dict) -> bool:
safe_tools = {"search_contacts", "add", "multiply"}
return tool_name in safe_tools
approval = HumanApprovalPlugin(
approval_callback=approval_policy,
auto_approve=["add", "multiply"],
)
This pattern provides controlled autonomy with explicit human gates.
Suggested Plugin Stack by Risk Level
Low Risk
- ModelCallLimitPlugin
Medium Risk
- ModelCallLimitPlugin
- ToolCallLimitPlugin
- ToolRetryPlugin
High Risk
- ModelCallLimitPlugin
- ToolCallLimitPlugin
- ToolRetryPlugin
- ToolGuardPlugin
- PIIGuardPlugin
- HumanApprovalPlugin
Do not use every plugin by default. Use policy-driven composition.
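The tiers above can be captured as a small policy table. This is a minimal sketch that uses plugin class names as strings so it stands alone; in a real project you would map each name to the constructor calls shown earlier (limits, retries, guards, approval).

```python
# Map each risk level to the plugin stack it requires. Names mirror the
# NucleusIQ builtin plugins used throughout this post.
RISK_STACKS = {
    "low": ["ModelCallLimitPlugin"],
    "medium": ["ModelCallLimitPlugin", "ToolCallLimitPlugin", "ToolRetryPlugin"],
    "high": [
        "ModelCallLimitPlugin",
        "ToolCallLimitPlugin",
        "ToolRetryPlugin",
        "ToolGuardPlugin",
        "PIIGuardPlugin",
        "HumanApprovalPlugin",
    ],
}

def stack_for(risk_level: str) -> list:
    """Return the plugin stack required for a workload's risk level."""
    try:
        return RISK_STACKS[risk_level]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}") from None

print(stack_for("medium"))
# → ['ModelCallLimitPlugin', 'ToolCallLimitPlugin', 'ToolRetryPlugin']
```

Keeping the mapping in one place makes the "policy-driven composition" rule reviewable: a new workload declares a risk level instead of hand-picking plugins.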
Plugin Order and Observability Tips
- Keep cost-control plugins always enabled.
- Add safety plugins before broad tool exposure.
- Log plugin decisions for audits.
- Track denial events, retries, and fallback frequency.
- Review plugin configs each release cycle.
Your plugin layer should evolve with your risk profile.
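To make the "log plugin decisions for audits" tip concrete, here is a minimal structured-logging sketch. The event shape (`plugin`, `decision`, plus free-form detail) is an assumption, not a NucleusIQ API; adapt it to whatever your plugin hooks actually emit.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("plugin.audit")

def log_plugin_decision(plugin: str, decision: str, **detail) -> str:
    """Emit one structured audit event per plugin decision and return it."""
    event = json.dumps({"plugin": plugin, "decision": decision, **detail})
    audit.info(event)
    return event

# Example: record a ToolGuard denial so it is queryable later.
log_plugin_decision("ToolGuard", "DENY", tool="delete_file")
```

One JSON line per decision is enough to drive the denial/retry/fallback tracking suggested above without any extra infrastructure.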
Annotated End-to-End Plugin Stack Example
This example uses a practical plugin stack for a governed Standard-mode assistant. Code comments explain the purpose of each component.
import asyncio
from nucleusiq.agents import Agent
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.plugins.builtin import (
ModelCallLimitPlugin,
ToolCallLimitPlugin,
ToolRetryPlugin,
ToolGuardPlugin,
PIIGuardPlugin,
HumanApprovalPlugin,
)
from nucleusiq.tools import BaseTool
from nucleusiq_openai import BaseOpenAI
def search_contacts(query: str) -> str:
# Simulate a retrieval tool with potentially sensitive data.
return "Alice - alice@company.com - Phone: 555-123-4567"
search_tool = BaseTool.from_function(
search_contacts,
name="search_contacts",
description="Search company contact records",
)
async def approval_policy(tool_name: str, tool_args: dict) -> bool:
# Approve only read-style tools; deny destructive actions by default.
return tool_name in {"search_contacts"}
agent = Agent(
name="governed_contact_agent",
llm=BaseOpenAI(model_name="gpt-4o-mini"),
tools=[search_tool],
config=AgentConfig(execution_mode=ExecutionMode.STANDARD),
plugins=[
ModelCallLimitPlugin(max_calls=12), # Cost guardrail
ToolCallLimitPlugin(max_calls=8), # Tool-loop guardrail
ToolRetryPlugin(max_retries=2, base_delay=0.5, max_delay=5.0), # Reliability
ToolGuardPlugin(allowed=["search_contacts"]), # Tool governance
PIIGuardPlugin( # Data protection
pii_types=["email", "phone"],
strategy="redact",
apply_to_output=True,
),
HumanApprovalPlugin(approval_callback=approval_policy), # Human-in-the-loop gate
],
)
async def main():
await agent.initialize()
task = {"id": "plug-01", "objective": "Find Alice's contact and summarize it without exposing PII."}
result = await agent.execute(task)
print(result)
asyncio.run(main())
Example Output and Policy Effects
Expected behavior in this stack:
- tool calls are allowed only for search_contacts,
- raw contact details from the tool are sanitized before the user-visible response,
- if the tool transiently fails, the retry policy attempts recovery,
- model/tool call budgets are enforced.
Example output:
Contact found for Alice. Email and phone were redacted by policy.
Please request secure channel access for full details.
Example policy event log:
[plugin] ToolGuard: ALLOW search_contacts
[plugin] PIIGuard: REDACT email, phone
[plugin] HumanApproval: APPROVED search_contacts
This “output + policy log” pairing is important for audits.
Plugin Composition Strategy by Environment
Development
- focus on observability and debugging,
- lower limits are fine for quick feedback,
- keep logs verbose.
Staging
- mimic production plugin stack,
- run synthetic tasks with known failure cases,
- validate retries, denies, and redactions.
Production
- enforce strict limits and allowlists,
- enable approval for sensitive operations,
- collect policy metrics for compliance reviews.
Treat plugin configuration like infrastructure policy, not optional app logic.
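One way to treat it like infrastructure policy is to resolve limits from a per-environment profile, following the Development/Staging/Production guidance above. The profile values and the `APP_ENV` variable are illustrative assumptions, not NucleusIQ defaults.

```python
import os

# Per-environment limit profiles: lower limits and verbose logs in dev,
# strict limits plus approval gates in staging and production.
PROFILES = {
    "development": {"model_calls": 5, "tool_calls": 4, "require_approval": False, "verbose_logs": True},
    "staging": {"model_calls": 12, "tool_calls": 8, "require_approval": True, "verbose_logs": True},
    "production": {"model_calls": 12, "tool_calls": 8, "require_approval": True, "verbose_logs": False},
}

def profile_for(env=None):
    """Resolve the plugin limit profile for the current environment."""
    env = env or os.getenv("APP_ENV", "development")
    if env not in PROFILES:
        raise ValueError(f"Unknown environment: {env!r}")
    return PROFILES[env]

print(profile_for("production"))
```

The resolved profile then feeds the plugin constructors, e.g. `ModelCallLimitPlugin(max_calls=profile["model_calls"])`, so environments differ only in data, not in code paths.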
Recommended Test Cases for Plugin-Based Agents
To make plugin behavior reliable, automate tests around these cases:
- Limit enforcement test: force repeated calls and verify ModelCallLimitPlugin or ToolCallLimitPlugin halts execution.
- Retry behavior test: simulate a transient tool failure and verify retry count/backoff behavior.
- PII sanitization test: provide known PII payloads and verify output redaction/masking.
- Approval denial test: return False in the approval callback and verify the safe failure message.
- Tool governance test: trigger a blocked tool and verify the deny hook output.
These tests reduce regression risk when teams update prompts, models, or tools.
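The first case can be sketched without a live model. `StubAgent` and `CallLimitError` below are hypothetical stand-ins for a real agent wired with ModelCallLimitPlugin; the point is the test shape, not the agent internals.

```python
# Hypothetical stand-ins for the limit-enforcement test case.
class CallLimitError(Exception):
    """Raised when the call budget is exhausted (stand-in for the plugin's halt)."""

class StubAgent:
    """Minimal agent that enforces a call budget like ModelCallLimitPlugin."""
    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.calls = 0

    def execute(self, task: dict) -> str:
        if self.calls >= self.max_calls:
            raise CallLimitError("model call budget exhausted")
        self.calls += 1
        return "ok"

def test_limit_enforced():
    agent = StubAgent(max_calls=3)
    for _ in range(3):
        assert agent.execute({"objective": "spam"}) == "ok"
    try:
        agent.execute({"objective": "spam"})
        assert False, "expected CallLimitError on the 4th call"
    except CallLimitError:
        pass

test_limit_enforced()
print("limit enforcement test passed")
```

The same pattern (drive the agent past a threshold, assert the halting exception or message) covers the tool-call limit and retry-exhaustion cases.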
Failure-Mode Design: What to Return to Users
Plugin rejections should produce deterministic, user-friendly responses. Recommended response patterns:
- Denied action: “This action is restricted by policy.”
- Approval required: “This request needs manual approval before execution.”
- Sensitive data redacted: “Some fields were masked for security.”
- Retry exhausted: “Temporary service issue; please retry shortly.”
Consistent failure messaging improves trust and supportability.
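A simple way to guarantee deterministic messaging is a single lookup table, sketched below with the four patterns above; the failure-mode keys are assumptions you would align with your plugin event names.

```python
# Deterministic, user-facing messages for each plugin failure mode.
FAILURE_MESSAGES = {
    "denied": "This action is restricted by policy.",
    "approval_required": "This request needs manual approval before execution.",
    "redacted": "Some fields were masked for security.",
    "retry_exhausted": "Temporary service issue; please retry shortly.",
}

def user_message(failure_mode: str) -> str:
    # Fall back to a generic message so users never see raw internals.
    return FAILURE_MESSAGES.get(failure_mode, "The request could not be completed.")

print(user_message("denied"))  # → This action is restricted by policy.
```

Routing every plugin rejection through one function keeps support scripts and documentation in sync with what users actually see.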
Plugin Metrics That Actually Matter
Many teams track only latency and cost. For plugin-governed systems, also track:
- deny rate per tool name
- redaction rate by PII type
- approval acceptance/denial rate
- retry success rate after first failure
- percentage of tasks completed without policy intervention
These metrics tell you whether policies are tuned correctly or overly restrictive.
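The deny rate per tool, for example, can be computed directly from policy event lines in the format shown earlier (`[plugin] ToolGuard: ALLOW search_contacts`). This is a sketch that assumes that exact line format.

```python
from collections import Counter

def deny_rate_per_tool(log_lines):
    """Compute DENY rate per tool from ToolGuard policy event lines."""
    totals, denies = Counter(), Counter()
    for line in log_lines:
        parts = line.split()
        # Expect: "[plugin] ToolGuard: <DECISION> <tool_name>"
        if len(parts) < 4 or parts[1] != "ToolGuard:":
            continue
        decision, tool = parts[2], parts[3]
        totals[tool] += 1
        if decision == "DENY":
            denies[tool] += 1
    return {tool: denies[tool] / totals[tool] for tool in totals}

sample = [
    "[plugin] ToolGuard: ALLOW search_contacts",
    "[plugin] ToolGuard: DENY delete_file",
    "[plugin] PIIGuard: REDACT email, phone",
]
print(deny_rate_per_tool(sample))
```

Redaction rate by PII type and approval acceptance rate follow the same counting pattern over their respective event lines.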
Plugin Governance Review Template (Monthly)
A simple monthly review keeps plugin policy aligned with real product behavior:
Inputs
- top denied tools and deny reasons,
- redaction counts by PII category,
- retry distribution and success after retry,
- approval queue volume and decision latency.
Questions
- Are we blocking legitimate user value too often?
- Are there tools that should move from deny to controlled approval?
- Are retry settings too aggressive for low-value failures?
- Is any PII type under-detected in current policy patterns?
Outputs
- plugin config changes for next release,
- updated allowlist/denylist for sensitive operations,
- documentation updates for support and compliance teams.
This governance loop turns plugins into a living policy system instead of a one-time static configuration.
Incident Response Tip: Use Plugin Events as First Signal
When a production incident happens, plugin events often reveal the root cause faster than raw model logs. Start by checking:
- tool deny bursts,
- retry spikes on one tool,
- sudden PII redaction jumps,
- increased approval denials in one workflow.
These signals help teams distinguish policy misconfiguration from model quality issues and reduce mean time to recovery.
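A deny or retry "burst" is easy to detect with a sliding window over event timestamps. The window size and threshold below are illustrative assumptions; tune them to your traffic.

```python
from collections import deque

def detect_deny_burst(timestamps, window_s=60.0, threshold=10):
    """Return True if any window of `window_s` seconds holds >= `threshold` events.

    `timestamps` are event times in seconds (e.g. from deny events on one tool).
    """
    window = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop events older than the window relative to the newest event.
        while window and t - window[0] > window_s:
            window.popleft()
        if len(window) >= threshold:
            return True
    return False
```

Running this per tool name over the deny events from your policy log gives the "tool deny bursts" first signal with a few lines of code.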
Final Takeaway
NucleusIQ plugins convert “best practice” into executable policy. They are how you enforce limits, safety, and reliability in real deployments.
Start with call limits, then add tool/security controls, and finally add approval + PII protection for sensitive workflows.
Footnotes:
Additional Reading
- GitHub: NucleusIQ
OK, that's it, we are done now. If you have any questions or suggestions, please feel free to comment. I'll come up with more topics on Machine Learning and Data Engineering soon. Please also comment and subscribe if you like my work; any suggestions are welcome and appreciated.