
How to Use NucleusIQ Tools, Plugins, and Memory

By Nucleusbox

TL;DR

  • Reliable NucleusIQ systems are built by combining mode + tools + plugins + memory.
  • Use lightweight stacks for low-risk traffic and stricter stacks for high-stakes workflows.
  • Standard mode is the default production baseline for most teams.
  • Tune combinations using telemetry (cost, latency, correction rate, denial/retry signals).

Why Combine Tools, Plugins, and Memory?

Most production issues are not caused by one missing component. They come from missing combinations.

Examples:

  • tools without guardrails -> expensive or unsafe behavior,
  • memory without context controls -> noisy prompts and drift,
  • plugins without useful tools -> safe but weak outcomes.

NucleusIQ works best when these three are composed intentionally.


The Layered NucleusIQ Pattern

Think in layers:

  1. Execution mode (DIRECT, STANDARD, AUTONOMOUS)
  2. Tools (capabilities and integrations)
  3. Memory (context retention policy)
  4. Plugins (safety, limits, governance, resilience)

This layering gives a clean way to scale from prototype to production.


Pattern A: Direct + Light Plugins (High Speed)

Use for low-risk, high-volume traffic.

from nucleusiq.agents import Agent
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.plugins.builtin import ModelCallLimitPlugin, PIIGuardPlugin
from nucleusiq_openai import BaseOpenAI

agent = Agent(
    name="fast_support_bot",
    llm=BaseOpenAI(model_name="gpt-4o-mini"),
    config=AgentConfig(execution_mode=ExecutionMode.DIRECT),
    plugins=[
        ModelCallLimitPlugin(max_calls=5),
        PIIGuardPlugin(pii_types=["email"], strategy="redact"),
    ],
)

Use when speed is primary and risk is limited.


Pattern B: Standard + Tools + Reliability Plugins (Default Production)

Use for most business workflows.

from nucleusiq.agents import Agent
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.plugins.builtin import ModelCallLimitPlugin, ToolCallLimitPlugin, ToolRetryPlugin
from nucleusiq.tools import BaseTool
from nucleusiq_openai import BaseOpenAI

def search_docs(query: str) -> str:
    return f"Search results for: {query}"

search_tool = BaseTool.from_function(
    search_docs,
    name="search_docs",
    description="Search internal docs and return top results",
)

agent = Agent(
    name="ops_assistant",
    llm=BaseOpenAI(model_name="gpt-4o-mini"),
    tools=[search_tool],
    config=AgentConfig(execution_mode=ExecutionMode.STANDARD),
    plugins=[
        ToolCallLimitPlugin(max_calls=12),
        ModelCallLimitPlugin(max_calls=15),
        ToolRetryPlugin(max_retries=2, base_delay=0.5, max_delay=5.0),
    ],
)

Use when tasks need tool loops and stable behavior at scale.


Pattern C: Standard + Tools + Memory + Guardrails (Conversation Workloads)

Use for chat-style assistants that must remember context.

# Reuses Agent, BaseOpenAI, and search_tool from Pattern B.
from nucleusiq.memory.factory import MemoryFactory, MemoryStrategy
from nucleusiq.plugins.builtin import ContextWindowPlugin, ModelCallLimitPlugin, ToolGuardPlugin

memory = MemoryFactory.create_memory(MemoryStrategy.SUMMARY_WINDOW)

agent = Agent(
    name="customer_assistant",
    llm=BaseOpenAI(model_name="gpt-4o-mini"),
    tools=[search_tool],
    memory=memory,
    config=AgentConfig(execution_mode=ExecutionMode.STANDARD),
    plugins=[
        ContextWindowPlugin(max_messages=24, keep_recent=6),
        ToolGuardPlugin(allowed=["search_docs"]),
        ModelCallLimitPlugin(max_calls=20),
    ],
)

Use when conversation continuity and policy control are both required.


Pattern D: Autonomous + Validation-Focused Plugins (High Stakes)

Use for risk-sensitive analysis where quality is more important than speed.

# Reuses Agent, BaseOpenAI, MemoryFactory, MemoryStrategy, and search_tool
# from the earlier patterns.
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.plugins.builtin import (
    ModelCallLimitPlugin,
    ToolCallLimitPlugin,
    HumanApprovalPlugin,
    ToolGuardPlugin,
)

async def approval_policy(tool_name: str, tool_args: dict) -> bool:
    # Allow read-only tools; deny write/destructive operations.
    return tool_name in {"search_docs", "lookup_financials"}

agent = Agent(
    name="due_diligence_agent",
    llm=BaseOpenAI(model_name="o3"),
    tools=[search_tool],
    memory=MemoryFactory.create_memory(MemoryStrategy.SUMMARY_WINDOW),
    config=AgentConfig(execution_mode=ExecutionMode.AUTONOMOUS),
    plugins=[
        ToolGuardPlugin(allowed=["search_docs", "lookup_financials"]),
        HumanApprovalPlugin(approval_callback=approval_policy),
        ToolCallLimitPlugin(max_calls=40),
        ModelCallLimitPlugin(max_calls=50),
    ],
)

Use when output mistakes can cause legal, financial, or strategic damage.


Workload-to-Stack Matrix

| Workload Type         | Mode       | Tools    | Memory           | Plugins                            |
|-----------------------|------------|----------|------------------|------------------------------------|
| FAQ / quick chat      | Direct     | Optional | Full/sliding     | Call limit, PII guard              |
| Internal assistant    | Standard   | Yes      | Sliding/summary  | Call limits, retry, tool guard     |
| Long support sessions | Standard   | Yes      | Summary + window | Context window, retry, PII guard   |
| Due diligence / audit | Autonomous | Yes      | Summary + window | Tool guard, approval, strict limits |

This matrix is a practical starting point. Tune it using telemetry and error impact.


Implementation Checklist

  • Define workload categories by risk and complexity.
  • Assign mode by category.
  • Add minimum plugin baseline (limits first).
  • Add memory strategy for conversation-heavy flows.
  • Restrict tools with allowlists for sensitive paths.
  • Add streaming for visibility and debugging.
  • Review metrics monthly and update routing/policies.

Common Production Mistakes

1) Same Stack for Every Endpoint

Different workloads require different combinations.

2) No Policy Layer

Tools without guardrails create silent risk.

3) Memory Without Measurement

Track token usage and correction rates by strategy.
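A lightweight way to get this measurement is to tally tokens and corrections per memory strategy in your own telemetry layer. The class below is an illustrative sketch, not a NucleusIQ API; the strategy labels and field names are assumptions.

```python
from collections import defaultdict

class MemoryStats:
    """Hypothetical per-strategy tracker for prompt tokens and corrections."""

    def __init__(self):
        self.tokens = defaultdict(int)       # total prompt tokens per strategy
        self.turns = defaultdict(int)        # turns observed per strategy
        self.corrections = defaultdict(int)  # user corrections per strategy

    def record(self, strategy: str, prompt_tokens: int, corrected: bool) -> None:
        self.tokens[strategy] += prompt_tokens
        self.turns[strategy] += 1
        self.corrections[strategy] += int(corrected)

    def correction_rate(self, strategy: str) -> float:
        return self.corrections[strategy] / max(self.turns[strategy], 1)

stats = MemoryStats()
stats.record("summary_window", prompt_tokens=900, corrected=False)
stats.record("summary_window", prompt_tokens=950, corrected=True)
print(stats.correction_rate("summary_window"))  # 0.5
```

Comparing these numbers across strategies (sliding vs. summary window) tells you whether a memory policy is paying for its token cost.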

4) Autonomous Without Approval

High-stakes automation should include explicit checks.


Reference Architecture: Request Routing + Mode Policy

A scalable NucleusIQ deployment should route requests before agent execution. This keeps the right stack aligned with each workload profile.

Suggested routing policy:

  1. Classify task by complexity, risk, and SLA.
  2. Select execution mode (DIRECT, STANDARD, AUTONOMOUS).
  3. Attach mode-specific plugin bundles.
  4. Attach memory strategy by session type.
  5. Execute with streaming and telemetry enabled.

This policy-driven approach keeps production behavior consistent as traffic grows.


Annotated Production Example (Single Entry, Dynamic Composition)

# Full, self-contained entry point: imports, one tool, risk-based composition.
import asyncio
from nucleusiq.agents import Agent
from nucleusiq.agents.config import AgentConfig, ExecutionMode
from nucleusiq.memory.factory import MemoryFactory, MemoryStrategy
from nucleusiq.plugins.builtin import (
    ModelCallLimitPlugin,
    ToolCallLimitPlugin,
    ToolRetryPlugin,
    ToolGuardPlugin,
    PIIGuardPlugin,
)
from nucleusiq.tools import BaseTool
from nucleusiq_openai import BaseOpenAI

def search_docs(query: str) -> str:
    # Replace with real retrieval integration in production.
    return f"Top docs for: {query}"

search_tool = BaseTool.from_function(
    search_docs,
    name="search_docs",
    description="Search internal documentation",
)

def build_agent(risk_level: str) -> Agent:
    # Dynamically choose mode and policies by risk class.
    if risk_level == "low":
        mode = ExecutionMode.DIRECT
        memory = MemoryFactory.create_memory(MemoryStrategy.SLIDING_WINDOW)
        plugins = [
            ModelCallLimitPlugin(max_calls=6),
            PIIGuardPlugin(pii_types=["email"], strategy="redact"),
        ]
    elif risk_level == "medium":
        mode = ExecutionMode.STANDARD
        memory = MemoryFactory.create_memory(MemoryStrategy.SUMMARY_WINDOW)
        plugins = [
            ToolCallLimitPlugin(max_calls=12),
            ModelCallLimitPlugin(max_calls=15),
            ToolRetryPlugin(max_retries=2, base_delay=0.5, max_delay=5.0),
            ToolGuardPlugin(allowed=["search_docs"]),
        ]
    else:
        mode = ExecutionMode.AUTONOMOUS
        memory = MemoryFactory.create_memory(MemoryStrategy.SUMMARY_WINDOW)
        plugins = [
            ToolCallLimitPlugin(max_calls=40),
            ModelCallLimitPlugin(max_calls=50),
            ToolRetryPlugin(max_retries=2, base_delay=0.5, max_delay=5.0),
            ToolGuardPlugin(allowed=["search_docs"]),
            PIIGuardPlugin(pii_types=["email", "phone", "ssn"], strategy="redact"),
        ]

    return Agent(
        name=f"policy_agent_{risk_level}",
        llm=BaseOpenAI(model_name="gpt-4o-mini"),
        tools=[search_tool],
        memory=memory,
        config=AgentConfig(execution_mode=mode),
        plugins=plugins,
    )

async def main():
    agent = build_agent("medium")
    await agent.initialize()
    result = await agent.execute({"id": "prod-1", "objective": "Find onboarding requirements for vendor integration."})
    print(result)

asyncio.run(main())

Sample Operational Results (What Good Looks Like)

When teams adopt layered composition, typical outcomes include:

  • lower correction rates for medium/high-risk flows,
  • reduced runaway loops due to tool/model call limits,
  • better compliance posture via tool allowlists and PII handling,
  • more stable latency by keeping low-risk traffic on Direct mode.

Example monthly result summary:

Endpoint Group: Internal Assistants
Direct-only baseline correction rate: 18.4%
Policy-routed composition correction rate: 7.9%
Tool failure recovery rate after retries: 91%
Policy denial events with clear messaging: 100% handled

These numbers are illustrative, but they represent the kind of deltas teams should validate in their own environments.


Deployment Checklist for Production Readiness

Architecture

  • Define risk tiers and workload classes.
  • Assign mode defaults per class.
  • Define plugin baseline per tier.

Safety and Governance

  • Enforce tool allowlists for sensitive paths.
  • Add PII handling policies where user data appears.
  • Add human approval for high-impact operations.

Reliability

  • Add retries for transient tool failure.
  • Set strict tool/model call caps.
  • Ensure deterministic fallback responses.

Observability

  • Track stream events and tool traces.
  • Capture plugin decisions (allow, deny, redact, retry).
  • Measure cost/latency/quality by mode and endpoint.
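Capturing plugin decisions can be as simple as an event log keyed by endpoint and decision type, assuming you can hook plugin outcomes from your agent wrapper. The event shape below is an assumption for illustration, not NucleusIQ telemetry output.

```python
from collections import Counter

# In-memory event log of plugin decisions (allow, deny, redact, retry).
events: list[dict] = []

def record_decision(endpoint: str, plugin: str, decision: str) -> None:
    events.append({"endpoint": endpoint, "plugin": plugin, "decision": decision})

def intervention_counts(endpoint: str) -> Counter:
    # Count non-"allow" decisions per plugin for one endpoint.
    return Counter(
        e["plugin"] for e in events
        if e["endpoint"] == endpoint and e["decision"] != "allow"
    )

record_decision("support", "ToolGuardPlugin", "allow")
record_decision("support", "ToolGuardPlugin", "deny")
record_decision("support", "PIIGuardPlugin", "redact")
print(intervention_counts("support")["ToolGuardPlugin"])  # 1
```

Rolling these counts up per mode and endpoint is what makes the monthly policy review in the next section concrete.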

Continuous Improvement

  • Review policy interventions monthly.
  • Promote/demote workloads between modes.
  • Re-tune memory and plugin configs based on telemetry.

Common Integration Pattern by Team Type

Startup teams

  • begin with Standard mode + 2 to 3 core plugins,
  • add memory only where sessions are multi-turn,
  • avoid overbuilding Autonomous paths too early.

Platform teams

  • define shared policy bundles by risk class,
  • expose composable defaults for app teams,
  • centralize telemetry and policy governance.

Enterprise teams

  • prioritize auditability and deterministic controls,
  • use strict allowlists + approval workflows,
  • validate plugin behavior through automated compliance tests.

This team-aware rollout pattern reduces friction and improves adoption.


Benchmarking Framework for Mode Combinations

Before standardizing architecture, run controlled benchmark tasks:

  1. Choose 20 to 50 representative tasks per workload class.
  2. Execute with at least two stack variants.
  3. Compare:
    • success/correction rate,
    • cost per successful task,
    • latency percentiles,
    • policy intervention rates.
  4. Select best stack per class, not one stack for everything.

This avoids premature architectural lock-in and gives objective rollout confidence.
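The comparison in step 3 reduces to summarizing per-task run records for each stack variant. The harness below is a minimal sketch; the record fields (`success`, `latency_ms`, `cost_usd`, `policy_interventions`) are assumed names, not NucleusIQ output.

```python
import statistics

def summarize(runs: list[dict]) -> dict:
    """Summarize one stack variant's benchmark runs into comparable metrics."""
    successes = [r for r in runs if r["success"]]
    latencies = sorted(r["latency_ms"] for r in runs)
    p95_index = max(0, int(len(latencies) * 0.95) - 1)  # crude p95 for small samples
    return {
        "success_rate": len(successes) / len(runs),
        "cost_per_success": sum(r["cost_usd"] for r in runs) / max(len(successes), 1),
        "p50_latency_ms": statistics.median(latencies),
        "p95_latency_ms": latencies[p95_index],
        "intervention_rate": sum(r["policy_interventions"] for r in runs) / len(runs),
    }

runs_direct = [
    {"success": True, "latency_ms": 400, "cost_usd": 0.002, "policy_interventions": 0},
    {"success": False, "latency_ms": 350, "cost_usd": 0.002, "policy_interventions": 1},
]
print(summarize(runs_direct)["success_rate"])  # 0.5
```

Running `summarize` once per stack variant, then comparing the dicts side by side per workload class, is step 4 in table form.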


Final Takeaway

NucleusIQ is strongest when you compose its core capabilities:

  • mode for orchestration depth,
  • tools for capabilities,
  • memory for continuity,
  • plugins for policy and reliability.

The winning pattern is not “maximum autonomy everywhere.” It is context-aware composition by workload.


OK, that's it, we are done now. If you have any questions or suggestions, please feel free to comment. I'll come up with more topics on Machine Learning and Data Engineering soon. Please also comment and subscribe if you like my work; any suggestions are welcome and appreciated.
