DEC-LLC Whitepaper — April 2026

Domain-Aware AI for Infrastructure

Intelligence that understands your firewall rules, your network topology, your VMs, and your backups — running locally, never leaving your network.


The Problem with General-Purpose AI in Infrastructure

Large language models are powerful. They can write code, summarize documents, and answer questions. But ask a general-purpose AI to analyze your firewall rules and it will give you generic advice. Ask it to audit your network topology and it will suggest best practices it read in a textbook. It doesn't understand your infrastructure.

Worse, most AI services require sending your configuration data — firewall rules, network maps, credentials, security policies — to a third-party cloud. For organizations in healthcare, finance, government, and critical infrastructure, this is a non-starter.

Infrastructure AI should understand your domain, run under your control, and never send your data anywhere.

Our Approach: One Engine, Domain-Specific Knowledge

Every DEC-LLC product ships with an AI plugin that shares a common inference engine but carries domain-specific knowledge unique to its role. The AI doesn't just process text — it understands the structure, semantics, and operational patterns of the infrastructure it manages.

Product (AI Domain): What It Understands

OpenUTM (Security): Firewall rule ordering, shadowed rules, nftables patterns, threat signatures, VPN tunnel health, IPS alerts, DNS filtering patterns. Covers on-prem, cloud, and hybrid perimeters.

NIVMIA (Networking): Cisco/Juniper/Arista config syntax, SNMP MIB semantics, interface health, VLAN/BGP/OSPF validation, broadcast storm detection, QoS capacity modeling, config standardization patterns, fleet-wide change impact analysis. Includes third-party vendor equipment — ISP routers, MSP firewalls, carrier CPE, SD-WAN appliances.

IVMIA (Compute): VM placement, resource contention and redistribution, hardware compatibility validation, migration policies, storage performance, capacity forecasting — plus workstation fleet health, endpoint compliance, physical device inventory across Windows, macOS, Linux, POS terminals, and field devices.

VaultSync (Data Protection): Backup chain integrity, deduplication ratios, replication lag, retention compliance, capacity forecasting. Includes mobile device backup analysis and business data extraction from backup sets.

The same engine powers all four products. The difference is what each one knows about its domain — embedded through system prompts, domain schemas, specialized analyzers, and product-specific knowledge bases.

Shared Infrastructure (identical across products)
+-----------------------------------------------+
| decllc-ai-engine      Local LLM inference     |
| decllc-ai-rag         Vector search for KB    |
| decllc-ai-guardrails  Hard safety limits      |
+-----------------------------------------------+

Product-Specific Knowledge (unique per product)
+---------------+ +---------------+ +---------------+ +---------------+
| OpenUTM       | | NIVMIA        | | IVMIA         | | VaultSync     |
| - nftables    | | - SNMP MIBs   | | - VM metrics  | | - snapshots   |
| - CVE feeds   | | - CLI syntax  | | - hypervisor  | | - storage API |
| - threat sigs | | - drift rules | | - migration   | | - recovery    |
+---------------+ +---------------+ +---------------+ +---------------+
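The shared-engine/domain-knowledge split can be sketched in a few lines of Python. Every name here (the classes, fields, and example values) is illustrative, not taken from the actual codebase:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the shared-engine / product-knowledge split.
# Class and field names are invented for illustration.

@dataclass
class SharedEngine:
    """Identical across all four products."""
    inference: str = "decllc-ai-engine"      # local LLM inference
    rag: str = "decllc-ai-rag"               # vector search for the KB
    guardrails: str = "decllc-ai-guardrails" # hard safety limits

@dataclass
class ProductPlugin:
    """Product-specific layer: what this AI knows about its domain."""
    product: str
    system_prompt: str
    domain_schemas: list = field(default_factory=list)
    knowledge_base: list = field(default_factory=list)
    engine: SharedEngine = field(default_factory=SharedEngine)

# One product's plugin: same engine, different knowledge.
openutm_ai = ProductPlugin(
    product="OpenUTM",
    system_prompt="You analyze firewall configurations and rule ordering.",
    domain_schemas=["nftables-ruleset", "vpn-tunnel-state"],
    knowledge_base=["cve-feeds", "threat-signatures"],
)
```

The point of the sketch is the shape: the `SharedEngine` defaults are the same for every product, while everything a product "knows" lives in its own plugin layer.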

What the AI Actually Does

The AI plugin operates in two phases, matching the trust level between the operator and the system.

Community & Standard Tier

Phase 1: Analyze and Advise

Read-only analysis of your infrastructure. The AI examines your configuration, identifies issues, and provides actionable recommendations. It never modifies anything.
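As a toy illustration of Phase 1 analysis, here is a minimal shadowed-rule detector of the kind the table above attributes to OpenUTM. The rule format and field names are invented for the example; they are not OpenUTM's actual schema:

```python
# Illustrative read-only analyzer: flag rules that can never match
# because an earlier rule already covers all of their traffic.
# Rule fields ("src", "dst", "port") are simplified for the sketch.

def find_shadowed_rules(rules):
    """Return indices of rules fully shadowed by an earlier rule."""
    shadowed = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            covers = (earlier["src"] in ("any", later["src"])
                      and earlier["dst"] in ("any", later["dst"])
                      and earlier["port"] in ("any", later["port"]))
            if covers:
                shadowed.append(i)  # earlier rule wins; this one is dead
                break
    return shadowed

rules = [
    {"src": "any", "dst": "10.0.0.5", "port": "443", "action": "drop"},
    # Never reached: the drop above already matches this traffic.
    {"src": "192.168.1.0/24", "dst": "10.0.0.5", "port": "443", "action": "accept"},
]
# find_shadowed_rules(rules) → [1]
```

A real analyzer must reason about address ranges, protocols, and terminal versus non-terminal actions, but the read-only contract is the same: inspect, report, recommend, modify nothing.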

Enterprise Tier

Phase 2: Managed Operations

The AI can propose and execute configuration changes within strict guardrails. Every action requires approval (or passes through a risk classifier for semi-autonomous operation).

Protecting the Customer's Investment

The AI plugin is not a replacement for the operator. It is a force multiplier that helps the operator make better decisions, catch mistakes before they happen, and maintain a state of well-being across the infrastructure.

Continuous Health Monitoring

Each product's AI runs periodic health checks against its domain knowledge.

These checks run under the customer's control, using the customer's own data. The results feed into the platform's notification system — the operator gets an alert when something needs attention, along with a specific recommendation from the AI.
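A health check of this kind is easy to picture in code. The following sketch is hypothetical (the function name, thresholds, and notification fields are assumptions, not the platform's API), but it shows the shape: a check against domain knowledge that yields an alert plus a specific recommendation:

```python
# Hypothetical health check in the VaultSync domain: replication lag.
# Threshold and notification schema are invented for illustration.

def check_replication_lag(lag_seconds, threshold=300):
    """Return a notification dict if lag exceeds the threshold, else None."""
    if lag_seconds <= threshold:
        return None
    return {
        "severity": "warning",
        "message": f"Replication lag is {lag_seconds}s (threshold {threshold}s)",
        "recommendation": "Check WAN link utilization and replica target health",
    }

alert = check_replication_lag(900)
# alert feeds the platform's notification system; the operator sees
# both the symptom and the AI's suggested next step.
```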

Institutional Knowledge That Doesn't Walk Out the Door

Every organization has an engineer who knows why that firewall rule exists, why that VLAN is configured that way, or why backups run at 2 AM instead of midnight. When that engineer leaves, the knowledge leaves with them.

The AI plugin captures and retains this institutional knowledge. When the AI analyzes a rule and the operator adds context ("this rule exists because vendor X requires port 8443 for their health checks"), that context becomes part of the knowledge base. The next engineer who asks "why is this rule here?" gets the answer.

The customer's investment is not just in hardware and software. It is in the accumulated understanding of how their infrastructure works and why it is configured the way it is. The AI protects that investment by making institutional knowledge persistent and queryable.
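A minimal sketch of that capture-and-query loop, with a plain dict standing in for the product's vector knowledge base (class and method names are invented for the example):

```python
# Illustrative knowledge capture: operator context attached to a
# config object, retrievable by the next engineer who asks "why?".
# In the real product this would be indexed for semantic search.

class KnowledgeBase:
    def __init__(self):
        self._notes = {}

    def annotate(self, object_id, author, context):
        """Record operator-supplied context for a config object."""
        self._notes.setdefault(object_id, []).append(
            {"author": author, "context": context})

    def why(self, object_id):
        """Answer 'why is this here?' from accumulated annotations."""
        return self._notes.get(object_id, [])

kb = KnowledgeBase()
kb.annotate("fw-rule-8443", "alice",
            "Vendor X requires port 8443 for their health checks")
# Years later: kb.why("fw-rule-8443") still has the answer,
# even if alice is long gone.
```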

Hard Guardrails: The AI Cannot Hurt You

The managed AI plugin (Phase 2) enforces hard guardrails at the engine level. These are not suggestions. They are compiled constraints that cannot be bypassed by prompt, configuration, or operator error.

Writes Target Draft Only

All AI modifications go to draft config. Apply is always an explicit, separate step.

Validation Before Apply

Every apply is preceded by automated validation. Invalid configs cannot be applied.

Transaction Limits

Maximum 5 rule changes per transaction. Prevents runaway modifications.

Protected Interfaces

Management and uplink interfaces cannot be disabled. The AI cannot lock you out.

Protected Ports

SSH and management API ports cannot be blocked on the management interface.

No Auth Modifications

Users, roles, and credentials are read-only to the AI. It cannot escalate its own privileges.

No License Modifications

The licensing subsystem is completely isolated from AI operations.

No Backup Deletion

Backup and recovery systems are read-only. The AI can analyze but never destroy.

Auto-Rollback

Loss of management connectivity triggers automatic restoration within 60 seconds.

Complete Audit Trail

Every AI action is logged to an append-only audit file. Tamper-evident by design.
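The whitepaper states these constraints are compiled into the engine, so Python is only a stand-in here. This sketch shows how a few of the guardrails above (transaction limit, draft-only writes, protected interfaces and ports, read-only subsystems) could gate a proposed transaction; the constants and transaction shape are assumptions for illustration:

```python
# Illustrative guardrail checks. In the shipped product these are
# compiled constraints, not interpretable code; values are assumed.

MAX_CHANGES = 5                          # per-transaction rule limit
PROTECTED_INTERFACES = {"mgmt0", "uplink0"}
PROTECTED_PORTS = {22, 8443}             # SSH and mgmt API (assumed ports)
READONLY_SUBSYSTEMS = {"auth", "licensing", "backup"}

def check_guardrails(transaction):
    """Return a list of violations; empty list means the draft may proceed."""
    violations = []
    changes = transaction.get("changes", [])
    if len(changes) > MAX_CHANGES:
        violations.append(f"transaction exceeds {MAX_CHANGES} changes")
    for c in changes:
        if c.get("target") != "draft":
            violations.append("writes must target draft config")
        if c.get("interface") in PROTECTED_INTERFACES and c.get("op") == "disable":
            violations.append(f"cannot disable protected interface {c['interface']}")
        if c.get("block_port") in PROTECTED_PORTS:
            violations.append(f"cannot block protected port {c['block_port']}")
        if c.get("subsystem") in READONLY_SUBSYSTEMS:
            violations.append(f"{c['subsystem']} subsystem is read-only to the AI")
    return violations
```

Note the fail-closed shape: anything the checks cannot positively clear becomes a violation, and apply remains a separate explicit step even for a clean draft.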

Local Inference: Your Data Never Leaves

The AI engine runs entirely on the customer's hardware using local LLM inference. No API calls to OpenAI. No data sent to any cloud service. No telemetry. The model runs on the appliance itself or on a GPU-equipped server on the customer's network.

Component        Where It Runs        What It Needs
LLM Inference    Local (Ollama)       GPU recommended (8+ GB VRAM), CPU fallback available
Knowledge Base   Local (vector DB)    Product-specific, encrypted with license-derived key
Analyzers        On the appliance     Python, no external dependencies
Guardrails       On the appliance     Compiled binary, cannot be modified at runtime

For organizations that cannot have AI processing on the appliance itself (resource-constrained edge deployments), the inference engine can run on a dedicated GPU server on the same network. The data still never leaves the customer's environment.
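Since the table names Ollama, the inference call is just a request to a localhost endpoint. Ollama serves `POST /api/generate` on port 11434 by default; the model name and prompt below are placeholders, and the send step is left commented because it requires a running local instance:

```python
import json

# Sketch of an analyzer calling the local inference engine.
# The endpoint is localhost only: there is nowhere else to send data.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build the JSON body for a non-streaming local completion."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_request("llama3", "Summarize the risks in this nftables ruleset: ...")

# Sending it (needs a running local Ollama):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```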

Privacy by Architecture, Not by Policy

We don't promise to keep your data private. We make it architecturally impossible for your data to leave. There is no API endpoint to call, no telemetry to disable, no opt-out to configure. The inference engine talks to localhost. That's it.

Cross-Product Intelligence

When multiple DEC-LLC products are deployed together, their AI plugins share context through the SDNS platform's internal communication bus. This creates intelligence that no single product could achieve alone.

Each product's AI is an expert in its domain. Together, they form a team that covers the full infrastructure stack — security, networking, compute, and data protection — with correlated intelligence and coordinated response.

The Human Is Always in Control

The AI is a tool, not an authority. It analyzes, recommends, and (when authorized) executes. But the human operator is always the final authority. The approval workflow ensures this at every level:

  1. Supervised mode (default): Every AI action requires explicit human approval. The AI proposes; the human decides.
  2. Semi-autonomous mode: Low-risk actions (adding a log rule, adjusting a DHCP scope) proceed automatically. High-risk actions (deleting rules, changing addressing) require approval.
  3. Autonomous mode: Available for environments that need it (automated remediation, after-hours response). Hard guardrails still apply. Auto-rollback still protects against mistakes.
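The three modes reduce to a small routing decision. This sketch is illustrative (the action names and risk tiers are taken from the examples above, not from a shipped taxonomy), but it shows where the risk classifier sits:

```python
# Illustrative approval routing for the three operating modes.
# Risk tiers here are assumptions based on the examples in the text.

LOW_RISK = {"add_log_rule", "adjust_dhcp_scope"}

def route_action(action, mode="supervised"):
    """Decide whether an AI-proposed action executes or waits for a human."""
    if mode == "supervised":
        return "needs_approval"          # default: the human decides everything
    if mode == "semi_autonomous":
        return "auto_execute" if action in LOW_RISK else "needs_approval"
    if mode == "autonomous":
        return "auto_execute"            # hard guardrails still apply
    raise ValueError(f"unknown mode: {mode}")
```

Whatever this function returns, the guardrails and auto-rollback described earlier still sit underneath it; the mode only controls when a human signs off, not what the AI is allowed to touch.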

The operator can step in at any time, override any AI decision, and roll back any change. The AI assists the operator's judgment. It does not replace it.

The platform helps the operator not get hurt. The operator decides what happens. This is assisted infrastructure management — intelligence in service of human judgment, not in place of it.

Maintaining a State of Well-Being

Infrastructure has a natural tendency toward entropy. Configurations drift. Rules accumulate. Backups go untested. Capacity creeps toward limits. Small issues compound into outages.

The AI plugin's primary mission is to maintain a state of well-being across the customer's infrastructure investment: counteracting that drift before it compounds.

This is not artificial intelligence replacing human intelligence. It is artificial intelligence preserving human intelligence — capturing what the team knows, applying it consistently, and alerting when attention is needed.

The Goal

The customer's infrastructure should be healthier at the end of every month than it was at the beginning. Not because of heroic intervention, but because the platform is quietly, continuously maintaining a state of well-being — finding drift, flagging risks, preserving knowledge, and keeping the operator informed.

© 2026 Diwan Enterprise Consulting LLC (DEC-LLC). All rights reserved.
For more information, contact info@decllc.biz or visit dec-llc.biz.