Intelligence that understands your firewall rules, your network topology, your VMs, and your backups — running locally, never leaving your network.
Large language models are powerful. They can write code, summarize documents, and answer questions. But ask a general-purpose AI to analyze your firewall rules and it will give you generic advice. Ask it to audit your network topology and it will suggest best practices it read in a textbook. It doesn't understand your infrastructure.
Worse, most AI services require sending your configuration data — firewall rules, network maps, credentials, security policies — to a third-party cloud. For organizations in healthcare, finance, government, and critical infrastructure, this is a non-starter.
Infrastructure AI should understand your domain, run under your control, and never send your data anywhere.
Every DEC-LLC product ships with an AI plugin that shares a common inference engine but carries domain-specific knowledge unique to its role. The AI doesn't just process text — it understands the structure, semantics, and operational patterns of the infrastructure it manages.
| Product | AI Domain | What It Understands |
|---|---|---|
| OpenUTM | Security | Firewall rule ordering, shadowed rules, nftables patterns, threat signatures, VPN tunnel health, IPS alerts, DNS filtering patterns. Covers on-prem, cloud, and hybrid perimeters. |
| NIVMIA | Networking | Cisco/Juniper/Arista config syntax, SNMP MIB semantics, interface health, VLAN/BGP/OSPF validation, broadcast storm detection, QoS capacity modeling, config standardization patterns, fleet-wide change impact analysis. Includes third-party vendor equipment — ISP routers, MSP firewalls, carrier CPE, SD-WAN appliances. |
| IVMIA | Compute | VM placement, resource contention and redistribution, hardware compatibility validation, migration policies, storage performance, capacity forecasting — plus workstation fleet health, endpoint compliance, physical device inventory across Windows, macOS, Linux, POS terminals, and field devices. |
| VaultSync | Data Protection | Backup chain integrity, deduplication ratios, replication lag, retention compliance, capacity forecasting. Includes mobile device backup analysis and business data extraction from backup sets. |
The same engine powers all four products. The difference is what each one knows about its domain — embedded through system prompts, domain schemas, specialized analyzers, and product-specific knowledge bases.
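The shared-engine, per-domain-knowledge split can be sketched in a few lines: one plugin object carries a product-specific system prompt, deterministic analyzers, and a knowledge base, while the inference engine stays common. Everything below (the `DomainPlugin` class, the toy shadowed-rule check) is a hypothetical illustration, not DEC-LLC's actual plugin API.

```python
from dataclasses import dataclass, field

@dataclass
class DomainPlugin:
    """Hypothetical sketch: shared engine, product-specific knowledge."""
    product: str
    system_prompt: str                                   # domain framing for the shared LLM
    analyzers: list = field(default_factory=list)        # deterministic domain checks
    knowledge_base: dict = field(default_factory=dict)   # product-specific facts

    def analyze(self, config_text: str) -> list:
        # Run every deterministic analyzer; findings would then be handed to the LLM
        findings = []
        for analyzer in self.analyzers:
            findings.extend(analyzer(config_text))
        return findings

def shadow_check(config: str) -> list:
    """Toy OpenUTM-style analyzer: flag rules shadowed by an earlier catch-all."""
    issues, catch_all_seen = [], False
    for line in config.splitlines():
        if catch_all_seen:
            issues.append(f"shadowed rule: {line.strip()}")
        if "any any" in line:
            catch_all_seen = True
    return issues

utm = DomainPlugin("OpenUTM", "You are a firewall analyst.", analyzers=[shadow_check])
print(utm.analyze("rule 1: accept any any\nrule 2: accept tcp 10.0.0.0/8 port 443"))
```

Swapping the prompt, analyzers, and knowledge base yields a NIVMIA, IVMIA, or VaultSync plugin on the same engine.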
The AI plugin operates in two phases, matching the trust level between the operator and the system.
**Phase 1: Advisory.** Read-only analysis of your infrastructure. The AI examines your configuration, identifies issues, and provides actionable recommendations. It never modifies anything.

**Phase 2: Managed.** The AI can propose and execute configuration changes within strict guardrails. Every action requires explicit approval, or passes through a risk classifier in semi-autonomous operation.
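The approval gate for Phase 2 reduces to a simple decision: in fully supervised mode everything needs a human, and in semi-autonomous mode only actions the risk classifier rates as low-risk proceed on their own. The risk tiers and function below are an assumed sketch of that logic, not the product's actual classifier.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g. annotating a rule (hypothetical tier examples)
    MEDIUM = "medium"  # e.g. reordering rules
    HIGH = "high"      # e.g. deleting a rule or changing routing

def requires_approval(risk: Risk, semi_autonomous: bool) -> bool:
    """Supervised mode gates everything; semi-autonomous mode
    lets only low-risk actions through without a human."""
    if not semi_autonomous:
        return True
    return risk is not Risk.LOW

print(requires_approval(Risk.HIGH, semi_autonomous=True))   # True: a human must sign off
```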
The AI plugin is not a replacement for the operator. It is a force multiplier that helps the operator make better decisions, catch mistakes before they happen, and maintain a state of well-being across the infrastructure.
Each product's AI runs periodic health checks against its domain knowledge.
These checks run under the customer's control, using the customer's own data. The results feed into the platform's notification system — the operator gets an alert when something needs attention, along with a specific recommendation from the AI.
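The check-then-notify loop described above is small enough to sketch: run each domain check, and raise an alert with the AI's recommendation only when something fails. The check names and return convention here are assumptions for illustration.

```python
def run_health_checks(checks, notify):
    """Run each named check; alert the operator only when attention is needed."""
    for name, check in checks.items():
        ok, recommendation = check()
        if not ok:
            notify(f"[{name}] {recommendation}")

alerts = []
checks = {
    # Hypothetical checks standing in for the products' real analyzers
    "backup-chain": lambda: (False, "Chain broken at increment 14; run a full backup."),
    "replication-lag": lambda: (True, ""),
}
run_health_checks(checks, alerts.append)
print(alerts)  # only the failing check produces an alert
```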
Every organization has an engineer who knows why that firewall rule exists, why that VLAN is configured that way, or why backups run at 2 AM instead of midnight. When that engineer leaves, the knowledge leaves with them.
The AI plugin captures and retains this institutional knowledge. When the AI analyzes a rule and the operator adds context ("this rule exists because vendor X requires port 8443 for their health checks"), that context becomes part of the knowledge base. The next engineer who asks "why is this rule here?" gets the answer.
The customer's investment is not just in hardware and software. It is in the accumulated understanding of how their infrastructure works and why it is configured the way it is. The AI protects that investment by making institutional knowledge persistent and queryable.
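At its simplest, capturing institutional knowledge means attaching operator annotations to configuration objects and keeping them queryable. The class below is a minimal in-memory sketch of that idea (the real knowledge base, per the deployment table later in this document, is an encrypted local vector DB).

```python
from collections import defaultdict

class KnowledgeBase:
    """Sketch: keep 'why is this here?' answerable after the author leaves."""
    def __init__(self):
        self._notes = defaultdict(list)

    def annotate(self, object_id: str, author: str, note: str):
        self._notes[object_id].append({"author": author, "note": note})

    def why(self, object_id: str) -> list:
        return [n["note"] for n in self._notes[object_id]]

kb = KnowledgeBase()
kb.annotate("fw-rule-8443", "alice",
            "Vendor X requires port 8443 for their health checks.")
print(kb.why("fw-rule-8443"))
```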
The managed AI plugin (Phase 2) enforces hard guardrails at the engine level. These are not suggestions. They are compiled constraints that cannot be bypassed by prompt, configuration, or operator error.
| Guardrail | Enforcement |
|---|---|
| Draft-first changes | All AI modifications go to draft config. Apply is always an explicit, separate step. |
| Pre-apply validation | Every apply is preceded by automated validation. Invalid configs cannot be applied. |
| Transaction limit | Maximum 5 rule changes per transaction. Prevents runaway modifications. |
| Interface protection | Management and uplink interfaces cannot be disabled. The AI cannot lock you out. |
| Management access | SSH and management API ports cannot be blocked on the management interface. |
| Privilege isolation | Users, roles, and credentials are read-only to the AI. It cannot escalate its own privileges. |
| Licensing isolation | The licensing subsystem is completely isolated from AI operations. |
| Backup safety | Backup and recovery systems are read-only. The AI can analyze but never destroy. |
| Connectivity watchdog | Loss of management connectivity triggers automatic restoration within 60 seconds. |
| Append-only audit | Every AI action is logged to an append-only audit file. Tamper-evident by design. |
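In spirit, guardrail enforcement is a hard gate every draft transaction must pass before apply. The sketch below illustrates three of the rules (transaction limit, protected interfaces, read-only credentials); the transaction shape and interface names are assumptions, and the real constraints are compiled, not Python.

```python
PROTECTED_INTERFACES = {"mgmt0", "uplink0"}   # hypothetical interface names
MAX_RULE_CHANGES = 5                          # per-transaction limit from the guardrails

class GuardrailViolation(Exception):
    pass

def enforce(transaction: dict) -> None:
    """Reject any draft transaction that violates a hard guardrail."""
    if len(transaction.get("rule_changes", [])) > MAX_RULE_CHANGES:
        raise GuardrailViolation("more than 5 rule changes in one transaction")
    for iface in transaction.get("disable_interfaces", []):
        if iface in PROTECTED_INTERFACES:
            raise GuardrailViolation(f"cannot disable protected interface {iface}")
    if transaction.get("touches_credentials"):
        raise GuardrailViolation("users, roles, and credentials are read-only")

enforce({"rule_changes": ["r1", "r2"]})  # a compliant draft passes silently
```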
The AI engine runs entirely on the customer's hardware using local LLM inference. No API calls to OpenAI. No data sent to any cloud service. No telemetry. The model runs on the appliance itself or on a GPU-equipped server on the customer's network.
| Component | Where It Runs | What It Needs |
|---|---|---|
| LLM Inference | Local (Ollama) | GPU recommended (8+ GB VRAM), CPU fallback available |
| Knowledge Base | Local (vector DB) | Product-specific, encrypted with license-derived key |
| Analyzers | On the appliance | Python, no external dependencies |
| Guardrails | On the appliance | Compiled binary, cannot be modified at runtime |
For organizations that cannot have AI processing on the appliance itself (resource-constrained edge deployments), the inference engine can run on a dedicated GPU server on the same network. The data still never leaves the customer's environment.
We don't promise to keep your data private. We make it architecturally impossible for your data to leave. There is no API endpoint to call, no telemetry to disable, no opt-out to configure. The inference engine talks to localhost. That's it.
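Because the document names Ollama as the local inference runtime, the "talks to localhost" claim can be made concrete: Ollama listens on `localhost:11434` by default and exposes a `/api/generate` endpoint. The sketch below only builds the request (it does not send it, so no running Ollama is needed); the model name is a placeholder.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_inference_request(prompt: str, model: str = "llama3") -> request.Request:
    """Construct the local inference call. Note the host: localhost, nothing else."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(OLLAMA_URL, data=payload.encode(),
                           headers={"Content-Type": "application/json"})

req = build_inference_request("Summarize today's IPS alerts.")
print(req.host)  # the only place configuration data ever goes
```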
When multiple DEC-LLC products are deployed together, their AI plugins share context through the SDNS platform's internal communication bus, creating intelligence that no single product could achieve alone.
Each product's AI is an expert in its domain. Together, they form a team that covers the full infrastructure stack — security, networking, compute, and data protection — with correlated intelligence and coordinated response.
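Cross-product correlation over a bus follows the familiar publish/subscribe pattern: one plugin publishes a finding, and plugins in other domains react. The minimal bus below and the sample event are illustrative stand-ins for the SDNS internal bus, whose actual topics and payloads are not shown in this document.

```python
from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus standing in for the SDNS internal bus."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict):
        for handler in self._subs[topic]:
            handler(event)

bus = Bus()
correlated = []
# Hypothetical flow: NIVMIA's AI reacts when OpenUTM's AI flags a suspicious host
bus.subscribe("security.alert", lambda e: correlated.append(
    f"NIVMIA: checking switch port for host {e['host']}"))
bus.publish("security.alert", {"host": "10.0.4.17"})
print(correlated)
```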
The AI is a tool, not an authority. It analyzes, recommends, and (when authorized) executes. But the human operator is always the final authority, and the approval workflow enforces this at every level.
The operator can step in at any time, override any AI decision, and roll back any change. The AI assists the operator's judgment. It does not replace it.
The platform protects the operator from costly mistakes; the operator decides what happens. This is assisted infrastructure management — intelligence in service of human judgment, not in place of it.
Infrastructure has a natural tendency toward entropy. Configurations drift. Rules accumulate. Backups go untested. Capacity creeps toward limits. Small issues compound into outages.
The AI plugin's primary mission is to maintain a state of well-being across the customer's infrastructure investment: finding drift, flagging risks, preserving knowledge, and keeping the operator informed.
This is not artificial intelligence replacing human intelligence. It is artificial intelligence preserving human intelligence — capturing what the team knows, applying it consistently, and alerting when attention is needed.
The customer's infrastructure should be healthier at the end of every month than it was at the beginning. Not because of heroic intervention, but because the platform is quietly, continuously maintaining a state of well-being — finding drift, flagging risks, preserving knowledge, and keeping the operator informed.
© 2026 Diwan Enterprise Consulting LLC (DEC-LLC). All rights reserved.
For more information, contact info@decllc.biz or visit dec-llc.biz.