NetOrca Pack
NetOrca Pack is an AI-powered automation engine within NetOrca that revolutionizes network configuration management. By leveraging Large Language Models (LLMs), NetOrca Pack eliminates the need for manual scripting and complex configuration management, transforming high-level service requirements into precise network infrastructure configurations.
This intelligent system bridges the gap between service intent and network implementation, automatically generating configuration data for targets such as F5 AS3, Palo Alto firewalls, Terraform infrastructure-as-code, and other network platforms.
How It Works
NetOrca Pack uses AI Processors — specialized AI agents that perform specific tasks across the platform. Depending on its type, an AI Processor can generate configurations, validate change requests, verify outputs, optimize prompts, or render structured UI responses.
Pack Pipeline
The core workflow is the Pack Pipeline, which chains up to three sequential stages:
- CONFIG — The AI analyzes service item declarations and automatically generates infrastructure configurations.
- VERIFY (optional) — The generated configurations are sent to the same or a different LLM for review, checking for conflicts, security issues, or syntax errors.
- EXECUTION (optional) — Both configuration and verification data are sent to an LLM for final testing and deployment readiness assessment.
Each stage produces PackData — the stored output that can be pushed to infrastructure or consumed by the next stage.
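Conceptually, the stage chaining above can be sketched in a few lines of Python. This is an illustrative model only — the `PackData` structure and stage functions here are stand-ins, and in NetOrca each stage would prompt an LLM rather than run local code:

```python
from dataclasses import dataclass

@dataclass
class PackData:
    """Stored output of one pipeline stage (structure is illustrative)."""
    stage: str
    payload: dict

def run_pipeline(declaration, stages):
    """Run CONFIG, then any optional stages; each stage receives the
    declaration plus all PackData produced by earlier stages."""
    results = []
    for name, stage_fn in stages:
        results.append(PackData(stage=name, payload=stage_fn(declaration, results)))
    return results

# Stand-in stage functions; a real pipeline would call an LLM here.
def config(decl, prior):
    return {"config": f"generated for {decl['service']}"}

def verify(decl, prior):
    # VERIFY sees the CONFIG stage's PackData and reviews it.
    return {"ok": True, "reviewed": prior[0].payload["config"]}

pack = run_pipeline({"service": "lb-vip-1"}, [("CONFIG", config), ("VERIFY", verify)])
print([p.stage for p in pack])  # ['CONFIG', 'VERIFY']
```

The key point the sketch captures is that later stages consume the PackData of earlier ones, which is why VERIFY and EXECUTION can reason about the generated configuration rather than regenerating it.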
Change Instance Validation
Before configurations are even generated, the Change Instance Validator can automatically evaluate whether a consumer's declaration should be approved or rejected — enforcing business logic, policy constraints, or cross-field dependencies that go beyond JSON Schema validation.
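To make the "beyond JSON Schema" point concrete, here is a minimal sketch of the kind of cross-field and policy checks such a validator could enforce. The rules themselves are invented for illustration, not NetOrca defaults:

```python
def validate_change(declaration):
    """Illustrative business-logic checks a Change Instance Validator
    might apply; JSON Schema alone cannot express these cross-field rules."""
    errors = []
    # Cross-field dependency: TLS listeners must carry a certificate.
    if declaration.get("protocol") == "https" and not declaration.get("certificate"):
        errors.append("https listeners require a certificate")
    # Policy constraint: only registered (non-privileged) ports allowed.
    if not 1024 <= declaration.get("port", 0) <= 49151:
        errors.append("port must be in the registered range 1024-49151")
    return ("APPROVED", []) if not errors else ("REJECTED", errors)

print(validate_change({"protocol": "https", "port": 443}))
```

A schema can say "port is an integer", but only logic like this can say "https implies a certificate", which is the gap the Change Instance Validator fills.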
Self-Healing and Retrigger
NetOrca Pack features a self-healing mechanism with retrigger capability. If configurations cannot be deployed successfully, the system can be retriggered to rerun all stages and attempt to fix the problematic configuration. Service Owners can provide feedback comments to guide the AI toward improved results in subsequent iterations.
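The retrigger loop can be pictured as follows. This is a hand-rolled sketch of the pattern, not NetOrca's implementation; the stage and deploy functions are stand-ins showing how failure feedback steers the next attempt:

```python
def retrigger(run_stage, declaration, deploy, max_attempts=3):
    """Sketch of a self-healing loop: rerun the stage, feeding the
    previous deployment failure back as a feedback comment."""
    feedback = None
    for _ in range(max_attempts):
        pack_data = run_stage(declaration, feedback)
        deployed, feedback = deploy(pack_data)
        if deployed:
            return pack_data
    raise RuntimeError(f"still failing after {max_attempts} attempts: {feedback}")

# Stand-ins: the first attempt emits an invalid MTU; the feedback fixes it.
def run_stage(decl, feedback):
    return {"mtu": 1500 if feedback else 1600}

def deploy(pack_data):
    ok = pack_data["mtu"] <= 1500
    return ok, None if ok else "MTU 1600 exceeds path MTU; retry with 1500"

print(retrigger(run_stage, {"service": "uplink-1"}, deploy))  # {'mtu': 1500}
```

In NetOrca the feedback slot is filled by Service Owner comments, giving the LLM concrete guidance instead of a blind retry.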
Getting Started
1. Configure LLM Models (Admin)
A NetOrca administrator must first create and configure the LLM Models that will power AI integration across NetOrca. This includes setting up the base system prompt and AI model parameters. LLM Models serve as connectors that integrate NetOrca Pack with various Large Language Models, allowing organizations to use their own LLMs with custom gateway integrations.
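An LLM Model definition might look like the following. Every field name here is an assumption for illustration — consult the NetOrca admin documentation for the actual schema:

```python
# Illustrative LLM Model definition; field names are assumptions,
# not NetOrca's actual schema.
llm_model = {
    "name": "corp-gpt4-gateway",
    "base_system_prompt": (
        "You are a network configuration generator. "
        "Emit only valid JSON matching the requested schema."
    ),
    # Custom gateway integration: point at your organization's own LLM.
    "endpoint": "https://llm-gateway.example.internal/v1/chat",
    "parameters": {"temperature": 0.1, "max_tokens": 4096},
}
```

A low temperature is typical for configuration generation, where deterministic, schema-conformant output matters more than creativity.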
2. Create AI Processors (Service Owner)
Once LLM Models are available, Service Owners create AI Processors for their services. For the Pack Pipeline, this means setting up a CONFIG processor (required) and optionally VERIFY and EXECUTION processors. For each AI Processor, Service Owners configure:
- A Prompt — instructions that guide the LLM's behavior for that stage
- A Response Schema (optional) — the expected JSON structure of the AI's output
- Documents (optional) — additional knowledge and context such as compliance requirements, vendor guides, or naming conventions
- Generative UI (optional) — rich UI rendering of the AI's response
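Putting those four pieces together, a CONFIG processor definition could be sketched like this. Field names and values are illustrative assumptions, not NetOrca's real schema:

```python
# Illustrative CONFIG AI Processor; field names are assumptions.
config_processor = {
    "stage": "CONFIG",
    "llm_model": "corp-gpt4-gateway",
    # Prompt: instructions guiding the LLM for this stage.
    "prompt": "Generate an F5 AS3 declaration for the service item declaration below.",
    # Response Schema: expected JSON structure of the AI's output.
    "response_schema": {
        "type": "object",
        "properties": {"as3_declaration": {"type": "object"}},
        "required": ["as3_declaration"],
    },
    # Documents: extra context such as compliance or naming guides.
    "documents": ["naming-conventions.md", "f5-as3-vendor-guide.pdf"],
}
```

The optional Response Schema is what lets the pipeline treat the LLM's answer as structured PackData rather than free text.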
3. Trigger the Pipeline
The pipeline can be triggered automatically when a Change Instance is approved, manually via API, or by pushing PackData from external sources. After the pipeline finishes, the PackData generated at each stage can be pushed to infrastructure and viewed on the Service Item detail page.
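A manual API trigger could be assembled along these lines. The route, header scheme, and payload shape are hypothetical placeholders — check the NetOrca API reference for the real endpoint and authentication:

```python
def build_trigger_request(base_url, token, service_item_id):
    """Build an HTTP request for a manual pipeline trigger.
    The route and payload shape are illustrative assumptions."""
    return {
        "method": "POST",
        "url": f"{base_url}/v1/pack/pipelines/trigger",  # hypothetical route
        "headers": {"Authorization": f"Api-Key {token}"},
        "json": {"service_item_id": service_item_id},
    }

req = build_trigger_request("https://netorca.example.com/api", "TOKEN", 42)
# send with e.g.: requests.request(**req)
```

Automatic triggering on Change Instance approval needs no such call; this path is only for manual reruns or external PackData pushes.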