NetOrca Pack

NetOrca Pack is an AI-powered automation engine within NetOrca that streamlines network configuration management. By leveraging Large Language Models (LLMs), NetOrca Pack removes the need for manual scripting and complex configuration tooling, transforming high-level service requirements into precise network infrastructure configurations.

This intelligent system bridges the gap between service intent and network implementation, automatically generating configuration data for platforms such as F5 AS3, Palo Alto firewalls, Terraform infrastructure-as-code, and other network devices.

How It Works

NetOrca Pack follows a pipeline that transforms service declarations into production-ready network configurations:

Pipeline

NetOrca consumers begin by submitting Service Item Declarations for specific services. These declarations contain the high-level requirements, policies, and specifications that define what the service needs from the network infrastructure.

  1. NetOrca Pack captures these Service Item Declarations and sends them to configured LLM models. The AI analyzes the service requirements and automatically generates appropriate infrastructure configurations.
  2. For enhanced reliability, NetOrca Pack includes an optional verification stage in which the generated configuration is sent back to the same or a different LLM model for review, providing confidence in the generated output before deployment. Depending on the prompt given by the Service Owner, this verification can serve any purpose, such as checking for potential conflicts or security issues and validating configuration syntax and structure.
  3. As a final optional stage, NetOrca Pack can send both the configuration and the verification data to the same or a different LLM model for final testing and verification. This RESULTS stage provides an additional layer of assurance about the quality of the generated configuration and its readiness for deployment.
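
The staged flow above can be sketched as a simple chain, where each stage feeds its output into the context of the next. This is an illustrative sketch only: the `Stage` class, `run_pipeline` function, and stage wiring are assumptions for explanation, not NetOrca Pack's actual internals.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Stage:
    """One pipeline stage: a Service Owner prompt bound to an LLM connector."""
    name: str                   # "CONFIG", "VERIFY", or "RESULTS"
    prompt: str                 # stage-specific prompt
    llm: Callable[[str], str]   # configured LLM model (hypothetical connector)

def run_pipeline(declaration: str, stages: Dict[str, Stage]) -> Dict[str, str]:
    """Run CONFIG -> VERIFY -> RESULTS in order, skipping unconfigured stages.

    Each stage receives the original declaration plus all earlier outputs,
    mirroring how VERIFY reviews the generated config and RESULTS sees both.
    """
    outputs: Dict[str, str] = {}
    context = declaration
    for name in ("CONFIG", "VERIFY", "RESULTS"):
        stage = stages.get(name)
        if stage is None:       # VERIFY and RESULTS are optional
            continue
        outputs[name] = stage.llm(f"{stage.prompt}\n\n{context}")
        context = f"{context}\n\n{name} output:\n{outputs[name]}"
    return outputs
```

A Service Owner who configures only a CONFIG stage would simply get a single-entry result, while adding VERIFY and RESULTS stages extends the chain without changing the calling code.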

Self Healing and Retrigger

NetOrca Pack features a self-healing mechanism with retrigger capability. If tests fail or configurations cannot be deployed successfully, the system can be retriggered to rerun all stages and attempt to fix the problematic configuration. This process can incorporate Service Owner Feedback to provide additional context about what went wrong, allowing the AI to learn from failures and generate improved configurations in subsequent iterations.
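
The retrigger loop described above can be sketched as a bounded retry that folds failure feedback back into the next attempt. All names, the retry limit, and the feedback mechanism here are assumptions for illustration, not NetOrca internals.

```python
from typing import Callable, Optional

def retrigger(
    generate: Callable[[str], str],    # reruns all pipeline stages
    deploy_ok: Callable[[str], bool],  # True if deployment/tests succeed
    declaration: str,
    feedback: Callable[[str], Optional[str]] = lambda cfg: None,
    max_attempts: int = 3,
) -> Optional[str]:
    """Rerun the pipeline until the config deploys, enriching the context
    with Service Owner Feedback about what went wrong on each failure."""
    context = declaration
    for _ in range(max_attempts):
        config = generate(context)
        if deploy_ok(config):
            return config
        note = feedback(config)        # e.g. "pool name conflicts with prod"
        if note:
            context = f"{declaration}\n\nPrevious attempt failed: {note}"
    return None  # give up after max_attempts; manual intervention needed
```

The key design point is that the feedback is appended to the original declaration rather than to the failed config, so each retry starts from the true service intent plus what is now known about the failure.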

Getting Started

First, a NetOrca admin must create and configure the LLM Models that power AI integration across NetOrca. This includes setting up the base system prompt, which acts as a wrapper, and the model-specific parameters. LLM Models serve as connectors that integrate NetOrca Pack with various Large Language Models, allowing organizations to configure their own LLMs with their custom gateway integrations.
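
As a rough picture of what such an LLM Model definition might hold, the sketch below uses entirely hypothetical field names; the real schema is defined by NetOrca and may differ.

```python
# Hypothetical shape of an LLM Model definition (illustration only).
llm_model = {
    "name": "org-gpt4-gateway",
    # Base system prompt: wraps every request routed through this model
    "base_system_prompt": (
        "You are a network configuration assistant. "
        "Return only valid configuration data for the requested platform."
    ),
    # Model-specific parameters forwarded to the organization's gateway
    "parameters": {"temperature": 0.0, "max_tokens": 4096},
    # Custom gateway integration endpoint (example URL)
    "gateway_url": "https://llm-gateway.example.com/v1/chat",
}
```

A low temperature is a sensible default here, since configuration generation rewards deterministic, repeatable output over creative variation.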

Once LLM Models are available, Service Owners can create up to three optional AI Processors (AI Agents) per service, one for each processing stage: a CONFIG, a VERIFY, and a RESULTS Processor. For each AI Processor, Service Owners must choose the related service and a stage-specific prompt, and may optionally define a response schema that matches their requirements.
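
To make the processor pieces concrete, here is a hypothetical VERIFY Processor definition with a JSON-Schema-style response schema; the field names and schema format are assumptions for illustration, not NetOrca's actual data model.

```python
# Hypothetical VERIFY Processor for a load-balancer service (illustration only).
verify_processor = {
    "service": "load-balancer",  # the related service
    "stage": "VERIFY",
    "prompt": (
        "Review the generated F5 AS3 declaration for syntax errors, "
        "address conflicts, and insecure settings."
    ),
    # Optional response schema constraining the shape of the LLM's answer
    "response_schema": {
        "type": "object",
        "properties": {
            "approved": {"type": "boolean"},
            "issues": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["approved", "issues"],
    },
}
```

Constraining the response to a schema like this makes the verification output machine-checkable, so a failed `approved` flag can feed directly into the retrigger mechanism described earlier.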

When the processors are configured, the pipeline can be triggered either automatically when a Change Instance of the Service Item is approved, or manually via an API endpoint. After the pipeline finishes successfully, the PackData generated at each stage can be pushed to the infrastructure and viewed on the Service Item detail page.
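
For the manual path, a trigger call would look roughly like the sketch below. The endpoint path, payload, and authorization header are hypothetical; consult the NetOrca API documentation for the real route. The helper only builds the request, so it can be inspected without touching the network.

```python
import json
import urllib.request

def build_trigger_request(base_url: str, service_item_id: int, token: str):
    """Build (but do not send) a POST request to trigger the pipeline.

    The path and auth scheme below are illustrative placeholders.
    """
    url = f"{base_url}/v1/pack/service_items/{service_item_id}/trigger/"
    data = json.dumps({"service_item": service_item_id}).encode()
    return urllib.request.Request(
        url,
        data=data,
        method="POST",
        headers={
            "Authorization": f"Api-Key {token}",  # placeholder auth scheme
            "Content-Type": "application/json",
        },
    )
```

Passing the built request to `urllib.request.urlopen` (or swapping in an HTTP client of choice) would then fire the manual retrigger.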