AI Processors
AI Processors are the specialized agents within NetOrca Pack, defined by Service Owners, that execute specific stages of the automated configuration generation workflow. Each AI Processor is configured for a specific service and processing stage, i.e., CONFIG, VERIFY, or EXECUTION. By combining multiple AI Processors, service owners can build their own multi-stage pipelines that generate, validate, and finalize network configurations with minimal human intervention.
Understanding AI Processor Types
NetOrca Pack operates through three distinct processing stages, each serving a specific purpose in the configuration generation lifecycle:
CONFIG
The CONFIG stage generates initial, implementation-ready configurations from service item declarations, such as JSON configuration files for network devices, infrastructure-as-code templates, policy definitions, and rule sets. This is where the primary transformation occurs: converting high-level service requirements into vendor-specific configuration files.
VERIFY
The VERIFY stage validates and reviews generated configurations. It can produce validation reports, check for potential issues, run security assessments, and recommend configuration improvements. This optional stage provides quality assurance before configurations are applied to production infrastructure.
EXECUTION
The EXECUTION stage performs the final execution, testing, quality assessment, and preparation of configurations for deployment. This optional stage provides the final layer of confidence before production deployment.
Setting Up AI Processors
AI Processors are configured by Service Owners after LLM Models have been set up by administrators. Each service can have one AI Processor for each stage. To set up an AI Processor:
POST /v1/external/serviceowner/ai_processors/ HTTP/1.1
Content-Type: application/json
Authorization: Token <YOUR_TOKEN>
{
  "name": "Awesome Agent",
  "service": <service_id>,
  "llm_model": <llm_model_id>,
  "action_type": "<config|verify|execution>",
  "prompt": "Generate an AS3 config for this service item.",
  "response_schema": {
    "type": "object",
    "title": "AS3Config",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "required": [
      "declaration"
    ],
    "properties": {
      "declaration": {
        "type": "object"
      }
    }
  }
}
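As a sketch, the same request can be issued from Python using only the standard library. The host, token, and numeric IDs below are placeholders; substitute your own values:

```python
import json
import urllib.request

# Placeholder values -- replace with your NetOrca host, API token,
# service_id, and llm_model_id.
NETORCA_URL = "https://netorca.example.com"
TOKEN = "YOUR_TOKEN"

payload = {
    "name": "Awesome Agent",
    "service": 42,        # placeholder service_id
    "llm_model": 7,       # placeholder llm_model_id
    "action_type": "config",
    "prompt": "Generate an AS3 config for this service item.",
    "response_schema": {
        "type": "object",
        "title": "AS3Config",
        "$schema": "http://json-schema.org/draft-07/schema#",
        "required": ["declaration"],
        "properties": {"declaration": {"type": "object"}},
    },
}

# Build the POST request with the token-based Authorization header.
req = urllib.request.Request(
    f"{NETORCA_URL}/v1/external/serviceowner/ai_processors/",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Token {TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the request.
```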

Response Schema
Define the expected structure of the AI's response using JSON Schema format. This ensures consistent, parseable outputs that can be processed by the infrastructure APIs.
JSON Schema Support Considerations
Important: Not all LLM models natively support structured JSON schema response formatting. NetOrca Pack handles this limitation through an automatic fallback mechanism:
- Models with JSON schema support: The response schema is passed as a structured parameter to the LLM, ensuring strict adherence to the defined format
- Models without JSON schema support: The response schema is automatically converted to text instructions and included in the prompt itself
Fallback Behavior
When an LLM model lacks native JSON schema support, NetOrca Pack will:
- Convert the schema to prompt instructions: The JSON schema is transformed into natural language instructions that describe the expected response format
- Append to the main prompt: These format instructions are added to your AI Processor prompt
- Rely on model training: The LLM must follow the format instructions based on its training rather than enforced constraints
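The conversion NetOrca Pack performs is internal; the idea can be sketched as follows, where the helper name and the wording of the generated instructions are purely illustrative, not the actual implementation:

```python
def schema_to_instructions(schema: dict) -> str:
    """Illustrative sketch of the fallback: render a JSON schema as
    plain-text format instructions that get appended to the prompt."""
    lines = ["Respond with a single JSON object matching this structure:"]
    for name, spec in schema.get("properties", {}).items():
        required = "required" if name in schema.get("required", []) else "optional"
        lines.append(f'- "{name}" ({spec.get("type", "any")}, {required})')
    lines.append("Return only valid JSON, with no surrounding prose.")
    return "\n".join(lines)

schema = {
    "type": "object",
    "required": ["declaration"],
    "properties": {"declaration": {"type": "object"}},
}
instructions = schema_to_instructions(schema)
```

The resulting text would describe each expected field and its type, relying on the model's instruction-following rather than enforced constraints.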
Implications for Service Owners
- Reliability: Models without schema support may occasionally produce responses that don't perfectly match the expected format
- Error handling: Downstream systems should include robust validation and error handling for malformed responses
- Model selection: For critical configurations requiring strict format compliance, prefer LLM models with native JSON schema support
- Prompt design: When using models without schema support, consider including additional format reminders in your custom prompts
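A minimal sketch of such downstream validation is shown below; the helper is illustrative, and a production system would typically use a full JSON Schema validator such as the `jsonschema` package:

```python
import json

def validate_response(raw: str, schema: dict):
    """Guard against malformed LLM output: parse the raw response and
    check required top-level keys. Returns (data, error)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"not valid JSON: {exc}"
    if not isinstance(data, dict):
        return None, "expected a JSON object"
    missing = [k for k in schema.get("required", []) if k not in data]
    if missing:
        return None, f"missing required keys: {missing}"
    return data, None

required_schema = {"required": ["declaration"]}
ok, err = validate_response('{"declaration": {}}', required_schema)
bad, bad_err = validate_response("not json at all", required_schema)
```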
Prompt
Define the prompt that customizes the LLM's behavior for this specific processor and stage. This prompt acts as a wrapper around the consumer's intent.
CONFIG Prompt Examples:
Generate a complete F5 AS3 configuration for the following service requirements.
Include virtual server, pool, SSL profile, and health monitor configurations.
Ensure all configurations follow F5 best practices and include proper error handling.
Return only valid JSON in AS3 format.
VERIFY Prompt Examples:
Review the following F5 AS3 configuration for security vulnerabilities,
best practice compliance, and potential conflicts. Provide a detailed
validation report with pass/fail status and specific recommendations
for any identified issues.
EXECUTION Prompt Examples:
Perform final quality assessment of the configuration and verification execution.
Generate a deployment-ready package with documentation, rollback procedures,
and final validation summary. Confirm readiness for production deployment.
Final Prompt
Here is the final prompt payload that the AI Processor sends to the LLM model:
{
  "netorca_prompt": "<system prompt provided by the LLM Model>",
  "serviceowner_prompt": "<AI Processor prompt provided by the Service Owner>",
  "service_item": { ... service item declaration ... },
  "serviceowner_comment" (only in retrigger mode): "optional comment from the Service Owner",
  "config" (only in the verify stage or retrigger mode): { ... Pack Data generated by the CONFIG AI Processor ... },
  "verify" (only in the execution stage): { ... Pack Data generated by the VERIFY AI Processor ... },
  "execution" (only in retrigger mode): { ... Pack Data generated by the EXECUTION AI Processor ... }
}
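A sketch of how such a payload could be assembled is shown below. The field names follow the documented structure, but the helper itself is illustrative and not part of the NetOrca API:

```python
def build_final_prompt(netorca_prompt, serviceowner_prompt, service_item,
                       config=None, verify=None, execution=None, comment=None):
    """Assemble the final prompt payload, adding the optional fields
    only when the stage or retrigger mode supplies them."""
    payload = {
        "netorca_prompt": netorca_prompt,
        "serviceowner_prompt": serviceowner_prompt,
        "service_item": service_item,
    }
    if comment is not None:      # retrigger mode only
        payload["serviceowner_comment"] = comment
    if config is not None:       # verify stage or retrigger mode
        payload["config"] = config
    if verify is not None:       # execution stage only
        payload["verify"] = verify
    if execution is not None:    # retrigger mode only
        payload["execution"] = execution
    return payload

# Example: a VERIFY-stage payload carries the CONFIG stage's output.
final = build_final_prompt(
    "You are a network configuration assistant.",   # placeholder system prompt
    "Review this AS3 config for security issues.",  # placeholder SO prompt
    {"vip": "10.0.0.1", "port": 443},               # placeholder declaration
    config={"declaration": {}},
)
```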
Document
The Service Owner can create a new document in NetOrca to provide extra context or knowledge for the LLMs. Every document is linked to a specific Service, and all AI Processors associated with that service can access it.
This feature allows you to provide supporting materials that improve AI Processor output. For example, it can ensure that generated configurations align with compliance requirements and regulations, or supply specific user guides that general-purpose LLMs may not have access to.
Service Owners can upload multiple Documents. However, since including all documents in every AI interaction can sometimes reduce output quality, exceed the LLM's token limit, or incur high costs, NetOrca allows you to selectively enable documents only when they are needed.
Example document
```markdown
Create comprehensive technical documentation for [Service] that follows this structure:
1. **Introduction**: Start with a clear definition of what [Service] is and its primary purpose within the NetOrca ecosystem.
2. **Key Concepts**: Explain the main components or types, using subsections (###) for each distinct element. Include practical use cases for each.
3. **Setup/Configuration**: Provide step-by-step instructions with:
- HTTP API examples using proper request/response format
- Code blocks with realistic parameter values
- GUI screenshots or references where applicable
- Use tabbed sections: === "HTTP request" and === "GUI"
4. **Technical Details**: Include any important technical considerations, limitations, or fallback behaviors that users should understand.
5. **Examples**: Provide concrete, realistic examples for different scenarios or use cases. Use proper code formatting and include explanatory text.
6. **Advanced Features**: Document any additional settings or configurations, explaining when and why users might need them.
Format requirements:
- Use clear, descriptive headings (#, ##, ###)
- Include code blocks with proper syntax highlighting
- Add practical examples with real-world context
- Explain both the "what" and the "why" for each feature
- Keep explanations concise but comprehensive
- Use bullet points for lists and features
- Include any relevant API endpoints with full request examples
Target audience: Service Owners and technical users who need to implement and configure this feature in production environments.
```
Extra Data Settings
In some situations, the Service Owner creating the AI Processor may need extra context to improve the efficiency and accuracy of its output. For that reason, we've included a few extra data settings that can be configured during the creation of the AI Processor.
Include Change Instance
When enabled, this setting includes detailed information about the latest change instance in the payload sent to your AI processor. The change instance contains metadata about the specific change request being processed, including the change state, associated submission details, and log. This is useful when your AI processor needs to understand the context of the change being made - for example, to make different processing decisions based on whether it's a new deployment, an update, or a rollback operation.
Include Previous Declaration
When enabled, this setting adds the previous approved declaration for the service item to the payload, allowing your AI processor to compare the current state with the previous configuration. This enables powerful use cases such as generating incremental or differential configurations, understanding exactly what changes are being made, performing rollback operations, or providing change impact analysis. The previous declaration data is included under the previous_declaration key and represents the last approved state before the current change.
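For example, tooling around an AI Processor could compute a shallow diff between the previous and current declarations to drive incremental configuration. This helper is purely illustrative:

```python
def declaration_diff(previous: dict, current: dict) -> dict:
    """Shallow comparison of two service item declarations, grouping
    keys into added, removed, and changed (old, new) entries."""
    added = {k: current[k] for k in current if k not in previous}
    removed = {k: previous[k] for k in previous if k not in current}
    changed = {k: (previous[k], current[k])
               for k in current if k in previous and previous[k] != current[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Placeholder declarations: the port changed and an SSL profile was added.
prev = {"port": 80, "pool": ["10.0.0.1"]}
curr = {"port": 443, "pool": ["10.0.0.1"], "ssl_profile": "default"}
diff = declaration_diff(prev, curr)
```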
