AI Validators

AI Validators are the specialized agents within NetOrca Pack, defined by Service Owners, that validate the contents of an AI Processor's pipeline.

Understanding AI Validators

Like an AI Processor, an AI Validator is created for a specific service with a chosen LLM model and prompt. The difference is that an AI Validator acts as an intermediary stage between service config generation and the processing of that service's change instance state.

Setting Up AI Validators

To create our first AI Validator (from the Service Owner's point of view), go to the AI Validators tab at the top of the AI Integration page and click Create AI Validator. Each service can have one AI Validator; within it, we can create multiple test cases, enable auto-approval and auto-rejection, and choose whether to send service information and existing service items along with the validation request.

POST /v1/external/serviceowner/ai_validators/ HTTP/1.1
Content-Type: application/json
Authorization: Token <YOUR_TOKEN>
{
  "name": "Awesome Validator",
  "service": <service_id>>,
  "llm_model": <llm_model_id>,
  "prompt": "Validate (some context) for all items of this service.",
  "allow_auto_approval": true | false,
  "allow_auto_rejection": true | false,
  "send_service_info": true | false,
  "send_existing_service_items": true | false,
}
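
The same request can also be made from a script. Below is a minimal sketch using Python and the requests library; the base URL, token, IDs, and flag values are placeholders that you would substitute for your own NetOrca instance.

import requests

# Placeholders: substitute your own NetOrca URL, API token, service ID and LLM model ID.
NETORCA_URL = "https://netorca.example.com"
TOKEN = "<YOUR_TOKEN>"

payload = {
    "name": "Awesome Validator",
    "service": 42,          # <service_id>
    "llm_model": 7,         # <llm_model_id>
    "prompt": "Validate (some context) for all items of this service.",
    "allow_auto_approval": False,
    "allow_auto_rejection": False,
    "send_service_info": True,
    "send_existing_service_items": True,
}

response = requests.post(
    f"{NETORCA_URL}/v1/external/serviceowner/ai_validators/",
    json=payload,
    headers={"Authorization": f"Token {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())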

Prompt

Define the prompt that customizes the LLM's behavior for this specific validation stage. The prompt acts as a wrapper around the consumer's intent, telling the LLM what to check in each service item declaration before the change instance is processed.
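
For example (the service and rules below are purely illustrative), a prompt for a hypothetical DNS record service might look like:

Reject any service item whose hostname does not end in .example.com. Flag any TTL below 300 for manual review; approve everything else.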

Final Prompt

Here is the final prompt that the AI Validator will send to the LLM model:

{
  "netorca_prompt": "<system prompt provided by LLM Model>",
  "serviceowner_prompt": "<AI Validator Prompt provided by the Service Owner>",
  "declaration": "<Service Item Declaration>",
  "service_name": "<Service Name>",
  "service_schema": "<Service Schema>",
  "description": "<Description of the Service>",
}
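
For illustration, here is how those fields might be populated for a single service item, shown as a Python dictionary. Every value below is hypothetical and continues the DNS record example above; the real values are assembled by NetOrca from the LLM Model, the AI Validator, and the service in question.

# All values below are hypothetical and for illustration only.
final_prompt = {
    "netorca_prompt": "You are validating service item declarations ...",  # system prompt from the LLM Model
    "serviceowner_prompt": "Reject any service item whose hostname does not end in .example.com.",  # AI Validator prompt
    "declaration": '{"hostname": "app1.example.com", "ttl": 300}',  # the consumer's Service Item Declaration
    "service_name": "dns_record",
    "service_schema": '{"type": "object", "properties": {"hostname": {"type": "string"}, "ttl": {"type": "integer"}}}',
    "description": "Managed DNS records for application teams.",
}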