LLM Models

LLM Models serve as connectors that integrate NetOrca Pack with various Large Language Models, such as OpenAI, Google Gemini, and Grok, providing the intelligence behind NetOrca AI features. Organizations can also connect to their private LLMs through internal gateways or enterprise AI platforms, preserving data privacy and compliance with organizational policies.

Setting Up LLM Models

Each LLM Model in NetOrca represents a configured connection to a Large Language Model with specific parameters. Different providers require different configuration parameters, authentication methods, and connection settings to establish a successful connection. The currently supported provider types are shown below; to configure an LLM Model, send a request with the fields appropriate to your provider:

Provider Examples:

OpenAI:

POST /v1/external/llm_models/ HTTP/1.1
Content-Type: application/json
Authorization: Token <YOUR_TOKEN>

{
    "name": "MyAwesomeOpenAI",
    "prompt": "Do whatever service owner asks",
    "extra_data": {
        "model_type": "OpenAI",
        "OPENAI_API_KEY": "sk-...",
        "OPENAI_ASSISTANT_ID": "asst_..."
    }
}
GenericAPICall:

POST /v1/external/llm_models/ HTTP/1.1
Content-Type: application/json
Authorization: Token <YOUR_TOKEN>

{
    "name": "MyAwesomeLLMAPI",
    "prompt": "Do whatever service owner asks",
    "extra_data": {
        "model_type": "GenericAPICall",
        "base_url": "https://api.example.com",
        "authorization": "<auth header, e.g. 'Bearer AwesomeKey'>",
        "timeout": 30,
        "verify": true
    }
}
GenAI:

POST /v1/external/llm_models/ HTTP/1.1
Content-Type: application/json
Authorization: Token <YOUR_TOKEN>

{
    "name": "MyAwesomeGenAI",
    "prompt": "Do whatever service owner asks",
    "extra_data": {
        "model_type": "GenAI",
        "token_url": "https://auth.example.com",
        "request_url": "https://gateway.example.com",
        "credentials": "<auth header, e.g. 'Bearer AwesomeKey'>",
        "verify_ssl": false,
        "timeout": 30
    }
}
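As a sketch, the OpenAI example above could be sent from a script. The endpoint path, header format, and payload fields are taken from the examples; the base URL, token value, and the helper function names are illustrative assumptions, not part of the NetOrca API:

```python
import json
import urllib.request

NETORCA_URL = "https://netorca.example.com"  # assumed base URL for your NetOrca instance
API_TOKEN = "<YOUR_TOKEN>"                   # your NetOrca API token


def build_openai_model_payload(name, prompt, api_key, assistant_id):
    """Build the request body for an OpenAI-type LLM Model,
    mirroring the JSON example above."""
    return {
        "name": name,
        "prompt": prompt,
        "extra_data": {
            "model_type": "OpenAI",
            "OPENAI_API_KEY": api_key,
            "OPENAI_ASSISTANT_ID": assistant_id,
        },
    }


def create_llm_model(payload):
    """POST the payload to the llm_models endpoint and return the parsed response."""
    req = urllib.request.Request(
        f"{NETORCA_URL}/v1/external/llm_models/",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same helper pattern applies to the GenericAPICall and GenAI provider types; only the `extra_data` fields change.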

Prompt (System Prompt)

The base system prompt that defines the AI model's role, expertise, and instructions. This is the foundational instruction set that shapes how the AI will interpret and respond to requests. The prompt acts as a wrapper that provides context and guidelines for all interactions with this model.

Recommendations for writing effective prompts: clearly define the role the AI should act as, specify the technical domain and knowledge areas, and set boundaries or quality requirements to ensure reliable responses. For example:

You are an AWS DevOps engineer who knows all AWS configurations and endpoints.
You will be given a service owner prompt that acts as your further instructions. 
Your inputs are consumer declarations, and based on them, you must generate accurate, 
ready-to-deploy AWS configuration files.
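The three recommended parts (role, domain, and output requirements) can be assembled programmatically when you manage many LLM Models. This helper is an illustrative sketch, not part of NetOrca; it simply reproduces the structure of the example prompt above:

```python
def build_system_prompt(role, domain, inputs, output_requirements):
    """Compose a system prompt from the recommended parts:
    a role, a technical domain, the expected inputs, and
    the required output quality."""
    return (
        f"You are {role} who knows all {domain} configurations and endpoints.\n"
        "You will be given a service owner prompt that acts as your further instructions.\n"
        f"Your inputs are {inputs}, and based on them, you must generate "
        f"{output_requirements}."
    )


# Reconstructs the example prompt shown above.
prompt = build_system_prompt(
    "an AWS DevOps engineer",
    "AWS",
    "consumer declarations",
    "accurate, ready-to-deploy AWS configuration files",
)
```

The resulting string can be supplied as the "prompt" field when creating an LLM Model.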