Code

app.agents.agent_system

Agent system utilities for orchestrating multi-agent workflows.

This module provides functions and helpers to create, configure, and run agent systems using Pydantic AI. It supports delegation of tasks to research, analysis, and synthesis agents, and manages agent configuration, environment setup, and execution.

Args:
    provider (str): The name of the provider.
    provider_config (ProviderConfig): Configuration settings for the provider.
    api_key (str): API key for authentication with the provider.
    prompts (dict[str, str]): Configuration for prompts.
    include_researcher (bool): Flag to include the researcher agent.
    include_analyst (bool): Flag to include the analyst agent.
    include_synthesiser (bool): Flag to include the synthesiser agent.
    query (str | list[dict[str, str]]): The query or messages for the agent.
    chat_config (ChatConfig): The configuration object for agents and providers.
    usage_limits (UsageLimits): Usage limits for agent execution.
    pydantic_ai_stream (bool): Whether to use Pydantic AI streaming.

Functions:

| Name | Description |
| --- | --- |
| `get_manager` | Initializes and returns a manager agent with the specified configuration. |
| `run_manager` | Asynchronously runs the manager agent with the given query and provider. |
| `setup_agent_env` | Sets up the environment for an agent by configuring provider settings, prompts, API key, and usage limits. |

Functions

get_manager(provider, provider_config, api_key, prompts, include_researcher=False, include_analyst=False, include_synthesiser=False)

Initializes and returns an Agent manager with the specified configuration.

Args:
    provider (str): The name of the provider.
    provider_config (ProviderConfig): Configuration settings for the provider.
    api_key (str): API key for authentication with the provider.
    prompts (PromptsConfig): Configuration for prompts.
    include_researcher (bool, optional): Flag to include the researcher model. Defaults to False.
    include_analyst (bool, optional): Flag to include the analyst model. Defaults to False.
    include_synthesiser (bool, optional): Flag to include the synthesiser model. Defaults to False.

Returns:
    Agent: The initialized Agent manager.

Source code in src/app/agents/agent_system.py
def get_manager(
    provider: str,
    provider_config: ProviderConfig,
    api_key: str | None,
    prompts: dict[str, str],
    include_researcher: bool = False,
    include_analyst: bool = False,
    include_synthesiser: bool = False,
) -> Agent[None, BaseModel]:
    """
    Initializes and returns a Agent manager with the specified configuration.
    Args:
        provider (str): The name of the provider.
        provider_config (ProviderConfig): Configuration settings for the provider.
        api_key (str): API key for authentication with the provider.
        prompts (PromptsConfig): Configuration for prompts.
        include_researcher (bool, optional): Flag to include analyst model.
            Defaults to False.
        include_analyst (bool, optional): Flag to include analyst model.
            Defaults to False.
        include_synthesiser (bool, optional): Flag to include synthesiser model.
            Defaults to False.
    Returns:
        Agent: The initialized Agent manager.
    """

    # FIXME context manager try-catch
    # with error_handling_context("get_manager()"):
    model_config = EndpointConfig.model_validate(
        {
            "provider": provider,
            "prompts": prompts,
            "api_key": api_key,
            "provider_config": provider_config,
        }
    )
    models = get_models(
        model_config, include_researcher, include_analyst, include_synthesiser
    )
    return _create_manager(prompts, models)
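
A minimal usage sketch; the provider name, base URL, model name, and prompt key below are illustrative stand-ins for values normally taken from the chat config file:

```python
from app.config.data_models import ProviderConfig

# Illustrative config; real values come from the chat config file.
provider_config = ProviderConfig.model_validate(
    {"model_name": "llama3.1", "base_url": "http://localhost:11434/v1"}
)
manager = get_manager(
    provider="ollama",
    provider_config=provider_config,
    api_key=None,  # Ollama requires no API key
    prompts={"system_prompt_manager": "You are the manager agent."},
    include_researcher=False,
)
```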

run_manager(manager, query, provider, usage_limits, pydantic_ai_stream=False) async

Asynchronously runs the manager with the given query and provider, handling errors and printing results.

Args:
    manager (Agent): The system agent responsible for running the query.
    query (str): The query to be processed by the manager.
    provider (str): The provider to be used for the query.
    usage_limits (UsageLimits): The usage limits to be applied during the query execution.
    pydantic_ai_stream (bool, optional): Flag to enable or disable Pydantic AI stream. Defaults to False.

Returns:
    None

Source code in src/app/agents/agent_system.py
async def run_manager(
    manager: Agent[None, BaseModel],
    query: UserPromptType,
    provider: str,
    usage_limits: UsageLimits | None,
    pydantic_ai_stream: bool = False,
) -> None:
    """
    Asynchronously runs the manager with the given query and provider, handling errors
        and printing results.
    Args:
        manager (Agent): The system agent responsible for running the query.
        query (str): The query to be processed by the manager.
        provider (str): The provider to be used for the query.
        usage_limits (UsageLimits): The usage limits to be applied during the query
            execution.
        pydantic_ai_stream (bool, optional): Flag to enable or disable Pydantic AI
            stream. Defaults to False.
    Returns:
        None
    """

    # FIXME context manager try-catch
    # without error_handling_context("run_manager()")?
    model_name = getattr(manager, "model")._model_name
    mgr_cfg = {"user_prompt": query, "usage_limits": usage_limits}
    logger.info(f"Researching with {provider}({model_name}) and Topic: {query} ...")

    if pydantic_ai_stream:
        raise NotImplementedError(
            "Streaming currently only possible for Agents with "
            "output_type str not pydantic model"
        )
        # logger.info("Streaming model response ...")
        # result = await manager.run(**mgr_cfg)
        # async for chunk in result.stream_text():  # .run(**mgr_cfg) as result:
        # async with manager.run_stream(user_prompt=query) as stream:
        #    async for chunk in stream.stream_text():
        #        logger.info(str(chunk))
        # result = await stream.get_result()
    else:
        logger.info("Waiting for model response ...")
        # FIXME deprecated warning manager.run(), query unknown type
        # FIXME [call-overload] error: No overload variant of "run" of "Agent"
        # matches argument type "dict[str, list[dict[str, str]] |
        # Sequence[str | ImageUrl | AudioUrl | DocumentUrl | VideoUrl |
        # BinaryContent] | UsageLimits | None]"
        result = await manager.run(**mgr_cfg)  # type: ignore[reportDeprecated,reportUnknownArgumentType,reportCallOverload,call-overload]

    logger.info(f"Result: {result}")
    logger.info(f"Usage statistics: {result.usage()}")

setup_agent_env(provider, query, chat_config, chat_env_config)

Sets up the environment for an agent by configuring provider settings, prompts, API key, and usage limits.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `provider` | `str` | The name of the provider. | *required* |
| `query` | `UserPromptType` | The messages or queries to be sent to the agent. | *required* |
| `chat_config` | `ChatConfig \| BaseModel` | The configuration object containing provider and prompt settings. | *required* |
| `chat_env_config` | `AppEnv` | The application environment configuration containing API keys. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `EndpointConfig` | `EndpointConfig` | The configuration object for the agent. |

Source code in src/app/agents/agent_system.py
def setup_agent_env(
    provider: str,
    query: UserPromptType,
    chat_config: ChatConfig | BaseModel,
    chat_env_config: AppEnv,
) -> EndpointConfig:
    """
    Sets up the environment for an agent by configuring provider settings, prompts,
    API key, and usage limits.

    Args:
        provider (str): The name of the provider.
        query (UserPromptType): The messages or queries to be sent to the agent.
        chat_config (ChatConfig | BaseModel): The configuration object containing
            provider and prompt settings.
        chat_env_config (AppEnv): The application environment configuration
            containing API keys.

    Returns:
        EndpointConfig: The configuration object for the agent.
    """

    if not isinstance(chat_config, ChatConfig):
        raise TypeError("'chat_config' of invalid type: ChatConfig expected")
    msg: str | None
    # FIXME context manager try-catch
    # with error_handling_context("setup_agent_env()"):
    provider_config = get_provider_config(provider, chat_config.providers)

    prompts = chat_config.prompts
    api_key = get_api_key(provider, chat_env_config)

    if provider.lower() == "ollama":
        # TODO move usage limits to config
        usage_limits = UsageLimits(request_limit=10, total_tokens_limit=100000)
    else:
        if api_key is None:
            msg = f"API key for provider '{provider}' is not set."
            logger.error(msg)
            raise ValueError(msg)
        # TODO Separate Gemini request into function
        if provider.lower() == "gemini":
            if isinstance(query, str):
                query = ModelRequest.user_text_prompt(query)
            elif isinstance(query, list):  # type: ignore[reportUnnecessaryIsInstance]
                # query = [
                #    ModelRequest.user_text_prompt(
                #        str(msg.get("content", ""))
                #    )  # type: ignore[reportUnknownArgumentType]
                #    if isinstance(msg, dict)
                #    else msg
                #    for msg in query
                # ]
                raise NotImplementedError("Currently conflicting with UserPromptType")
            else:
                msg = f"Unsupported query type for Gemini: {type(query)}"
                logger.error(msg)
                raise TypeError(msg)
        # TODO move usage limits to config
        usage_limits = UsageLimits(request_limit=10, total_tokens_limit=10000)

    return EndpointConfig.model_validate(
        {
            "provider": provider,
            "query": query,
            "api_key": api_key,
            "prompts": prompts,
            "provider_config": provider_config,
            "usage_limits": usage_limits,
        }
    )
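
A sketch of the typical call, assuming the chat config has been loaded with load_config (the path is illustrative):

```python
from app.config.data_models import AppEnv, ChatConfig
from app.utils.load_configs import load_config

chat_config = load_config("src/app/config/config_chat.json", ChatConfig)  # illustrative path
agent_env = setup_agent_env(
    provider="ollama",
    query="What is agentic AI?",
    chat_config=chat_config,
    chat_env_config=AppEnv(),
)
```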

app.agents.llm_model_funs

Utility functions and classes for managing and instantiating LLM models and providers.

This module provides functions to retrieve API keys, provider configurations, and to create model instances for supported LLM providers such as Gemini and OpenAI. It also includes logic for assembling model dictionaries for system agents.

Functions

get_api_key(provider, chat_env_config)

Retrieve API key from chat env config variable.

Source code in src/app/agents/llm_model_funs.py
def get_api_key(
    provider: str,
    chat_env_config: AppEnv,
) -> str | None:
    """Retrieve API key from chat env config variable."""

    provider = provider.upper()
    if provider == "OLLAMA":
        return None
    else:
        key_name = f"{provider}{API_SUFFIX}"
        if hasattr(chat_env_config, key_name):
            logger.info(f"Found API key for provider '{provider}'")
            return getattr(chat_env_config, key_name)
        else:
            raise KeyError(
                f"API key for provider '{provider}' not found in configuration."
            )
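
A short sketch of the lookup behaviour, assuming API_SUFFIX is "_API_KEY" so that e.g. "gemini" resolves to the GEMINI_API_KEY field:

```python
env = AppEnv(GEMINI_API_KEY="sk-example")  # illustrative key value

assert get_api_key("gemini", env) == "sk-example"
assert get_api_key("ollama", env) is None  # Ollama is keyless by design
```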

get_models(endpoint_config, include_researcher=False, include_analyst=False, include_synthesiser=False)

Get the models for the system agents.

Args:
    endpoint_config (EndpointConfig): Configuration for the model.
    include_researcher (Optional[bool]): Whether to include the researcher model. Defaults to False.
    include_analyst (Optional[bool]): Whether to include the analyst model. Defaults to False.
    include_synthesiser (Optional[bool]): Whether to include the synthesiser model. Defaults to False.

Returns:
    Dict[str, GeminiModel | OpenAIModel]: A dictionary containing the models for the system agents.

Source code in src/app/agents/llm_model_funs.py
def get_models(
    endpoint_config: EndpointConfig,
    include_researcher: bool = False,
    include_analyst: bool = False,
    include_synthesiser: bool = False,
) -> ModelDict:
    """
    Get the models for the system agents.
    Args:
        endpoint_config (EndpointConfig): Configuration for the model.
        include_analyist (Optional[bool]): Whether to include the analyst model.
            Defaults to False.
        include_synthesiser (Optional[bool]): Whether to include the synthesiser model.
            Defaults to False.
    Returns:
        Dict[str, GeminiModel | OpenAIModel]: A dictionary containing the models for the
            system agents.
    """

    model = _create_model(endpoint_config)
    return ModelDict.model_validate(
        {
            "model_manager": model,
            "model_researcher": model if include_researcher else None,
            "model_analyst": model if include_analyst else None,
            "model_synthesiser": model if include_synthesiser else None,
        }
    )
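
For example, with `model_config` being the EndpointConfig assembled in get_manager(), only the requested models are populated and the rest stay None:

```python
models = get_models(model_config, include_researcher=True)

assert models.model_manager is not None    # always created
assert models.model_researcher is not None
assert models.model_analyst is None        # not requested
assert models.model_synthesiser is None
```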

get_provider_config(provider, providers)

Retrieve configuration settings for the specified provider.

Source code in src/app/agents/llm_model_funs.py
def get_provider_config(
    provider: str, providers: dict[str, ProviderConfig]
) -> dict[str, str | HttpUrl]:
    """Retrieve configuration settings for the specified provider."""

    try:
        model_name = providers[provider].model_name
        base_url = providers[provider].base_url
    except KeyError as e:
        msg = get_key_error(str(e))
        logger.error(msg)
        raise KeyError(msg)
    except Exception as e:
        msg = generic_exception(str(e))
        logger.exception(msg)
        raise Exception(msg)
    else:
        return {
            "model_name": model_name,
            "base_url": base_url,
        }
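
A sketch with an illustrative provider entry:

```python
providers = {
    "ollama": ProviderConfig.model_validate(
        {"model_name": "llama3.1", "base_url": "http://localhost:11434/v1"}
    )
}
config = get_provider_config("ollama", providers)
# {"model_name": "llama3.1", "base_url": HttpUrl("http://localhost:11434/v1")}
```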

app.config.config_app

Configuration constants for the application.

app.config.data_models

Data models for agent system configuration and results.

This module defines Pydantic models for representing research and analysis results, summaries, provider and agent configurations, and model dictionaries used throughout the application. These models ensure type safety and validation for data exchanged between agents and system components.

Classes

AgentConfig

Bases: BaseModel

Configuration for an agent

Source code in src/app/config/data_models.py
class AgentConfig(BaseModel):
    """Configuration for an agent"""

    model: Model  # (1) Instance expected
    output_type: type[BaseModel]  # (2) Class expected
    system_prompt: str
    # FIXME tools: list[Callable[..., Awaitable[Any]]]
    tools: list[Any] = []  # (3) List of tools will be validated at creation
    retries: int = 3

    # Avoid pydantic.errors.PydanticSchemaGenerationError:
    # Unable to generate pydantic-core schema for <class 'openai.AsyncOpenAI'>.
    # Avoid Pydantic errors related to non-Pydantic types
    model_config = ConfigDict(
        arbitrary_types_allowed=True
    )  # (4) Suppress Error non-Pydantic types caused by <class 'openai.AsyncOpenAI'>

    @field_validator("tools", mode="before")
    def validate_tools(cls, v: list[Any]) -> list[Tool | None]:
        """Validate that all tools are instances of Tool."""
        if not v:
            return []
        if not all(isinstance(t, Tool) for t in v):
            raise ValueError("All tools must be Tool instances")
        return v
Functions
validate_tools(v)

Validate that all tools are instances of Tool.

Source code in src/app/config/data_models.py
@field_validator("tools", mode="before")
def validate_tools(cls, v: list[Any]) -> list[Tool | None]:
    """Validate that all tools are instances of Tool."""
    if not v:
        return []
    if not all(isinstance(t, Tool) for t in v):
        raise ValueError("All tools must be Tool instances")
    return v
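
A minimal instantiation sketch for AgentConfig; `model` is assumed to be a pydantic-ai Model instance, e.g. taken from a ModelDict built by get_models():

```python
from pydantic import BaseModel


class Answer(BaseModel):
    text: str


agent_config = AgentConfig(
    model=model,  # assumed pydantic-ai Model instance
    output_type=Answer,
    system_prompt="Answer concisely.",
)  # tools defaults to [] and retries to 3
```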

AnalysisResult

Bases: BaseModel

Analysis results from the analysis agent.

Source code in src/app/config/data_models.py
class AnalysisResult(BaseModel):
    """Analysis results from the analysis agent."""

    insights: list[str]
    recommendations: list[str]
    approval: bool

AppEnv

Bases: BaseSettings

Application environment settings loaded from environment variables or .env file.

This class uses Pydantic’s BaseSettings to manage API keys and configuration for various inference endpoints, tools, and logging/monitoring services. Environment variables are loaded from a .env file by default.

Source code in src/app/config/data_models.py
class AppEnv(BaseSettings):
    """
    Application environment settings loaded from environment variables or .env file.

    This class uses Pydantic's BaseSettings to manage API keys and configuration
    for various inference endpoints, tools, and logging/monitoring services.
    Environment variables are loaded from a .env file by default.
    """

    # Inference endpoints
    GEMINI_API_KEY: str = ""
    GITHUB_API_KEY: str = ""
    GROK_API_KEY: str = ""
    HUGGINGFACE_API_KEY: str = ""
    OPENROUTER_API_KEY: str = ""
    PERPLEXITY_API_KEY: str = ""
    RESTACK_API_KEY: str = ""
    TOGETHER_API_KEY: str = ""

    # Tools
    TAVILY_API_KEY: str = ""

    # Logging/Monitoring/Tracing
    AGENTOPS_API_KEY: str = ""
    LOGFIRE_API_KEY: str = ""
    WANDB_API_KEY: str = ""

    model_config = SettingsConfigDict(
        env_file=".env", env_file_encoding="utf-8", extra="ignore"
    )
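
Instantiating the settings object pulls values from the environment or a local .env file; unknown variables are ignored because of extra="ignore":

```python
env = AppEnv()
if not env.GEMINI_API_KEY:
    print("GEMINI_API_KEY is not set")
```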

ChatConfig

Bases: BaseModel

Configuration settings for agents and model providers

Source code in src/app/config/data_models.py
class ChatConfig(BaseModel):
    """Configuration settings for agents and model providers"""

    providers: dict[str, ProviderConfig]
    inference: dict[str, str | int]
    prompts: dict[str, str]
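
The JSON loaded from the chat config file must match this shape; the keys and values below are illustrative:

```python
chat_config = ChatConfig.model_validate(
    {
        "providers": {
            "ollama": {
                "model_name": "llama3.1",
                "base_url": "http://localhost:11434/v1",
            }
        },
        "inference": {"result_retries": 3},  # illustrative inference option
        "prompts": {"system_prompt_manager": "You are the manager agent."},
    }
)
```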

EndpointConfig

Bases: BaseModel

Configuration for an agent's inference endpoint

Source code in src/app/config/data_models.py
class EndpointConfig(BaseModel):
    """Configuration for an agent"""

    provider: str
    query: UserPromptType = None
    api_key: str | None
    prompts: dict[str, str]
    provider_config: ProviderConfig
    usage_limits: UsageLimits | None = None

ModelDict

Bases: BaseModel

Dictionary of models used to create agent systems

Source code in src/app/config/data_models.py
class ModelDict(BaseModel):
    """Dictionary of models used to create agent systems"""

    model_manager: Model
    model_researcher: Model | None
    model_analyst: Model | None
    model_synthesiser: Model | None
    model_config = ConfigDict(arbitrary_types_allowed=True)

ProviderConfig

Bases: BaseModel

Configuration for a model provider

Source code in src/app/config/data_models.py
class ProviderConfig(BaseModel):
    """Configuration for a model provider"""

    model_name: str
    base_url: HttpUrl

ResearchResult

Bases: BaseModel

Research results from the research agent.

Source code in src/app/config/data_models.py
class ResearchResult(BaseModel):
    """Research results from the research agent."""

    topic: str | dict[str, str]
    findings: list[str] | dict[str, str | list[str]]
    sources: list[str] | dict[str, str | list[str]]

ResearchSummary

Bases: BaseModel

Expected model response of research on a topic

Source code in src/app/config/data_models.py
class ResearchSummary(BaseModel):
    """Expected model response of research on a topic"""

    topic: str
    key_points: list[str]
    key_points_explanation: list[str]
    conclusion: str
    sources: list[str]

app.evals.metrics

Functions

output_similarity(agent_output, expected_answer)

Determine to what degree the agent’s output matches the expected answer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `agent_output` | `str` | The output produced by the agent. | *required* |
| `expected_answer` | `str` | The correct or expected answer. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `bool` | `bool` | True if the output matches the expected answer, False otherwise. |

Source code in src/app/evals/metrics.py
def output_similarity(agent_output: str, expected_answer: str) -> bool:
    """
    Determine to what degree the agent's output matches the expected answer.

    Args:
        agent_output (str): The output produced by the agent.
        expected_answer (str): The correct or expected answer.

    Returns:
        bool: True if the output matches the expected answer, False otherwise.
    """

    # TODO score instead of bool
    return agent_output.strip() == expected_answer.strip()
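
Two quick examples of the exact-match behaviour:

```python
assert output_similarity("  42\n", "42") is True   # surrounding whitespace is ignored
assert output_similarity("42", "forty-two") is False
```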

time_taken(start_time, end_time)

Calculate duration between start and end timestamps

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `start_time` | `float` | Timestamp when execution started | *required* |
| `end_time` | `float` | Timestamp when execution completed | *required* |

Returns:

| Type | Description |
| --- | --- |
| `float` | Duration in seconds with microsecond precision |

Source code in src/app/evals/metrics.py
def time_taken(start_time: float, end_time: float) -> float:
    """Calculate duration between start and end timestamps

    Args:
        start_time: Timestamp when execution started
        end_time: Timestamp when execution completed

    Returns:
        Duration in seconds with microsecond precision
    """

    # TODO implement
    return end_time - start_time
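
Typical usage with a monotonic clock:

```python
import time

start = time.perf_counter()
# ... run the agent or evaluation step here ...
duration = time_taken(start, time.perf_counter())
```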

app.main

Main entry point for the Agents-eval application.

This module initializes the agentic system, loads configuration files, handles user input, and orchestrates the multi-agent workflow using asynchronous execution. It integrates logging, tracing, and authentication, and supports both CLI and programmatic execution.

Functions

main(chat_provider=CHAT_DEFAULT_PROVIDER, query='', include_researcher=False, include_analyst=False, include_synthesiser=False, pydantic_ai_stream=False, chat_config_file=CHAT_CONFIG_FILE) async

Main entry point for the application.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `chat_provider` | `str` | The inference chat_provider to be used. | `CHAT_DEFAULT_PROVIDER` |
| `query` | `str` | The query to be processed by the agent. | `''` |
| `include_researcher` | `bool` | Whether to include the researcher in the process. | `False` |
| `include_analyst` | `bool` | Whether to include the analyst in the process. | `False` |
| `include_synthesiser` | `bool` | Whether to include the synthesiser in the process. | `False` |
| `pydantic_ai_stream` | `bool` | Whether to use Pydantic AI streaming. | `False` |
| `chat_config_file` | `str` | Full path to the configuration file. | `CHAT_CONFIG_FILE` |

Returns:

| Type | Description |
| --- | --- |
| `None` | None |

Source code in src/app/main.py
@op()
async def main(
    chat_provider: str = CHAT_DEFAULT_PROVIDER,
    query: str = "",
    include_researcher: bool = False,
    include_analyst: bool = False,
    include_synthesiser: bool = False,
    pydantic_ai_stream: bool = False,
    chat_config_file: str = CHAT_CONFIG_FILE,
) -> None:
    """
    Main entry point for the application.

    Args:
        chat_provider (str): The inference chat_provider to be used.
        query (str): The query to be processed by the agent.
        include_researcher (bool): Whether to include the researcher in the process.
        include_analyst (bool): Whether to include the analyst in the process.
        include_synthesiser (bool): Whether to include the synthesiser in the process.
        pydantic_ai_stream (bool): Whether to use Pydantic AI streaming.
        chat_config_file (str): Full path to the configuration file.

    Returns:
        None
    """

    logger.info(f"Starting app '{PROJECT_NAME}' v{__version__}")
    try:
        with span("main()"):
            if not chat_provider:
                chat_provider = input("Which inference chat_provider to use? ")
            if not query:
                query = input("What would you like to research? ")

            chat_config_path = Path(__file__).parent / CHAT_CONFIG_FILE
            eval_config_path = Path(__file__).parent / EVAL_CONFIG_FILE
            chat_config = load_config(chat_config_path, ChatConfig)
            eval_config = load_config(eval_config_path, EvalConfig)
            chat_env_config = AppEnv()
            agent_env = setup_agent_env(
                chat_provider, query, chat_config, chat_env_config
            )
            # TODO remove noqa and type ignore for unused variable
            metrics_and_weights = eval_config.metrics_and_weights  # noqa: F841  # type: ignore[reportUnusedVariable]

            # FIXME enhance login, not every run?
            login(PROJECT_NAME, chat_env_config)

            manager = get_manager(
                agent_env.provider,
                agent_env.provider_config,
                agent_env.api_key,
                agent_env.prompts,
                include_researcher,
                include_analyst,
                include_synthesiser,
            )
            await run_manager(
                manager,
                agent_env.query,
                agent_env.provider,
                agent_env.usage_limits,
                pydantic_ai_stream,
            )
            logger.info(f"Exiting app '{PROJECT_NAME}'")

    except Exception as e:
        msg = generic_exception(f"Aborting app '{PROJECT_NAME}' with: {e}")
        logger.exception(msg)
        raise Exception(msg) from e
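
A sketch of programmatic invocation (provider and query are illustrative; the provider must exist in the chat config):

```python
import asyncio

asyncio.run(
    main(
        chat_provider="ollama",
        query="State of agentic AI evaluation",
        include_researcher=True,
    )
)
```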

app.utils.error_messages

Error message utilities for the Agents-eval application.

This module provides concise helper functions for generating standardized error messages related to configuration loading and validation.

Functions

api_connection_error(error)

Generate an error message for an API connection error.

Source code in src/app/utils/error_messages.py
def api_connection_error(error: str) -> str:
    """
    Generate an error message for an API connection error.
    """
    return f"API connection error: {error}"

failed_to_load_config(error)

Generate an error message for a configuration loading failure.

Source code in src/app/utils/error_messages.py
def failed_to_load_config(error: str) -> str:
    """
    Generate an error message for a configuration loading failure.
    """
    return f"Failed to load config: {error}"

file_not_found(file_path)

Generate an error message for a missing configuration file.

Source code in src/app/utils/error_messages.py
def file_not_found(file_path: str | Path) -> str:
    """
    Generate an error message for a missing configuration file.
    """
    return f"File not found: {file_path}"

generic_exception(error)

Generate a generic error message.

Source code in src/app/utils/error_messages.py
def generic_exception(error: str) -> str:
    """
    Generate a generic error message.
    """
    return f"Exception: {error}"

get_key_error(error)

Generate a key error message.

Source code in src/app/utils/error_messages.py
def get_key_error(error: str) -> str:
    """
    Generate a key error message.
    """
    return f"Key Error: {error}"

invalid_data_model_format(error)

Generate an error message for invalid pydantic data model format.

Source code in src/app/utils/error_messages.py
def invalid_data_model_format(error: str) -> str:
    """
    Generate an error message for invalid pydantic data model format.
    """
    return f"Invalid pydantic data model format: {error}"

invalid_json(error)

Generate an error message for invalid JSON in a configuration file.

Source code in src/app/utils/error_messages.py
def invalid_json(error: str) -> str:
    """
    Generate an error message for invalid JSON in a configuration file.
    """
    return f"Invalid JSON: {error}"

invalid_type(expected_type, actual_type)

Generate an error message for invalid Type.

Source code in src/app/utils/error_messages.py
def invalid_type(expected_type: str, actual_type: str) -> str:
    """
    Generate an error message for invalid Type.
    """
    return f"Type Error: Expected {expected_type}, got {actual_type} instead."

app.utils.load_configs

Configuration loading utilities.

Provides a generic function for loading and validating JSON configuration files against Pydantic models, with error handling and logging support.

Functions

load_config(config_path, data_model)

Generic configuration loader that validates against any Pydantic model.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config_path` | `str \| Path` | Path to the JSON configuration file | *required* |
| `data_model` | `type[BaseModel]` | Pydantic model class for validation | *required* |

Returns:

| Type | Description |
| --- | --- |
| `BaseModel` | Validated configuration instance |

Source code in src/app/utils/load_configs.py
def load_config(config_path: str | Path, data_model: type[BaseModel]) -> BaseModel:
    """
    Generic configuration loader that validates against any Pydantic model.

    Args:
        config_path: Path to the JSON configuration file
        data_model: Pydantic model class for validation

    Returns:
        Validated configuration instance
    """

    try:
        with open(config_path, encoding="utf-8") as f:
            data = json.load(f)
        return data_model.model_validate(data)
    except FileNotFoundError as e:
        msg = file_not_found(config_path)
        logger.error(msg)
        raise FileNotFoundError(msg) from e
    except json.JSONDecodeError as e:
        msg = invalid_json(str(e))
        logger.error(msg)
        raise ValueError(msg) from e
    except ValidationError as e:
        msg = invalid_data_model_format(str(e))
        logger.error(msg)
        raise ValidationError(msg) from e
    except Exception as e:
        msg = failed_to_load_config(str(e))
        logger.exception(msg)
        raise Exception(msg) from e
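
A usage sketch with an illustrative path:

```python
from app.config.data_models import ChatConfig

chat_config = load_config("src/app/config/config_chat.json", ChatConfig)  # illustrative path
assert isinstance(chat_config, ChatConfig)
```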

app.utils.load_settings

Utility functions and classes for loading application settings and configuration.

This module defines the AppEnv class for managing environment variables using Pydantic, and provides a function to load and validate application configuration from a JSON file.

Classes

AppEnv

Bases: BaseSettings

Application environment settings loaded from environment variables or .env file.

This class uses Pydantic’s BaseSettings to manage API keys and configuration for various inference endpoints, tools, and logging/monitoring services. Environment variables are loaded from a .env file by default.

Source code in src/app/utils/load_settings.py
class AppEnv(BaseSettings):
    """
    Application environment settings loaded from environment variables or .env file.

    This class uses Pydantic's BaseSettings to manage API keys and configuration
    for various inference endpoints, tools, and logging/monitoring services.
    Environment variables are loaded from a .env file by default.
    """

    # Inference endpoints
    GEMINI_API_KEY: str = ""
    GITHUB_API_KEY: str = ""
    GROK_API_KEY: str = ""
    HUGGINGFACE_API_KEY: str = ""
    OPENROUTER_API_KEY: str = ""
    PERPLEXITY_API_KEY: str = ""
    RESTACK_API_KEY: str = ""
    TOGETHER_API_KEY: str = ""

    # Tools
    TAVILY_API_KEY: str = ""

    # Logging/Monitoring/Tracing
    AGENTOPS_API_KEY: str = ""
    LOGFIRE_TOKEN: str = ""
    WANDB_API_KEY: str = ""

    model_config = SettingsConfigDict(
        env_file=".env", env_file_encoding="utf-8", extra="ignore"
    )

Functions

load_config(config_path)

Load and validate application configuration from a JSON file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `config_path` | `str` | Path to the JSON configuration file. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `ChatConfig` | `ChatConfig` | An instance of ChatConfig with validated configuration data. |

Raises:

| Type | Description |
| --- | --- |
| `FileNotFoundError` | If the configuration file does not exist. |
| `JSONDecodeError` | If the file contains invalid JSON. |
| `Exception` | For any other unexpected errors during loading or validation. |

Source code in src/app/utils/load_settings.py
def load_config(config_path: str | Path) -> ChatConfig:
    """
    Load and validate application configuration from a JSON file.

    Args:
        config_path (str): Path to the JSON configuration file.

    Returns:
        ChatConfig: An instance of ChatConfig with validated configuration data.

    Raises:
        FileNotFoundError: If the configuration file does not exist.
        json.JSONDecodeError: If the file contains invalid JSON.
        Exception: For any other unexpected errors during loading or validation.
    """

    try:
        with open(config_path) as f:
            config_data = json.load(f)
    except FileNotFoundError as e:
        msg = file_not_found(config_path)
        logger.error(msg)
        raise FileNotFoundError(msg) from e
    except json.JSONDecodeError as e:
        msg = invalid_json(str(e))
        logger.error(msg)
        raise json.JSONDecodeError(msg, str(config_path), 0) from e
    except Exception as e:
        msg = failed_to_load_config(str(e))
        logger.exception(msg)
        raise Exception(msg) from e

    return ChatConfig.model_validate(config_data)

app.utils.log

Set up the logger with custom settings. Logs are written to a file with automatic rotation.

app.utils.login

This module provides utility functions for managing login state and initializing the environment for a given project. It includes functionality to load and save login state, perform a one-time login, and check if the user is logged in.

Functions

login(project_name, chat_env_config)

Logs in to the workspace and initializes the environment for the given project.

Args:
    project_name (str): The name of the project to initialize.
    chat_env_config (AppEnv): The application environment configuration containing the API keys.

Returns:
    None

Source code in src/app/utils/login.py
def login(project_name: str, chat_env_config: AppEnv):
    """
    Logs in to the workspace and initializes the environment for the given project.
    Args:
        project_name (str): The name of the project to initialize.
        chat_env_config (AppEnv): The application environment configuration
            containing the API keys.
    Returns:
        None
    """

    try:
        logger.info(f"Logging in to the workspaces for project: {project_name}")
        environ["AGENTOPS_LOGGING_TO_FILE"] = "FALSE"
        agentops_init(
            default_tags=[project_name],
            api_key=get_api_key("AGENTOPS", chat_env_config),
        )
        logfire_conf(token=get_api_key("LOGFIRE", chat_env_config))
        wandb_login(key=get_api_key("WANDB", chat_env_config))
        weave_init(project_name)
    except Exception as e:
        msg = generic_exception(str(e))
        logger.exception(e)
        raise Exception(msg) from e
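
A sketch of the one-time login before running the agent system, assuming the required keys are present in the environment:

```python
chat_env_config = AppEnv()
login("Agents-eval", chat_env_config)  # project name is illustrative
```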

app.utils.utils

This module provides utility functions and context managers for handling configurations, error handling, and setting up agent environments.

Functions:

| Name | Description |
| --- | --- |
| `load_config` | Load and validate configuration from a JSON file. |
| `print_research_Result` | Output structured summary of the research topic. |
| `error_handling_context` | Context manager for handling errors during operations. |
| `setup_agent_env` | Set up the agent environment based on the provided configuration. |

Functions

log_research_result(summary, usage)

Prints the research summary and usage details in a formatted manner.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `summary` | `ResearchSummary` | The research summary with fields 'topic', 'key_points', 'key_points_explanation', and 'conclusion'. | *required* |
| `usage` | `Usage` | An object containing usage details to be printed. | *required* |
Source code in src/app/utils/utils.py
def log_research_result(summary: ResearchSummary, usage: Usage) -> None:
    """
    Prints the research summary and usage details in a formatted manner.

    Args:
        summary (ResearchSummary): The research summary with fields 'topic',
            'key_points', 'key_points_explanation', and 'conclusion'.
        usage (Usage): An object containing usage details to be printed.
    """

    logger.info(f"\n=== Research Summary: {summary.topic} ===")
    logger.info("\nKey Points:")
    for i, point in enumerate(summary.key_points, 1):
        logger.info(f"{i}. {point}")
    logger.info("\nKey Points Explanation:")
    for i, point in enumerate(summary.key_points_explanation, 1):
        logger.info(f"{i}. {point}")
    logger.info(f"\nConclusion: {summary.conclusion}")
    logger.info(f"\nResponse structure: {list(dict(summary).keys())}")
    logger.info(usage)

parse_args(argv)

Parse command line arguments into a dictionary.

This function processes a list of command-line arguments, extracting recognized options and their values. Supported arguments include flags (e.g., `--help`, `--include-researcher`) and key-value pairs (e.g., `--chat-provider=ollama`). If the `--help` flag is present, a list of available commands and their descriptions is printed, and an empty dictionary is returned.

Recognized arguments as list[str]

    --help                   Display help information and exit.
    --version                Display version information.
    --chat-provider=<str>    Specify the chat provider to use.
    --query=<str>            Specify the query to process.
    --include-researcher     Include the researcher agent.
    --include-analyst        Include the analyst agent.
    --include-synthesiser    Include the synthesiser agent.
    --no-stream              Disable streaming output.
    --chat-config-file=<str> Specify the path to the chat configuration file.

Returns:

| Type | Description |
| --- | --- |
| `dict[str, str \| bool]` | A dictionary mapping argument names (with leading `--` removed and hyphens replaced by underscores) to their values (`str` for key-value pairs, `bool` for flags). Returns an empty dict if `--help` is specified. |

Example

parse_args(['--chat-provider=ollama', '--include-researcher']) returns {'chat_provider': 'ollama', 'include_researcher': True}

Source code in src/app/utils/utils.py
def parse_args(argv: list[str]) -> dict[str, str | bool]:
    """
    Parse command line arguments into a dictionary.

    This function processes a list of command-line arguments,
    extracting recognized options and their values.
    Supported arguments include flags (e.g., --help, --include-researcher
    and key-value pairs (e.g., `--chat-provider=ollama`).
    If the `--help` flag is present, a list of available commands and their
    descriptions is printed, and an empty dictionary is returned.

    Recognized arguments as list[str]
    ```
        --help                   Display help information and exit.
        --version                Display version information.
        --chat-provider=<str>    Specify the chat provider to use.
        --query=<str>            Specify the query to process.
        --include-researcher     Include the researcher agent.
        --include-analyst        Include the analyst agent.
        --include-synthesiser    Include the synthesiser agent.
        --no-stream              Disable streaming output.
        --chat-config-file=<str> Specify the path to the chat configuration file.
    ```

    Returns:
        `dict[str, str | bool]`: A dictionary mapping argument names
        (with leading '--' removed and hyphens replaced by underscores)
        to their values (`str` for key-value pairs, `bool` for flags).
        Returns an empty dict if `--help` is specified.

    Example:
        >>> `parse_args(['--chat-provider=ollama', '--include-researcher'])`
        returns `{'chat_provider': 'ollama', 'include_researcher': True}`
    """

    commands = {
        "--help": "Display help information",
        "--version": "Display version information",
        "--chat-provider": "Specify the chat provider to use",
        "--query": "Specify the query to process",
        "--include-researcher": "Include the researcher agent",
        "--include-analyst": "Include the analyst agent",
        "--include-synthesiser": "Include the synthesiser agent",
        "--no-stream": "Disable streaming output",
        "--chat-config-file": "Specify the path to the chat configuration file",
    }
    parsed_args: dict[str, str | bool] = {}

    if "--help" in argv:
        print("Available commands:")
        for cmd, desc in commands.items():
            print(f"{cmd}: {desc}")
        return parsed_args

    for arg in argv:
        if arg.split("=", 1)[0] in commands.keys():
            key, value = arg.split("=", 1) if "=" in arg else (arg, True)
            key = key.lstrip("--").replace("-", "_")
            parsed_args[key] = value

    if parsed_args:
        logger.info(f"Used arguments: {parsed_args}")

    return parsed_args

examples.run_simple_agent_no_tools

A simple example of using a Pydantic AI agent to generate a structured summary of a research topic.

Functions

main()

Main function to run the research agent.

Source code in src/examples/run_simple_agent_no_tools.py
def main():
    """Main function to run the research agent."""

    config_path = path.join(path.dirname(__file__), CONFIG_FILE)
    config = load_config(config_path)

    provider = input("Which inference provider to use? ")
    topic = input("What topic would you like to research? ")

    api_key = get_api_key(provider)
    provider_config = get_provider_config(provider, config)

    result = get_research(topic, config.prompts, provider, provider_config, api_key)
    print_research_Result(result.data, result.usage())

examples.run_simple_agent_system

This example demonstrates how to run a simple agent system that consists of a manager agent, a research agent, and an analysis agent. The manager agent delegates research and analysis tasks to the corresponding agents and combines the results to provide a comprehensive answer to the user query. https://ai.pydantic.dev/multi-agent-applications/#agent-delegation

Functions

get_manager(model_manager, model_researcher, model_analyst, prompts)

Get the agents for the system.

Source code in src/examples/run_simple_agent_system.py
def get_manager(
    model_manager: OpenAIModel,
    model_researcher: OpenAIModel,
    model_analyst: OpenAIModel,
    prompts: dict[str, str],
) -> SystemAgent:
    """Get the agents for the system."""
    researcher = SystemAgent(
        model_researcher,
        ResearchResult,
        prompts["system_prompt_researcher"],
        [duckduckgo_search_tool()],
    )
    analyst = SystemAgent(
        model_analyst, AnalysisResult, prompts["system_prompt_analyst"]
    )
    manager = SystemAgent(
        model_manager, ResearchResult, prompts["system_prompt_manager"]
    )
    add_tools_to_manager_agent(manager, researcher, analyst)
    return manager

get_models(model_config)

Get the models for the system agents.

Source code in src/examples/run_simple_agent_system.py
def get_models(model_config: dict) -> tuple[OpenAIModel, OpenAIModel, OpenAIModel]:
    """Get the models for the system agents."""
    model_manager = create_model(**model_config)
    model_researcher = create_model(**model_config)
    model_analyst = create_model(**model_config)
    # Return in the order expected by get_manager(): manager, researcher, analyst.
    return model_manager, model_researcher, model_analyst

main() async

Main function to run the research system.

Source code in src/examples/run_simple_agent_system.py
async def main():
    """Main function to run the research system."""

    provider = input("Which inference provider to use? ")
    query = input("What would you like to research? ")

    config_path = path.join(path.dirname(__file__), CONFIG_FILE)
    config = load_config(config_path)

    api_key = get_api_key(provider)
    provider_config = get_provider_config(provider, config)
    usage_limits = UsageLimits(request_limit=10, total_tokens_limit=4000)

    model_config = {
        "base_url": provider_config["base_url"],
        "model_name": provider_config["model_name"],
        "api_key": api_key,
        "provider": provider,
    }
    manager = get_manager(*get_models(model_config), config.prompts)

    print(f"\nResearching: {query}...")

    try:
        result = await manager.run(query, usage_limits=usage_limits)
    except (UnexpectedModelBehavior, UnprocessableEntityError) as e:
        print(f"Error: Model returned unexpected result: {e}")
    except UsageLimitExceeded as e:
        print(f"Usage limit exceeded: {e}")
    else:
        print("\nFindings:", {result.data.findings})
        print(f"Sources: {result.data.sources}")
        print("\nUsage statistics:")
        print(result.usage())

examples.run_simple_agent_tools

Run the dice game agent using simple tools.

Functions

main()

Run the dice game agent.

Source code in src/examples/run_simple_agent_tools.py
def main():
    """Run the dice game agent."""

    provider = input("Which inference provider to use? ")
    player_name = input("Enter your name: ")
    guess = input("Guess a number between 1 and 6: ")

    config_path = path.join(path.dirname(__file__), CONFIG_FILE)
    config = load_config(config_path)

    api_key = get_api_key(provider)
    provider_config = get_provider_config(provider, config)

    result = get_dice(
        player_name, guess, system_prompt, provider, api_key, provider_config
    )
    print(result.data)
    print(f"{result._result_tool_name=}")
    print(result.usage())

examples.utils.agent_simple_no_tools

This module contains a function to create a research agent with the specified model, result type, and system prompt.

Functions

get_research(topic, prompts, provider, provider_config, api_key)

Run the research agent to generate a structured summary of a research topic.

Source code in src/examples/utils/agent_simple_no_tools.py
def get_research(
    topic: str,
    prompts: dict[str, str],
    provider: str,
    provider_config: Config,
    api_key: str,
) -> AgentRunResult:
    """Run the research agent to generate a structured summary of a research topic."""

    model = create_model(
        provider_config["base_url"], provider_config["model_name"], api_key, provider
    )
    agent = _create_research_agent(model, ResearchSummary, prompts["system_prompt"])

    print(f"\nResearching {topic}...")
    try:
        result = agent.run_sync(f"{prompts['user_prompt']} {topic}")
    except APIConnectionError as e:
        print(f"Error connecting to API: {e}")
        exit()
    except Exception as e:
        print(f"Unexpected error: {e}")
        exit()
    else:
        return result

examples.utils.agent_simple_system

This module contains a simple system of agents that can be used to research and analyze data.

Classes

SystemAgent

Bases: Agent

A generic system agent that can be used to research and analyze data.

Source code in src/examples/utils/agent_simple_system.py
class SystemAgent(Agent):
    """A generic system agent that can be used to research and analyze data."""

    def __init__(
        self,
        model: OpenAIModel,
        result_type: ResearchResult | AnalysisResult,
        system_prompt: str,
        result_retries: int = 3,
        tools: list | None = [],
    ):
        super().__init__(
            model,
            result_type=result_type,
            system_prompt=system_prompt,
            result_retries=result_retries,
            tools=tools,
        )
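
A sketch of building a researcher agent, mirroring the usage in get_manager() above; `model` and `prompts` are assumed to exist:

```python
from pydantic_ai.common_tools.duckduckgo import duckduckgo_search_tool

researcher = SystemAgent(
    model,  # an OpenAIModel from create_model()
    ResearchResult,
    prompts["system_prompt_researcher"],
    tools=[duckduckgo_search_tool()],
)
```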

Functions

add_tools_to_manager_agent(manager_agent, research_agent, analysis_agent)

Add research and analysis delegation tools to the manager agent.

Source code in src/examples/utils/agent_simple_system.py
def add_tools_to_manager_agent(
    manager_agent: SystemAgent, research_agent: SystemAgent, analysis_agent: SystemAgent
) -> None:
    """Create and configure the joke generation agent."""

    @manager_agent.tool
    async def delegate_research(ctx: RunContext[None], query: str) -> ResearchResult:
        """Delegate research task to ResearchAgent."""
        result = await research_agent.run(query, usage=ctx.usage)
        return result.data

    @manager_agent.tool
    async def delegate_analysis(ctx: RunContext[None], data: str) -> AnalysisResult:
        """Delegate analysis task to AnalysisAgent."""
        result = await analysis_agent.run(data, usage=ctx.usage)
        return result.data

examples.utils.agent_simple_tools

Simple agent for the dice game example.

Functions

get_dice(player_name, guess, system_prompt, provider, api_key, config)

Run the dice game agent.

Source code in src/examples/utils/agent_simple_tools.py
def get_dice(
    player_name: str,
    guess: str,
    system_prompt: str,
    provider: str,
    api_key: str,
    config: dict,
) -> AgentRunResult:
    """Run the dice game agent."""

    model = create_model(config["base_url"], config["model_name"], api_key, provider)
    agent = _DiceGameAgent(model, system_prompt)

    try:
        # usage_limits=UsageLimits(request_limit=5, total_tokens_limit=300),
        result = agent.run_sync(f"Player is guessing {guess}...", deps=player_name)
    except APIConnectionError as e:
        print(f"Error connecting to API: {e}")
        exit()
    except Exception as e:
        print(f"Unexpected error: {e}")
        exit()
    else:
        return result

examples.utils.data_models

Example of a module with data models

Classes

AnalysisResult

Bases: BaseModel

Analysis results from the analysis agent.

Source code in src/examples/utils/data_models.py
class AnalysisResult(BaseModel):
    """Analysis results from the analysis agent."""

    insights: list[str]
    recommendations: list[str]

Config

Bases: BaseModel

Configuration settings for the research agent and model providers

Source code in src/examples/utils/data_models.py
class Config(BaseModel):
    """Configuration settings for the research agent and model providers"""

    providers: dict[str, ProviderConfig]
    prompts: dict[str, str]

ProviderConfig

Bases: BaseModel

Configuration for a model provider

Source code in src/examples/utils/data_models.py
class ProviderConfig(BaseModel):
    """Configuration for a model provider"""

    model_name: str
    base_url: str

ResearchResult

Bases: BaseModel

Research results from the research agent.

Source code in src/examples/utils/data_models.py
class ResearchResult(BaseModel):
    """Research results from the research agent."""

    topic: str
    findings: list[str]
    sources: list[str]

ResearchSummary

Bases: BaseModel

Expected model response of research on a topic

Source code in src/examples/utils/data_models.py
class ResearchSummary(BaseModel):
    """Expected model response of research on a topic"""

    topic: str
    key_points: list[str]
    key_points_explanation: list[str]
    conclusion: str

examples.utils.tools

Example tools for the utils example.

Functions

get_player_name(ctx)

Get the player’s name from the context.

Source code in src/examples/utils/tools.py
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name from the context."""
    return ctx.deps

roll_die()

Tool to roll a die.

Source code in src/examples/utils/tools.py
def roll_die() -> str:
    """Tool to roll a die."""

    # Roll the die and return the result as a string.
    return str(randint(1, 6))

examples.utils.utils

Utility functions for running the research agent example.

Functions

create_model(base_url, model_name, api_key=None, provider=None)

Create a model that uses base_url as inference API

Source code in src/examples/utils/utils.py
def create_model(
    base_url: str,
    model_name: str,
    api_key: str | None = None,
    provider: str | None = None,
) -> OpenAIModel:
    """Create a model that uses base_url as inference API"""

    if api_key is None and (provider is None or provider.lower() != "ollama"):
        raise ValueError("API key is required for model.")
    return OpenAIModel(
        model_name, provider=OpenAIProvider(base_url=base_url, api_key=api_key)
    )
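
For example, Ollama exposes an OpenAI-compatible endpoint, so no API key is needed (URL and model name are illustrative):

```python
model = create_model(
    base_url="http://localhost:11434/v1",
    model_name="llama3.1",
    provider="ollama",
)
```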

get_api_key(provider)

Retrieve API key from environment variable.

Source code in src/examples/utils/utils.py
def get_api_key(provider: str) -> str | None:
    """Retrieve API key from environment variable."""

    # TODO replace with pydantic-settings ?
    load_dotenv()

    if provider.lower() == "ollama":
        return None
    else:
        return getenv(f"{provider.upper()}{API_SUFFIX}")

get_provider_config(provider, config)

Retrieve configuration settings for the specified provider.

Source code in src/examples/utils/utils.py
def get_provider_config(provider: str, config: Config) -> dict[str, str]:
    """Retrieve configuration settings for the specified provider."""

    try:
        model_name = config.providers[provider].model_name
        base_url = config.providers[provider].base_url
    except KeyError as e:
        raise ValueError(f"Missing configuration for {provider}: {e}.") from e
    except Exception as e:
        raise Exception(f"Error loading provider configuration: {e}") from e
    else:
        return {
            "model_name": model_name,
            "base_url": base_url,
        }

load_config(config_path)

Load and validate configuration from a JSON file.

Source code in src/examples/utils/utils.py
def load_config(config_path: str) -> Config:
    """Load and validate configuration from a JSON file."""

    try:
        with open(config_path) as file:
            config_data = load(file)
        config = Config.model_validate(config_data)
    except FileNotFoundError as e:
        raise FileNotFoundError(f"Configuration file not found: {config_path}") from e
    except ValidationError as e:
        raise ValueError(f"Invalid configuration format: {e}") from e
    except Exception as e:
        raise Exception(f"Error loading configuration: {e}") from e
    else:
        return config

print_research_Result(summary, usage)

Output structured summary of the research topic.

Source code in src/examples/utils/utils.py
def print_research_Result(summary: dict, usage: Usage) -> None:
    """Output structured summary of the research topic."""

    print(f"\n=== Research Summary: {summary.topic} ===")
    print("\nKey Points:")
    for i, point in enumerate(summary.key_points, 1):
        print(f"{i}. {point}")
    print("\nKey Points Explanation:")
    for i, point in enumerate(summary.key_points_explanation, 1):
        print(f"{i}. {point}")
    print(f"\nConclusion: {summary.conclusion}")

    print(f"\nResponse structure: {list(dict(summary).keys())}")
    print(usage)

gui.components.footer

Functions

render_footer(footer_caption)

Render the page footer.

Source code in src/gui/components/footer.py
def render_footer(footer_caption: str):
    """Render the page footer."""
    divider()
    caption(footer_caption)

gui.components.header

Functions

render_header(header_title)

Render the page header with title.

Source code in src/gui/components/header.py
4
5
6
7
def render_header(header_title: str):
    """Render the page header with title."""
    title(header_title)
    divider()

gui.components.output

Functions

render_output(result=None, info_str=None, type=None)

Renders the output in a Streamlit app based on the provided type.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `result` | `Any` | The content to be displayed. Can be JSON, code markdown, or plain text. | `None` |
| `info_str` | `str` | The information message to be displayed if result is None. | `None` |
| `type` | `str` | The type of the result content. Can be 'json', 'code', 'md', or other for plain text. | `None` |

Returns:

None

Source code in src/gui/components/output.py
def render_output(
    result: Any = None, info_str: str | None = None, type: str | None = None
):
    """
    Renders the output in a Streamlit app based on the provided type.

    Args:
        result (Any, optional): The content to be displayed. Can be JSON, code
            markdown, or plain text.
        info_str (str, optional): The information message to be displayed if result is None.
        type (str, optional): The type of the result content. Can be 'json', 'code',
            'md', or other for plain text.

    Returns:
        None
    """

    if result:
        output_container = empty()
        output_container.write(result)
        # match type:
        #     case "json":
        #         json(result)
        #     case "code":
        #         code(result)
        #     case "md":
        #         markdown(result)
        #     case _:
        #         text(result)
        #         # st.write(result)
    else:
        info(info_str)

gui.components.prompts

gui.components.sidebar

gui.config.config

gui.config.styling

gui.config.text

gui.pages.home

gui.pages.prompts

Streamlit component for editing agent system prompts.

This module provides a function to render and edit prompt configurations for agent roles using a Streamlit-based UI. It validates the input configuration, displays warnings if prompts are missing, and allows interactive editing of each prompt.

Functions

render_prompts(chat_config)

Render and edit the prompt configuration for agent roles in the Streamlit UI.

Source code in src/gui/pages/prompts.py
def render_prompts(chat_config: ChatConfig | BaseModel):  # -> dict[str, str]:
    """
    Render and edit the prompt configuration for agent roles in the Streamlit UI.
    """

    header(PROMPTS_HEADER)

    if not isinstance(chat_config, ChatConfig):
        msg = invalid_type("ChatConfig", type(chat_config).__name__)
        logger.error(msg)
        error(msg)
        return None

    # updated = False
    prompts = chat_config.prompts

    if not prompts:
        warning(PROMPTS_WARNING)
        prompts = PROMPTS_DEFAULT

    updated_prompts = prompts.copy()

    # Edit prompts
    for prompt_key, prompt_value in prompts.items():
        new_value = render_prompt_editor(prompt_key, prompt_value, height=200)
        if new_value != prompt_value and new_value is not None:
            updated_prompts[prompt_key] = new_value

gui.pages.run_app

Streamlit interface for running the agentic system interactively.

This module defines the render_app function, which provides a Streamlit-based UI for users to select a provider, enter a query, and execute the main agent workflow. Results and errors are displayed in real time, supporting asynchronous execution.

Functions

render_app(provider=None) async

Render the main app interface for running agentic queries via Streamlit.

Displays input fields for provider and query, a button to trigger execution, and an area for output or error messages. Handles async invocation of the main agent workflow and logs any exceptions.

Source code in src/gui/pages/run_app.py
async def render_app(provider: str | None = None):
    """
    Render the main app interface for running agentic queries via Streamlit.

    Displays input fields for provider and query, a button to trigger execution,
    and an area for output or error messages. Handles async invocation of the
    main agent workflow and logs any exceptions.
    """

    header(RUN_APP_HEADER)
    if provider is None:
        provider = text_input(RUN_APP_PROVIDER_PLACEHOLDER)
    query = text_input(RUN_APP_QUERY_PLACEHOLDER)

    subheader(OUTPUT_SUBHEADER)
    if button(RUN_APP_BUTTON):
        if query:
            info(f"{RUN_APP_QUERY_RUN_INFO} {query}")
            try:
                result = await main(chat_provider=provider, query=query)
                render_output(result)
            except Exception as e:
                render_output(None)
                exception(e)
                logger.exception(e)
        else:
            warning(RUN_APP_QUERY_WARNING)
    else:
        render_output(RUN_APP_OUTPUT_PLACEHOLDER)

gui.pages.settings

Streamlit settings UI for provider and agent configuration.

This module provides a function to render and edit agent system settings, including provider selection and related options, within the Streamlit GUI. It validates the input configuration and ensures correct typing before rendering.

Functions

render_settings(chat_config)

Render and edit agent system settings in the Streamlit UI.

Displays a header and a selectbox for choosing the inference provider. Validates that the input is a ChatConfig instance and displays an error if not.

Source code in src/gui/pages/settings.py
def render_settings(chat_config: ChatConfig | BaseModel) -> str:
    """
    Render and edit agent system settings in the Streamlit UI.

    Displays a header and a selectbox for choosing the inference provider.
    Validates that the input is a ChatConfig instance and displays an error if not.
    """
    header(SETTINGS_HEADER)

    # updated = False
    # updated_config = config.copy()

    if not isinstance(chat_config, ChatConfig):
        msg = invalid_type("ChatConfig", type(chat_config).__name__)
        logger.error(msg)
        error(msg)
        return msg

    provider = selectbox(
        label=SETTINGS_PROVIDER_LABEL,
        options=chat_config.providers.keys(),
    )

    # Run options
    # col1, col2 = st.columns(2)
    # with col1:
    #     streamed_output = st.checkbox(
    #         "Stream Output", value=config.get("streamed_output", False)
    #     )
    # with col2:
    #     st.checkbox("Include Sources", value=True)  # include_sources

    # Allow adding new providers
    # new_provider = st.text_input("Add New Provider")
    # api_key = st.text_input(f"{provider} API Key", type="password")
    # if st.button("Add Provider") and new_provider and new_provider not in providers:
    #     providers.append(new_provider)
    #     updated_config["providers"] = providers
    #     updated_config["api_key"] = api_key
    #     updated = True
    #     st.success(f"Added provider: {new_provider}")

    # # Update config if changed
    # if (
    #     include_a != config.get("include_a", False)
    #     or include_b != config.get("include_b", False)
    #     or streamed_output != config.get("streamed_output", False)
    # ):
    #     updated_config["include_a"] = include_a
    #     updated_config["include_b"] = include_b
    #     updated_config["streamed_output"] = streamed_output
    #     updated = True

    return provider

run_gui

This module sets up and runs a Streamlit application for a Multi-Agent System.

The application includes the following components:

- Header
- Sidebar for configuration options
- Main content area for prompts
- Footer

The main function loads the configuration, renders the UI components, and handles the execution of the Multi-Agent System based on user input.

Functions:

- run_app(): Placeholder function to run the main application logic.
- main(): Main function to set up and run the Streamlit application.
