ci: enables ollama integration tests (#23)
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Co-authored-by: Bradley Axen <baxen@squareup.com>
Co-authored-by: Anuraag (Rag) Agrawal <anuraaga@gmail.com>
3 people authored Sep 23, 2024
1 parent c0114fb commit 5b34bc5
Showing 3 changed files with 56 additions and 3 deletions.
49 changes: 49 additions & 0 deletions .github/workflows/ci.yaml
@@ -34,3 +34,52 @@ jobs:
- name: Run tests
run: uv run pytest tests -m 'not integration'

# This runs integration tests of the OpenAI API, using Ollama to host models.
# This lets us test PRs from forks which can't access secrets like API keys.
ollama:
runs-on: ubuntu-latest

strategy:
matrix:
python-version:
# Only test the latest Python version.
- "3.12"
ollama-model:
# For quicker CI, use a small tool-capable model instead of the default.
- "qwen2.5:0.5b"

steps:
- uses: actions/checkout@v4

- name: Install UV
run: curl -LsSf https://astral.sh/uv/install.sh | sh

- name: Source Cargo Environment
run: source $HOME/.cargo/env

- name: Set up Python
run: uv python install ${{ matrix.python-version }}

- name: Install Ollama
run: curl -fsSL https://ollama.com/install.sh | sh

- name: Start Ollama
run: |
# Run in the background, in a way that survives into the next step
nohup ollama serve > ollama.log 2>&1 &
# Block using the ready endpoint
time curl --retry 5 --retry-connrefused --retry-delay 1 -sf http://localhost:11434
# Tests use the OpenAI API, which has no mechanism to pull models. Run a
# simple prompt to pull (and smoke-test) the model first.
- name: Test Ollama model
run: ollama run $OLLAMA_MODEL hello || cat ollama.log
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}

- name: Run Ollama tests
run: uv run pytest tests -m integration -k ollama
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}
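The "Start Ollama" step backgrounds the server with `nohup` and then blocks on the root endpoint using curl's retry flags. The same readiness check can be sketched in Python — a hypothetical helper, not part of this repository; the URL and retry behavior mirror the workflow step:

```python
import time
import urllib.error
import urllib.request


def wait_for_ready(url: str, retries: int = 5, delay: float = 1.0) -> bool:
    """Poll an HTTP endpoint until it responds, similar to
    `curl --retry 5 --retry-connrefused --retry-delay 1` in the workflow."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status < 400:
                    return True
        except (urllib.error.URLError, ConnectionError):
            # Server not up yet (connection refused) or transient error.
            pass
        time.sleep(delay)
    return False
```

In CI the curl approach is preferable since it needs no interpreter, but a helper like this is handy when the same wait is needed from test fixtures.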
3 changes: 2 additions & 1 deletion tests/providers/openai/test_ollama.py
@@ -1,5 +1,6 @@
from typing import Tuple

import os
import pytest

from exchange import Text
@@ -26,7 +27,7 @@ def test_ollama_completion_integration():

def ollama_complete() -> Tuple[Message, Usage]:
provider = OllamaProvider.from_env()
model = OLLAMA_MODEL
model = os.getenv("OLLAMA_MODEL", OLLAMA_MODEL)
system = "You are a helpful assistant."
messages = [Message.user("Hello")]
return provider.complete(model=model, system=system, messages=messages, tools=None)
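The change above replaces the hard-coded module constant with an environment lookup, so CI can substitute the smaller matrix model. A minimal sketch of the pattern (`DEFAULT_OLLAMA_MODEL` stands in for the module's `OLLAMA_MODEL` constant, whose value is not shown in this diff):

```python
import os

# Stand-in for the OLLAMA_MODEL constant defined in the test module.
DEFAULT_OLLAMA_MODEL = "qwen2.5"


def resolve_model(env_var: str = "OLLAMA_MODEL",
                  default: str = DEFAULT_OLLAMA_MODEL) -> str:
    """Return the model name from the environment, falling back to the
    module default when the variable is unset."""
    return os.getenv(env_var, default)
```

This keeps local runs working unchanged while letting the workflow inject `OLLAMA_MODEL` per matrix entry.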
7 changes: 5 additions & 2 deletions tests/test_integration.py
@@ -1,3 +1,4 @@
import os
import pytest
from exchange.exchange import Exchange
from exchange.message import Message
@@ -9,7 +10,7 @@
too_long_chars = "x" * (2**20 + 1)

cases = [
(get_provider("ollama"), OLLAMA_MODEL),
(get_provider("ollama"), os.getenv("OLLAMA_MODEL", OLLAMA_MODEL)),
(get_provider("openai"), "gpt-4o-mini"),
(get_provider("databricks"), "databricks-meta-llama-3-70b-instruct"),
(get_provider("bedrock"), "anthropic.claude-3-5-sonnet-20240620-v1:0"),
@@ -46,7 +47,9 @@ def read_file(filename: str) -> str:
Read the contents of the file.
Args:
filename (str): The path to the file, which can be relative or absolute.
filename (str): The path to the file, which can be relative or
absolute. If it is a plain filename, it is assumed to be in the
current working directory.
Returns:
str: The contents of the file.
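The docstring change clarifies that a bare filename is resolved against the current working directory. A sketch of a `read_file` consistent with that contract — the function body is assumed, since only the docstring appears in this diff:

```python
import os


def read_file(filename: str) -> str:
    """Read the contents of the file.

    Args:
        filename (str): The path to the file, which can be relative or
            absolute. If it is a plain filename, it is assumed to be in
            the current working directory.

    Returns:
        str: The contents of the file.
    """
    # A relative path (including a bare filename) resolves against os.getcwd().
    path = filename if os.path.isabs(filename) else os.path.join(os.getcwd(), filename)
    with open(path, "r") as f:
        return f.read()
```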
