ci: enables ollama integration tests #23

Merged · 5 commits · Sep 23, 2024

49 changes: 49 additions & 0 deletions .github/workflows/ci.yaml
@@ -34,3 +34,52 @@ jobs:

- name: Run tests
run: uv run pytest tests -m 'not integration'

# This runs integration tests of the OpenAI API, using Ollama to host models.
# This lets us test PRs from forks which can't access secrets like API keys.
ollama:
runs-on: ubuntu-latest

strategy:
matrix:
python-version:
          # Only test the latest Python version.
- "3.12"
ollama-model:
          # For quicker CI, use a smaller tool-capable model instead of the default.
- "qwen2.5:0.5b"

steps:
- uses: actions/checkout@v4

- name: Install UV
run: curl -LsSf https://astral.sh/uv/install.sh | sh

- name: Source Cargo Environment
run: source $HOME/.cargo/env

- name: Set up Python
run: uv python install ${{ matrix.python-version }}

- name: Install Ollama
run: curl -fsSL https://ollama.com/install.sh | sh
@codefromthecrypt (Contributor, Author) commented on Sep 19, 2024:

The confusing part is that the log here ends up saying "hey I'm running" when, well, it isn't ;) I used act locally to figure it out:

act pull_request -P ubuntu-latest=ghcr.io/catthehacker/ubuntu:act-latest --container-architecture linux/amd64


- name: Start Ollama
run: |
          # Run in the background, in a way that survives to the next step
nohup ollama serve > ollama.log 2>&1 &

# Block using the ready endpoint
time curl --retry 5 --retry-connrefused --retry-delay 1 -sf http://localhost:11434

# Tests use OpenAI which does not have a mechanism to pull models. Run a
# simple prompt to (pull and) test the model first.
- name: Test Ollama model
run: ollama run $OLLAMA_MODEL hello || cat ollama.log
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}

- name: Run Ollama tests
run: uv run pytest tests -m integration -k ollama
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}
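
To reproduce this job locally without act, the same steps can be run by hand. This is a rough sketch assuming Ollama and uv are already installed and that the matrix model above is used:

nohup ollama serve > ollama.log 2>&1 &                                                 # start Ollama in the background
time curl --retry 5 --retry-connrefused --retry-delay 1 -sf http://localhost:11434    # block until the ready endpoint answers
ollama run qwen2.5:0.5b hello                                                          # pull and smoke-test the model
OLLAMA_MODEL=qwen2.5:0.5b uv run pytest tests -m integration -k ollama                 # run only the Ollama integration tests
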
3 changes: 2 additions & 1 deletion tests/providers/openai/test_ollama.py
@@ -1,5 +1,6 @@
from typing import Tuple

import os
import pytest

from exchange import Text
@@ -26,7 +27,7 @@ def test_ollama_completion_integration():

def ollama_complete() -> Tuple[Message, Usage]:
provider = OllamaProvider.from_env()
model = OLLAMA_MODEL
model = os.getenv("OLLAMA_MODEL", OLLAMA_MODEL)
system = "You are a helpful assistant."
messages = [Message.user("Hello")]
return provider.complete(model=model, system=system, messages=messages, tools=None)
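
With this change, the model used by the integration test can be overridden per run instead of being hard-coded. For example (the model name here is only an illustration; any model available to the local Ollama server works):

OLLAMA_MODEL=qwen2.5:0.5b uv run pytest tests/providers/openai/test_ollama.py -m integration
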
7 changes: 5 additions & 2 deletions tests/test_integration.py
@@ -1,3 +1,4 @@
import os
import pytest
from exchange.exchange import Exchange
from exchange.message import Message
@@ -9,7 +10,7 @@
too_long_chars = "x" * (2**20 + 1)

cases = [
(get_provider("ollama"), OLLAMA_MODEL),
(get_provider("ollama"), os.getenv("OLLAMA_MODEL", OLLAMA_MODEL)),
(get_provider("openai"), "gpt-4o-mini"),
(get_provider("databricks"), "databricks-meta-llama-3-70b-instruct"),
(get_provider("bedrock"), "anthropic.claude-3-5-sonnet-20240620-v1:0"),
@@ -46,7 +47,9 @@ def read_file(filename: str) -> str:
Read the contents of the file.

Args:
filename (str): The path to the file, which can be relative or absolute.
filename (str): The path to the file, which can be relative or
absolute. If it is a plain filename, it is assumed to be in the
current working directory.

Returns:
str: The contents of the file.