Development of a tool that uses an LLM to support annotations in LED.

cfestus/llm4led

component-id: llm4led
type: Software
name: Curation of documentary evidence, experiments with LED/GPT-4
description: This repository contains Python code for scraping data from LED (Listening Experience Database). The code processes the obtained data and uses the GPT-4 API to generate annotations from the submitted listening evidence.
work-package: WP4
pilot: CHILD
project: polifonia-project
release-date: 24/04/2024
release-number: v1.0
licence: Apache-2.0
copyright: Copyright (c) 2024 The Open University
contributors: Chukwudi "Festus" Uwasomba <https://github.com/cfestus>
related-components: reuses documentary-evidence-benchmark

Curation of documentary evidence, experiments with LED/GPT-4

Description

This repository contains Python code for scraping data from LED (Listening Experience Database). The code processes the obtained data and uses the GPT-4 API to generate annotations from the submitted listening evidence.
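The annotation step described above can be sketched as follows. This is an illustrative outline only, not the repository's actual code: the function names (`build_annotation_prompt`, `annotate`) and the prompt wording are hypothetical, and the GPT-4 call uses the standard `openai` Python client, which requires an API key in the environment.

```python
def build_annotation_prompt(evidence: str) -> str:
    """Wrap a LED listening-experience excerpt in an instruction for GPT-4.

    The instruction text here is a hypothetical example, not the
    repository's actual prompt.
    """
    return (
        "You are annotating listening experiences from the Listening "
        "Experience Database (LED).\n"
        "Identify the listener, the music heard, and the listening "
        "context in the following evidence:\n\n"
        f"{evidence}"
    )


def annotate(evidence: str) -> str:
    """Send the prompt to GPT-4 and return the generated annotation."""
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": build_annotation_prompt(evidence)}
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Print the prompt only; calling annotate() needs a valid API key.
    print(build_annotation_prompt("I heard Beethoven's Fifth at the Proms."))
```

The prompt-building and API-calling concerns are kept separate so the prompt can be inspected and tested without spending API credits.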

Prerequisites

  • Python 3.x
  • OpenAI API key

To run the code

  1. Clone the Repository:
    git clone <repository-url>
    cd <repository-directory>
    
  2. Install Dependencies:
    pip install -r requirements.txt
    
  3. API Configuration: Create a .env file in the root directory and add your OpenAI API key (the key must have GPT-4 access).
    OPENAI_API_KEY="your_api_key_here"
    
  4. Usage: Once you have completed the setup, run the main script to generate annotations.
    python main.py
    
