
Introduction

Documentation for the 3musketeers in the "Specializing Large Language Models for Telecom Networks by ITU AI/ML in 5G" challenge


Authors:

  • Tewodros Idris (teddyk251)

  • Alex Gichamba (zeroppl)

  • Brian Ebiyau (Atom007)

This documentation describes our solutions for both tracks in the "Specializing Large Language Models for Telecom Networks by ITU AI/ML in 5G" challenge.

A brief summary of each solution:

Phi-2

We develop a RAG pipeline based on ColBERT V2 (https://arxiv.org/abs/2004.12832 and https://arxiv.org/abs/2112.01488) and provide as many retrieved chunks as the context window can support.
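
To make the chunk-packing concrete, here is a minimal sketch (not our pipeline code): it greedily adds ranked chunks until a token budget is exhausted. The `retrieve` call and the budget split are illustrative assumptions; the actual retriever is described in the RAG section.

```python
# Minimal sketch of greedy context packing (illustrative, not our
# pipeline code). Chunks are assumed to be sorted by retrieval score,
# highest first. `retrieve` below is a hypothetical placeholder.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

def pack_context(chunks: list[str], token_budget: int) -> str:
    """Add ranked chunks until the token budget is exhausted."""
    packed, used = [], 0
    for chunk in chunks:
        n_tokens = len(tokenizer.encode(chunk))
        if used + n_tokens > token_budget:
            break
        packed.append(chunk)
        used += n_tokens
    return "\n\n".join(packed)

# Phi-2 has a 2048-token context window; reserve room for the question,
# options, abbreviations, and the generated answer, and fill the rest:
# chunks = retrieve(question)                    # hypothetical retriever
# context = pack_context(chunks, token_budget=1500)
```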

We also append the full forms of abbreviations that appear in the question and options.
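
As a rough illustration of this step, the sketch below scans the question and options for known abbreviations and returns their full forms. The three entries are a tiny sample; the full lookup table is covered in the Abbreviations section.

```python
# Sketch of the abbreviation-expansion step. The dictionary below is a
# tiny illustrative sample; the full lookup table is described in the
# Abbreviations section.
import re

ABBREVIATIONS = {
    "UE": "User Equipment",
    "gNB": "next generation Node B",
    "RLC": "Radio Link Control",
}

def expand_abbreviations(question: str, options: list[str]) -> str:
    """Return 'ABBR: full form' lines for abbreviations found in the text."""
    text = " ".join([question, *options])
    found = [a for a in ABBREVIATIONS if re.search(rf"\b{re.escape(a)}\b", text)]
    return "\n".join(f"{a}: {ABBREVIATIONS[a]}" for a in found)
```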

We finetune the LLM with context in the prompt, both to align the LLM's generations with our desired output format and to improve the "usable" context window.
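
The exact setup is described in the Finetuning (Phi-2) section; the sketch below only illustrates how a training example with retrieved context and expanded abbreviations might be assembled. The template and the single-letter answer format are assumptions.

```python
# Hypothetical sketch of a finetuning example with retrieved context.
# The template and the single-letter answer format are assumptions;
# see the Finetuning (Phi-2) section for the actual setup.
def build_example(context: str, abbreviations: str, question: str,
                  options: list[str], answer: str) -> dict:
    letters = "ABCDE"
    option_text = "\n".join(
        f"{letters[i]}. {opt}" for i, opt in enumerate(options)
    )
    prompt = (
        f"Context:\n{context}\n\n"
        f"Abbreviations:\n{abbreviations}\n\n"
        f"Question: {question}\n"
        f"Options:\n{option_text}\n"
        f"Answer: "
    )
    return {"prompt": prompt, "completion": answer}  # e.g. answer = "B"
```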

Falcon7B

We adopt the same RAG pipeline and abbreviation expansion as for Phi-2.

Following the challenge guidelines, we could not finetune this model. We find that it responds very poorly when the answer options are provided in the prompt, so we simply omit the options from the prompt.

We allow the LLM to freely generate a response conditioned on the question and context, but not the options. We then develop a scoring system that attempts to find the most likely option given the response.
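
As one simple way such a scorer could work, the sketch below embeds the free-form response and every option, then picks the option with the highest cosine similarity. The embedding model here is an assumption; the actual system is described in the Response Scoring (Falcon7B) section.

```python
# One simple way to map a free-form response to an option: embed the
# response and all options, then pick the highest cosine similarity.
# The embedding model is an assumption; the actual scorer is described
# in the Response Scoring (Falcon7B) section.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def most_likely_option(response: str, options: list[str]) -> int:
    """Return the index of the option closest to the generated response."""
    response_emb = embedder.encode(response, convert_to_tensor=True)
    option_embs = embedder.encode(options, convert_to_tensor=True)
    return int(util.cos_sim(response_emb, option_embs)[0].argmax())
```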

(Optional) Jump to reproducing results

If you'd like, you can quickly jump to reproducing results: REPRODUCING RESULTS
