Experiment with Chainlit AI interface with RAG on Upsun


Jan 20, 2025 - 17:14

What is Chainlit?

Chainlit is an open-source async Python framework that lets developers build scalable conversational AI and agentic applications. While providing the base framework, Chainlit gives you full flexibility to implement any external API, logic, or local model you want to run.

(Screenshot: the Chainlit test assistant)

In this tutorial we will implement RAG (Retrieval-Augmented Generation) in two ways:

  • The first leverages OpenAI Assistants with uploaded documents
  • The second uses llama_index with a local folder of documents
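
Whichever backend you choose, the RAG flow is the same: retrieve the document chunks most relevant to the question, then hand them to the model alongside the prompt. Here is a library-free sketch of that retrieval step — the scoring below is naive keyword overlap, purely for illustration; llama_index and OpenAI Assistants replace it with vector embeddings:

```python
# Naive RAG retrieval sketch: score chunks by keyword overlap with the
# question, keep the best ones, and build an augmented prompt.

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Chainlit is an async Python framework for conversational AI.",
    "Upsun mounts provide writable storage at deploy time.",
    "Bananas are rich in potassium.",
]
print(build_prompt("What is Chainlit?", docs))
```

The interesting part in both real implementations is only the `retrieve` step; the "augment the prompt with context" step stays this simple.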

Setting up Chainlit locally

Virtualenv

Let's start by creating our virtualenv:

mkdir chainlit && cd chainlit
python3 -m venv venv
source venv/bin/activate

Install dependencies

Now add the dependencies and freeze them:

pip install chainlit
pip install llama_index # Useful only for case #2
pip install openai
pip freeze > requirements.txt

Test Chainlit

Let's start chainlit:

chainlit hello

You should now see a placeholder app at http://localhost:8000/

(Screenshot: the Chainlit demo)

Let's deploy it on Upsun

Init the git repository

git init .

Don't forget to add a .gitignore file; some of these folders will be used later on.

.env
database/**
data/**
storage/**
.chainlit
venv
__pycache__

Create an Upsun project

upsun project:create # follow the prompts

The Upsun CLI will automatically set the upsun remote on your local git repository.

Let's add the configuration

Here is an example configuration to run Chainlit:

applications:
  chainlit:
    source:
      root: "/"

    type: "python:3.11"

    mounts:
      "/database":
        source: "storage"
        source_path: "database"
      ".files":
        source: "storage"
        source_path: "files"
      "__pycache__":
        source: "storage"
        source_path: "pycache"
      ".chainlit":
        source: "storage"
        source_path: ".chainlit"

    web:
      commands:
        start: "chainlit run app.py --port $PORT --host 0.0.0.0"
      upstream:
        socket_family: tcp
      locations:
        "/":
          passthru: true
        "/public":
          passthru: true

    build:
      flavor: none

    hooks:
      build: |
        set -eux
        pip install -r requirements.txt
      deploy: |
        set -eux
      # post_deploy: |

routes:
  "https://{default}/":
    type: upstream
    upstream: "chainlit:http"
  "https://www.{default}":
    type: redirect
    to: "https://{default}/"

Nothing out of the ordinary there! We install all dependencies in the build hook, then start the app with chainlit directly, specifying the port it should run on.
