Saturday, March 30, 2024

Hugging Face - Part4 - Containerize your LLM app using Python, FastAPI, and Docker

In this exercise, our objective is to expose a Large Language Model (LLM) from Hugging Face through an API endpoint built with FastAPI. Additionally, we aim to package the whole application in a Docker container for portability and ease of deployment.

To achieve this, our project consists of several key components:

  • Large Language Model: Our application logic resides in model.py, where the model_pipeline function serves as the core engine behind our LLM interaction using LangChain. We've chosen the Mistral Instruct model from Hugging Face for this exercise; a minimal sketch of this module is shown after this list.

  • API Endpoint Integration: We'll expose an API endpoint using FastAPI to interact with the LLM downloaded from Hugging Face. The main.py file uses the FastAPI framework to define routes and endpoints; specifically, the /ask endpoint invokes the model_pipeline function to query the Mistral Instruct model and generate a response. A sketch of this wiring also follows the list.

  • Containerization: Utilizing the Dockerfile, we containerize our FastAPI LLM application. This ensures that our application, along with its dependencies, can be easily packaged and deployed across various environments.
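
To give a rough idea of what model.py looks like, here is a minimal sketch of a model_pipeline helper built around LangChain's HuggingFacePipeline wrapper. The exact code is in the GitHub repository linked below; the parameter values here are illustrative, and depending on your LangChain version the import may come from langchain.llms instead of langchain_community.llms.

# model.py -- minimal sketch, not the exact code from the repo
from langchain_community.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # model name seen in the pod logs later in this post

# Load the model once at startup (this is what takes a few minutes in the pod logs below).
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # device_map needs accelerate

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=100,  # illustrative value
)
llm = HuggingFacePipeline(pipeline=pipe)


def model_pipeline(text: str) -> str:
    """Send the user prompt through the LangChain-wrapped Hugging Face pipeline."""
    return llm.invoke(text)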

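And here is a minimal sketch of how main.py could wire up the FastAPI routes. The endpoint paths and the request/response fields ({"text": ...} in, {"response": ...} out) match the curl examples later in this post; everything else is illustrative.

# main.py -- minimal sketch of the FastAPI wiring, not the exact code from the repo
from fastapi import FastAPI
from pydantic import BaseModel

from model import model_pipeline  # assumes model.py exposes model_pipeline()

app = FastAPI()


class Prompt(BaseModel):
    text: str  # matches the {"text": "..."} payload used in the curl examples below


@app.get("/")
def root():
    return "Welcome to FastAPI for your local LLM!"


@app.get("/healthz")
def healthz():
    return {"Status": "OK"}


@app.post("/ask")
def ask(prompt: Prompt):
    response = model_pipeline(prompt.text)
    return {"response": response}

# The container logs below suggest the app is served with something like:
#   uvicorn main:app --host 0.0.0.0 --port 5000 --reload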

You can access the Dockerfile, Python code, and other observations on my GitHub repository:

https://github.com/vineethac/huggingface/tree/main/5-containerize-llm-app

Deploy on Kubernetes as a pod

Deploying directly as a bare pod is not the preferred approach; this is just for quick testing purposes. In the next blog post we will see how to deploy this as a Kubernetes Deployment resource.

❯ KUBECONFIG=gckubeconfig k run hf-11 --image=vineethac/fastapi-llm-app:latest --image-pull-policy=Always
pod/hf-11 created
❯ KUBECONFIG=gckubeconfig kg po hf-11
NAME    READY   STATUS              RESTARTS   AGE
hf-11   0/1     ContainerCreating   0          2m23s
❯
❯ KUBECONFIG=gckubeconfig kg po hf-11
NAME    READY   STATUS    RESTARTS   AGE
hf-11   1/1     Running   0          26m
❯
❯ KUBECONFIG=gckubeconfig k logs hf-11 -f
Downloading shards: 100%|██████████| 3/3 [02:29<00:00, 49.67s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:03<00:00,  1.05s/it]
INFO:     Will watch for changes in these directories: ['/fastapi-llm-app']
INFO:     Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
INFO:     Started reloader process [7] using WatchFiles
Loading checkpoint shards: 100%|██████████| 3/3 [00:11<00:00,  3.88s/it]
INFO:     Started server process [25]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
2024-03-28 08:19:12 hf-11 watchfiles.main[7] INFO 3 changes detected
2024-03-28 08:19:48 hf-11 root[25] INFO User prompt: select head or tail randomly. strictly respond only in one word. no explanations needed.
2024-03-28 08:19:48 hf-11 root[25] INFO Model: mistralai/Mistral-7B-Instruct-v0.2
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
2024-03-28 08:19:54 hf-11 root[25] INFO LLM response:  Head.
2024-03-28 08:19:54 hf-11 root[25] INFO FastAPI response:  Head.
INFO:     127.0.0.1:53904 - "POST /ask HTTP/1.1" 200 OK
INFO:     127.0.0.1:55264 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:43342 - "GET /healthz HTTP/1.1" 200 OK

For a quick validation, I exec'd into the pod and ran curl against the exposed API endpoints.

❯ KUBECONFIG=gckubeconfig k exec -it hf-11 -- bash
root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl -d '{"text":"select head or tail randomly. strictly respond only in one word. no explanations needed."}' -H "Content-Type: application/json" -X POST http://localhost:5000/ask
{"response":" Head."}root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl localhost:5000
"Welcome to FastAPI for your local LLM!"root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl localhost:5000/healthz
{"Status":"OK"}root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app#


You can also use the kubectl expose command to create a Service for this pod, port forward to it, and then curl it.
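
If you prefer Python over curl for this check, here is a minimal sketch that calls the /ask endpoint, assuming the service (or a port forward) makes it reachable on localhost:5000 and the requests package is installed:

# ask_demo.py -- illustrative client for the /ask endpoint
import requests

payload = {"text": "select head or tail randomly. strictly respond only in one word. no explanations needed."}

resp = requests.post("http://localhost:5000/ask", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # e.g. " Head."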

Hope it was useful. Cheers!

Thursday, March 28, 2024

Generative AI and LLMs Blog Series

In this blog series we will explore the fascinating world of Generative AI and Large Language Models (LLMs). We delve into the latest advancements in AI technology, focusing particularly on LLMs, which have revolutionized various fields, including natural language processing and text generation.

Throughout this series, we will discuss LLM serving platforms such as Ollama and Hugging Face, providing insights into their capabilities, features, and applications. I will also guide you through the process of getting started with LLMs, from setting up your development/test environment to deploying these powerful models on Kubernetes clusters. Additionally, we'll demonstrate how to effectively prompt and interact with LLMs using frameworks like LangChain, empowering you to harness the full potential of these cutting-edge technologies.

Stay tuned for insightful articles and hands-on guides that will equip you with the knowledge and skills to unlock the transformative capabilities of LLMs. Let's explore the future of AI together!

Image credits: designer.microsoft.com/image-creator


Ollama

Part1 - Deploy Ollama on Kubernetes

Part2 - Prompt LLMs using Ollama, LangChain, and Python

Part3 - Web UI for Ollama to interact with LLMs

Part4 - Vision assistant using LLaVA


Hugging Face

Part1 - Getting started with Hugging Face

Part2 - Code generation with Code Llama Instruct

Part3 - Inference with Code Llama using LangChain

Part4 - Containerize your LLM app using Python, FastAPI, and Docker

Part5 - Deploy your LLM app on Kubernetes <coming soon>


Friday, February 23, 2024

Hugging Face - Part3 - Inference with Code Llama using LangChain

In the field of natural language processing (NLP), Hugging Face is a key platform that provides many pre-trained models for different tasks. With Transformers, LangChain, and Python, developers can easily run Hugging Face's models on their own machines for local inference. LangChain offers a streamlined, user-friendly way to tap into the capabilities of pre-trained language models. In this blog post we focus on how to run inference with the Code Llama - Instruct model from Hugging Face locally using LangChain.


You can access the Python script in my GitHub repository:
https://github.com/vineethac/huggingface/tree/main/4-codellama_with_langchain


To initiate inference with Code Llama, developers start by specifying the desired model using its identifier, such as MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf". Transformers simplifies the process by providing a unified Python interface for initializing the model and tokenizer.

Once the model and tokenizer are set up, developers can leverage LangChain's HuggingFacePipeline class to create a text generation pipeline. This pipeline, defined with parameters like max_new_tokens and repetition_penalty, becomes a powerful tool for local inference. By combining this pipeline with LangChain's PromptTemplate, developers can easily construct prompts and invoke the entire chain to generate responses. This streamlined process facilitates local inference with Code Llama, empowering developers to leverage Hugging Face's models for a wide range of natural language processing tasks in their Python applications.
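
Putting the above together, here is a condensed sketch of that flow. The full script is in the GitHub repository linked above; the parameter values are illustrative, and depending on your LangChain version the imports may come from langchain.prompts and langchain.llms instead.

# Condensed sketch of inference with Code Llama - Instruct via LangChain (see repo for the full script)
from langchain_community.llms import HuggingFacePipeline
from langchain_core.prompts import PromptTemplate
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # device_map needs accelerate

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,       # illustrative values
    repetition_penalty=1.1,
)
llm = HuggingFacePipeline(pipeline=pipe)

# Code Llama - Instruct expects the [INST] ... [/INST] instruction format.
prompt = PromptTemplate.from_template("[INST] {question} [/INST]")
chain = prompt | llm

print(chain.invoke({"question": "write a python function to reverse a list."}))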


Example

root@hf-3:/codellama# python3 codellama_langchain.py
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████| 749/749 [00:00<00:00, 3.57MB/s]
tokenizer.model: 100%|█████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 4.48MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 6.13MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████| 411/411 [00:00<00:00, 1.86MB/s]
config.json: 100%|███████████████████████████████████████████████████████████████████| 646/646 [00:00<00:00, 3.40MB/s]
model.safetensors.index.json: 100%|██████████████████████████████████████████████| 25.1k/25.1k [00:00<00:00, 68.2MB/s]
model-00001-of-00002.safetensors: 100%|██████████████████████████████████████████| 9.98G/9.98G [01:50<00:00, 90.0MB/s]
model-00002-of-00002.safetensors: 100%|██████████████████████████████████████████| 3.50G/3.50G [00:39<00:00, 89.5MB/s]
Downloading shards: 100%|███████████████████████████████████████████████████████████████| 2/2 [02:30<00:00, 75.16s/it]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.86s/it]
generation_config.json: 100%|█████████████████████████████████████████████████████████| 116/116 [00:00<00:00, 110kB/s]

Ask codellama: given two unsorted integer lists. merge the two lists, sort the merged list, and find median using python. consider the length of the merged list while finding the median value.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Here is a possible solution to the problem:

def merge_and_find_median(list1, list2):
    # Merge the two lists
    merged_list = list1 + list2

    # Sort the merged list
    merged_list.sort()

    # Find the median value
    if len(merged_list) % 2 == 0:
        # Even number of elements in the merged list
        median = (merged_list[len(merged_list) // 2 - 1] + merged_list[len(merged_list) // 2]) / 2
    else:
        # Odd number of elements in the merged list
        median = merged_list[len(merged_list) // 2]

    return median

Explanation:

* First, we merge the two lists by concatenating them.
* Then, we sort the merged list using the `sort()` method.
* Next, we check whether the length of the merged list is even or odd. If it's even, we take the average of the middle two elements of the list. If it's odd, we simply take the middle element as the median.
* Finally, we return the median value.

Note that this solution assumes that both input lists are sorted in ascending order. If they are not sorted, you may need to add additional code to sort them before merging and finding the median.</s>

Ask codellama: /bye
root@hf-3:/codellama#


Hope it was useful. Cheers!

Tuesday, February 20, 2024

Hugging Face - Part2 - Code generation with Code Llama - Instruct

Code Llama, an impressive publicly available machine learning model, is a specialised version of Llama 2 that was created by further training Llama 2 on code-specific datasets. It is specifically designed to tackle coding challenges. It can generate both code and descriptive natural language about code, making it a versatile asset for developers. Some common use cases include writing new functions or even debugging existing code. It supports a wide range of popular programming languages, including Python, C++, Java, PHP, Typescript (Javascript), C#, and Bash.

Code Llama - Instruct is an advanced variation of Code Llama, designed to accept natural language instructions as input and return the expected output. This makes the model more adept at understanding and fulfilling user requirements. The Meta AI team recommends using the Code Llama - Instruct variants whenever you intend to use Code Llama for code generation tasks.



In this blog post, I will guide you through the process of employing the Code Llama - Instruct model from Hugging Face locally for code generation tasks. We will be utilizing the Python Transformers library for this. You can access the Python script in my GitHub repository:

https://github.com/vineethac/huggingface/tree/main/3-codellama-instruct
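
Before looking at the session below, here is a minimal sketch of the kind of prompt loop a script like codellama_prompt.py might implement. The model ID, the [INST] prompt format, the "Ask ..." prompt, and the "/bye" exit keyword are taken from the output below; the rest is illustrative.

# Minimal sketch of an interactive Code Llama - Instruct prompt loop (illustrative)
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # device_map needs accelerate

while True:
    question = input(f"Ask {MODEL_ID}: ")
    if question.strip() == "/bye":
        break

    # Code Llama - Instruct expects the [INST] ... [/INST] instruction format.
    inputs = tokenizer(f"[INST] {question} [/INST]", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    print("Result:", tokenizer.decode(outputs[0]))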

Example

root@hf-5:/# python3 codellama_prompt.py
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 749/749 [00:00<00:00, 3.44MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 4.12MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 9.76MB/s]
special_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 2.08MB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 646/646 [00:00<00:00, 3.51MB/s]
model.safetensors.index.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25.1k/25.1k [00:00<00:00, 47.9MB/s]
model-00001-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.98G/9.98G [02:02<00:00, 81.2MB/s]
model-00002-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.50G/3.50G [00:45<00:00, 76.7MB/s]
Downloading shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [02:48<00:00, 84.38s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:20<00:00, 10.10s/it]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 116/116 [00:00<00:00, 444kB/s]


Ask codellama/CodeLlama-7b-Instruct-hf: reverse a list in python.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Result: <s>[INST] reverse a list in python. [/INST]  There are several ways to reverse a list in Python. Here are a few methods:

1. Using the `reversed()` function:

my_list = [1, 2, 3, 4, 5]
reversed_list = list(reversed(my_list))
print(reversed_list)  # [5, 4, 3, 2, 1]

2. Using slicing:

my_list = [1, 2, 3, 4, 5]
reversed_list = my_list[::-1]
print(reversed_list)  # [5, 4, 3, 2, 1]

3. Using the `reverse()` method:

my_list = [1, 2, 3, 4, 5]
my_list.reverse()
print(my_list)  # [5, 4, 3, 2, 1]

Note that the `reverse()` method reverses the list in place, meaning that it modifies the original list. The other two methods create a new list with the elements in reverse order.

Ask codellama/CodeLlama-7b-Instruct-hf: /bye
root@hf-5:/#

The first time you execute the Python script, the model is automatically downloaded to your local machine. On subsequent runs, the previously downloaded model is used to process user inputs.

root@hf-5:~# cd ~/.cache/huggingface/hub/
root@hf-5:~/.cache/huggingface/hub#
root@hf-5:~/.cache/huggingface/hub# ls | grep Instruct
models--codellama--CodeLlama-7b-Instruct-hf
root@hf-5:~/.cache/huggingface/hub#
root@hf-5:~/.cache/huggingface/hub# cd models--codellama--CodeLlama-7b-Instruct-hf
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf# ls
blobs  refs  snapshots
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf# cd blobs/
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs#
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs# ls -altrh
total 13G
-rw-r--r-- 1 root root  749 Feb 19 12:03 526f464cf83353c59f7c07b9e587498b47d67a1b
-rw-r--r-- 1 root root 489K Feb 19 12:03 45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
-rw-r--r-- 1 root root 1.8M Feb 19 12:03 6b25321d89e21832a89e6273834eab0e4378a53b
-rw-r--r-- 1 root root  411 Feb 19 12:03 d85ba6cb6820b01226ef8bd40b46bb489041c6a8
-rw-r--r-- 1 root root  646 Feb 19 12:03 8fb4018bc8ceaddbaf7d3d238911a30fd5e9081a
-rw-r--r-- 1 root root  25K Feb 19 12:03 cd3b8fb46c4d5616e91520a7a7d9a5a75af759a8
-rw-r--r-- 1 root root 9.3G Feb 19 12:05 0f52c0eab2dafa0a13e8103a426b17137f7b053e9211334158d7bd7cc1148ceb
-rw-r--r-- 1 root root 3.3G Feb 19 12:06 9ddab1824225fbe405cea67c5d8d87666f1ab5c59ec89abdf2cacae9b555da75
-rw-r--r-- 1 root root  116 Feb 19 12:06 aa9aac2cbaa80cf25094e7d9a527bd1cab9f5321
drwxr-xr-x 6 root root 4.0K Feb 19 12:06 ..
drwxr-xr-x 2 root root 4.0K Feb 19 12:06 .
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs#
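
If you prefer to check this from Python instead of walking the directory tree, the huggingface_hub library ships a cache scanner; a minimal sketch, assuming a reasonably recent version of huggingface_hub is installed:

# Inspect the local Hugging Face model cache from Python (illustrative)
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()  # scans ~/.cache/huggingface/hub by default
for repo in cache_info.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.1f} GB")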

I hope it was useful. Cheers!

Monday, February 19, 2024

Hugging Face - Part1 - Getting started

This blog series will help you get started with Hugging Face, including:

  • Downloading and using Hugging Face models locally via the Python Transformers library.
  • Constructing an API for your LLM application using FastAPI.
  • Containerizing your project with Docker.
  • Deploying and running your containerized LLM application on a Kubernetes cluster.


An overview of Hugging Face, the types of language models, and the Transformers library is given in my GitHub repo: https://github.com/vineethac/huggingface/tree/main

Here are some examples of running language models from Hugging Face locally using the pipeline function from the Transformers library:

question-answering

Model used: distilbert-base-cased-distilled-squad

6-question-answering.py

'''
Question answering from a given context.
'''

from transformers import pipeline

question_answerer = pipeline(task="question-answering", model="distilbert-base-cased-distilled-squad")
output = question_answerer(
    question="What work I do?",
    context="My name is Vineeth and I work as a Site Reliability Engineer at VMware in Bangalore, India",
)

print(output)


root@hf-2:/transformers-course# python3 6-question-answering.py
{'score': 0.9214025139808655, 'start': 35, 'end': 60, 'answer': 'Site Reliability Engineer'}
root@hf-2:/transformers-course#


translation

Model used: Helsinki-NLP/opus-mt-fr-en

8-translation.py

'''
Translate from fr to en.
'''

from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
output = translator("Ce cours est produit par Hugging Face.")

print(output)


root@hf-2:/transformers-course# python3 8-translation.py
[{'translation_text': 'This course is produced by Hugging Face.'}]
root@hf-2:/transformers-course#


More details and examples are given in my GitHub repo:

https://github.com/vineethac/huggingface/tree/main/1-examples



Hope it was useful. Cheers!


Thursday, February 1, 2024

Ollama - Part4 - Vision assistant using LLaVA

In this exercise, we will interact with LLaVA, an end-to-end trained large multimodal model and vision assistant. We will use the Ollama REST API to prompt the model from Python.

Full project in my GitHub

https://github.com/vineethac/Ollama/tree/main/ollama_vision_assistant


LLaVA, being a large multimodal model and vision assistant, can be utilized for various tasks. Here are a couple of use cases:

  • Image Description Generation

Input: Provide LLaVA with an image.
Use Case: LLaVA can generate descriptive text or captions for the content of the image. This is particularly useful for automating image cataloging or enhancing accessibility for visually impaired users.

  • Question-Answering on Text and Image

Input: Ask LLaVA a question related to a given text or show it an image.
Use Case: LLaVA can comprehend the context and provide relevant answers. For instance, you could ask about details in a picture or seek information from a paragraph, and LLaVA will attempt to answer accordingly.

These are just a few examples, and the versatility of LLaVA allows for exploration across a wide range of multimodal tasks and applications.
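
To show how such interactions work under the hood, here is a minimal sketch of how a script like the query_image.py used below might call the Ollama REST API. The --path and --prompt flags mirror the invocation shown below, and the payload follows Ollama's /api/generate endpoint for multimodal models; the host, port, and other details are assumptions.

# query_image.py -- sketch: prompt the llava model via the Ollama REST API (illustrative)
import argparse
import base64

import requests

parser = argparse.ArgumentParser()
parser.add_argument("--path", required=True, help="path to the image file")
parser.add_argument("--prompt", required=True, help="question to ask about the image")
args = parser.parse_args()

# Ollama's /api/generate endpoint accepts base64-encoded images for multimodal models like llava.
with open(args.path, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:11434/api/generate",  # assumes Ollama is reachable locally on its default port
    json={
        "model": "llava",
        "prompt": args.prompt,
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])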

Sample interaction with LLaVA model


Image


Image credits: shutterstock

Prompt

python3 query_image.py --path=images/img1.jpg --prompt="describe the picture 
and what are the essentials that one need to carry generally while going these 
kind of places?"

Response
{
    "model": "llava",
    "created_at": "2024-01-23T17:41:27.771729767Z",
    "response": " The image shows a man riding his bicycle on a country road, surrounded by 
    beautiful scenery and mountains. He appears to be enjoying the ride as he navigates 
    through the countryside. \n\nWhile cycling in such environments, an essential item one 
    would need to carry is a water bottle or hydration pack, to ensure they stay well-hydrated 
    during the journey. In addition, it's important to have a map or GPS device to navigate 
    through potentially less familiar routes and avoid getting lost. Other useful items for 
    cyclists may include a multi-tool, first aid kit, bike lock, snacks, spare clothes, 
    and a small portable camping stove if planning an overnight stay in the wilderness.",

Hope it was useful. Cheers!

Friday, January 26, 2024

Ollama - Part3 - Web UI for Ollama to interact with LLMs

In the previous blog posts, we covered the deployment of Ollama on a Kubernetes cluster and demonstrated how to prompt Large Language Models (LLMs) using LangChain and Python. Now we will delve into deploying a web user interface (UI) for Ollama on a Kubernetes cluster. This provides a ChatGPT-like experience when engaging with the LLMs.

Full project in my GitHub

https://github.com/vineethac/Ollama/tree/main/ollama_webui


The GitHub repository referenced above details all the steps required to deploy the Ollama web UI. The following diagram outlines the various components and services that interact with each other as part of this system:


For detailed information on deploying Prometheus, Grafana, and Loki on a Kubernetes cluster, please refer to this blog post.

A sample interaction with the mistral model using the web UI is given below.


Hope it was useful. Cheers!