
Saturday, March 30, 2024

Hugging Face - Part4 - Containerize your LLM app using Python, FastAPI, and Docker

In this exercise, our objective is to expose a Large Language Model (LLM) from Hugging Face through an API endpoint built with FastAPI. Additionally, we aim to package the whole application in a Docker container for portability and ease of deployment.

To achieve this, our project consists of several key components:

  • Large Language Model: Our application logic resides in model.py, where the model_pipeline function serves as the core engine behind our LLM interaction using LangChain. We've chosen the Mistral Instruct model from Hugging Face for this exercise.

  • API Endpoint Integration: We'll expose an API endpoint using FastAPI to interact with the LLM downloaded from Hugging Face. The main.py file uses the FastAPI framework to define the routes and endpoints. Specifically, the /ask endpoint invokes the model_pipeline function to query the Mistral Instruct model and generate a response (see the sketch after this list).

  • Containerization: Utilizing the Dockerfile, we containerize our FastAPI LLM application. This ensures that our application, along with its dependencies, can be easily packaged and deployed across various environments.
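
To make this structure concrete, here is a minimal sketch of how model.py and main.py could fit together, assuming a LangChain HuggingFacePipeline wrapper and the endpoints seen in the logs below. The actual code is in the GitHub repository linked below; parameter values and details of the wiring here are illustrative assumptions.

# model.py -- minimal sketch; the real implementation is in the repo linked below
from langchain_community.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"

# Load the model once at startup (this is what takes a few minutes in the pod logs)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs accelerate
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=100,       # assumed value
    return_full_text=False,
)
llm = HuggingFacePipeline(pipeline=pipe)

def model_pipeline(question: str) -> str:
    # Mistral Instruct expects the prompt wrapped in [INST] ... [/INST]
    prompt = PromptTemplate.from_template("[INST] {question} [/INST]")
    chain = prompt | llm
    return chain.invoke({"question": question})


# main.py -- minimal sketch of the FastAPI wiring
from fastapi import FastAPI
from pydantic import BaseModel
from model import model_pipeline

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.get("/")
def root():
    return "Welcome to FastAPI for your local LLM!"

@app.get("/healthz")
def healthz():
    return {"Status": "OK"}

@app.post("/ask")
def ask(prompt: Prompt):
    return {"response": model_pipeline(prompt.text)}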


You can access the Dockerfile, Python code, and other observations on my GitHub repository:

https://github.com/vineethac/huggingface/tree/main/5-containerize-llm-app
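
Assuming the Dockerfile from the repo above, building, pushing, and running the image locally would look something along these lines (the image name matches the one used in the kubectl run command below, and the container listens on port 5000 as per the Uvicorn logs):

docker build -t vineethac/fastapi-llm-app:latest .
docker push vineethac/fastapi-llm-app:latest
docker run --rm -p 5000:5000 vineethac/fastapi-llm-app:latest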

Deploy on Kubernetes as a pod

Deploying the application directly as a pod is not the preferred approach; this is just for quick testing purposes! In the next blog post we will see how to deploy it as a Kubernetes Deployment resource.

❯ KUBECONFIG=gckubeconfig k run hf-11 --image=vineethac/fastapi-llm-app:latest --image-pull-policy=Always
pod/hf-11 created
❯ KUBECONFIG=gckubeconfig kg po hf-11
NAME    READY   STATUS              RESTARTS   AGE
hf-11   0/1     ContainerCreating   0          2m23s
❯
❯ KUBECONFIG=gckubeconfig kg po hf-11
NAME    READY   STATUS    RESTARTS   AGE
hf-11   1/1     Running   0          26m
❯
❯ KUBECONFIG=gckubeconfig k logs hf-11 -f
Downloading shards: 100%|██████████| 3/3 [02:29<00:00, 49.67s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:03<00:00,  1.05s/it]
INFO:     Will watch for changes in these directories: ['/fastapi-llm-app']
INFO:     Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
INFO:     Started reloader process [7] using WatchFiles
Loading checkpoint shards: 100%|██████████| 3/3 [00:11<00:00,  3.88s/it]
INFO:     Started server process [25]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
2024-03-28 08:19:12 hf-11 watchfiles.main[7] INFO 3 changes detected
2024-03-28 08:19:48 hf-11 root[25] INFO User prompt: select head or tail randomly. strictly respond only in one word. no explanations needed.
2024-03-28 08:19:48 hf-11 root[25] INFO Model: mistralai/Mistral-7B-Instruct-v0.2
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
2024-03-28 08:19:54 hf-11 root[25] INFO LLM response:  Head.
2024-03-28 08:19:54 hf-11 root[25] INFO FastAPI response:  Head.
INFO:     127.0.0.1:53904 - "POST /ask HTTP/1.1" 200 OK
INFO:     127.0.0.1:55264 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:43342 - "GET /healthz HTTP/1.1" 200 OK

For a quick validation, I exec'ed into the pod and ran curl against the exposed APIs.

❯ KUBECONFIG=gckubeconfig k exec -it hf-11 -- bash
root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl -d '{"text":"select head or tail randomly. strictly respond only in one word. no explanations needed."}' -H "Content-Type: application/json" -X POST http://localhost:5000/ask
{"response":" Head."}root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl localhost:5000
"Welcome to FastAPI for your local LLM!"root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app# curl localhost:5000/healthz
{"Status":"OK"}root@hf-11:/fastapi-llm-app#
root@hf-11:/fastapi-llm-app#


You can also use the kubectl expose command to create a service for this pod, port-forward to it, and then curl it, as shown below.
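
For example, something along these lines (the service name and local port are arbitrary choices for illustration):

❯ KUBECONFIG=gckubeconfig k expose pod hf-11 --name=hf-11-svc --port=5000 --target-port=5000
❯ KUBECONFIG=gckubeconfig k port-forward svc/hf-11-svc 5000:5000
❯ curl localhost:5000/healthz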

Hope it was useful. Cheers!

Thursday, March 28, 2024

Generative AI and LLMs Blog Series

In this blog series, we will explore the fascinating world of Generative AI and Large Language Models (LLMs). We will delve into the latest advancements in AI technology, focusing particularly on LLMs, which have revolutionized various fields, including natural language processing and text generation.

Throughout this series, we will discuss LLM serving platforms such as Ollama and Hugging Face, providing insights into their capabilities, features, and applications. I will also guide you through the process of getting started with LLMs, from setting up your development/test environment to deploying these powerful models on Kubernetes clusters. Additionally, we'll demonstrate how to effectively prompt and interact with LLMs using frameworks like LangChain, empowering you to harness the full potential of these cutting-edge technologies.

Stay tuned for insightful articles and hands-on guides that will equip you with the knowledge and skills to unlock the transformative capabilities of LLMs. Let's explore the future of AI together!

Image credits: designer.microsoft.com/image-creator


Ollama

Part1 - Deploy Ollama on Kubernetes

Part2 - Prompt LLMs using Ollama, LangChain, and Python

Part3 - Web UI for Ollama to interact with LLMs

Part4 - Vision assistant using LLaVA


Hugging Face

Part1 - Getting started with Hugging Face

Part2 - Code generation with Code Llama Instruct

Part3 - Inference with Code Llama using LangChain

Part4 - Containerize your LLM app using Python, FastAPI, and Docker

Part5 - Deploy your LLM app on Kubernetes 

Part6 - LLM app observability <coming soon>


Friday, February 23, 2024

Hugging Face - Part3 - Inference with Code Llama using LangChain

In the field of natural language processing (NLP), Hugging Face is a key platform that provides many pre-trained models for different tasks. With Transformers, LangChain, and Python, developers can easily run Hugging Face's models on their own machines. LangChain offers a streamlined and user-friendly way to tap into the capabilities of pre-trained language models. In this blog post we focus on how to run inference with the Code Llama - Instruct model from Hugging Face locally using LangChain.


You can access the Python script in my GitHub repository:
https://github.com/vineethac/huggingface/tree/main/4-codellama_with_langchain


To initiate inference with Code Llama, developers start by specifying the desired model using its identifier, for example MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf". Transformers simplifies the process by providing a unified Python interface, allowing users to initialize the model and tokenizer with just a few lines of code.

Once the model and tokenizer are set up, developers can leverage LangChain's HuggingFacePipeline class to create a text generation pipeline. This pipeline, configured with parameters like max_new_tokens and repetition_penalty, becomes a powerful tool for local inference. By combining it with LangChain's PromptTemplate, developers can easily construct prompts and invoke the entire chain to generate responses. This streamlined process makes local inference with Code Llama straightforward, empowering developers to leverage Hugging Face's models for a wide range of natural language processing tasks in their Python applications.
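
Putting those pieces together, a minimal sketch might look like the following. The generation parameters, prompt template, and interactive loop are illustrative assumptions; the actual script is in the repository linked above.

from langchain_community.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

# Initialize the tokenizer and model from Hugging Face
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs accelerate

# Wrap a Transformers text-generation pipeline for use with LangChain
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,        # assumed value
    repetition_penalty=1.1,    # assumed value
    return_full_text=False,
)
llm = HuggingFacePipeline(pipeline=pipe)

# Build a prompt template and chain it with the LLM
prompt = PromptTemplate.from_template("[INST] {question} [/INST]")
chain = prompt | llm

while True:
    question = input("Ask codellama: ")
    if question.strip() == "/bye":
        break
    print(chain.invoke({"question": question}))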


Example

root@hf-3:/codellama# python3 codellama_langchain.py
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████| 749/749 [00:00<00:00, 3.57MB/s]
tokenizer.model: 100%|█████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 4.48MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 6.13MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████| 411/411 [00:00<00:00, 1.86MB/s]
config.json: 100%|███████████████████████████████████████████████████████████████████| 646/646 [00:00<00:00, 3.40MB/s]
model.safetensors.index.json: 100%|██████████████████████████████████████████████| 25.1k/25.1k [00:00<00:00, 68.2MB/s]
model-00001-of-00002.safetensors: 100%|██████████████████████████████████████████| 9.98G/9.98G [01:50<00:00, 90.0MB/s]
model-00002-of-00002.safetensors: 100%|██████████████████████████████████████████| 3.50G/3.50G [00:39<00:00, 89.5MB/s]
Downloading shards: 100%|███████████████████████████████████████████████████████████████| 2/2 [02:30<00:00, 75.16s/it]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.86s/it]
generation_config.json: 100%|█████████████████████████████████████████████████████████| 116/116 [00:00<00:00, 110kB/s]

Ask codellama: given two unsorted integer lists. merge the two lists, sort the merged list, and find median using python. consider the length of the merged list while finding the median value.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Here is a possible solution to the problem:

def merge_and_find_median(list1, list2):
    # Merge the two lists
    merged_list = list1 + list2

    # Sort the merged list
    merged_list.sort()

    # Find the median value
    if len(merged_list) % 2 == 0:
        # Even number of elements in the merged list
        median = (merged_list[len(merged_list) // 2 - 1] + merged_list[len(merged_list) // 2]) / 2
    else:
        # Odd number of elements in the merged list
        median = merged_list[len(merged_list) // 2]

    return median

Explanation:

* First, we merge the two lists by concatenating them.
* Then, we sort the merged list using the `sort()` method.
* Next, we check whether the length of the merged list is even or odd. If it's even, we take the average of the middle two elements of the list. If it's odd, we simply take the middle element as the median.
* Finally, we return the median value.

Note that this solution assumes that both input lists are sorted in ascending order. If they are not sorted, you may need to add additional code to sort them before merging and finding the median.</s>

Ask codellama: /bye
root@hf-3:/codellama#


Hope it was useful. Cheers!

Tuesday, February 20, 2024

Hugging Face - Part2 - Code generation with Code Llama - Instruct

Code Llama, an impressive publicly available machine learning model, is a specialised version of Llama 2 created by further training it on code-specific datasets. It is specifically designed to tackle coding challenges. It can generate both code and descriptive natural language about code, making it a versatile asset for developers. Common use cases include writing new functions and debugging existing code. It supports a wide range of popular programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.

Code Llama – Instruct is an advanced variation of Code Llama designed to accept natural language instructions as input and return the expected output. This makes the model more adept at understanding and fulfilling user requirements. The Meta AI team recommends using the Code Llama - Instruct variants whenever you intend to use Code Llama for code generation tasks.



In this blog post, I will guide you through the process of using the Code Llama - Instruct model from Hugging Face locally for code generation tasks. We will use the Python Transformers library for this. You can access the Python script in my GitHub repository:

https://github.com/vineethac/huggingface/tree/main/3-codellama-instruct
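
A minimal sketch of such a script is shown below; the exact prompt handling and generation parameters used in the repo may differ.

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "codellama/CodeLlama-7b-Instruct-hf"

# Downloaded on first use and cached under ~/.cache/huggingface/hub
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")  # needs accelerate

generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,    # assumed value
)

while True:
    user_input = input(f"Ask {MODEL_ID}: ")
    if user_input.strip() == "/bye":
        break
    # Code Llama - Instruct expects instructions wrapped in [INST] ... [/INST]
    result = generator(f"[INST] {user_input} [/INST]")
    print("Result:", result[0]["generated_text"])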

Example

root@hf-5:/# python3 codellama_prompt.py
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 749/749 [00:00<00:00, 3.44MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 4.12MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 9.76MB/s]
special_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 2.08MB/s]
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 646/646 [00:00<00:00, 3.51MB/s]
model.safetensors.index.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25.1k/25.1k [00:00<00:00, 47.9MB/s]
model-00001-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.98G/9.98G [02:02<00:00, 81.2MB/s]
model-00002-of-00002.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.50G/3.50G [00:45<00:00, 76.7MB/s]
Downloading shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [02:48<00:00, 84.38s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:20<00:00, 10.10s/it]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 116/116 [00:00<00:00, 444kB/s]


Ask codellama/CodeLlama-7b-Instruct-hf: reverse a list in python.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Result: <s>[INST] reverse a list in python. [/INST]  There are several ways to reverse a list in Python. Here are a few methods:

1. Using the `reversed()` function:

my_list = [1, 2, 3, 4, 5]
reversed_list = list(reversed(my_list))
print(reversed_list)  # [5, 4, 3, 2, 1]

2. Using slicing:

my_list = [1, 2, 3, 4, 5]
reversed_list = my_list[::-1]
print(reversed_list)  # [5, 4, 3, 2, 1]

3. Using the `reverse()` method:

my_list = [1, 2, 3, 4, 5]
my_list.reverse()
print(my_list)  # [5, 4, 3, 2, 1]

Note that the `reverse()` method reverses the list in place, meaning that it modifies the original list. The other two methods create a new list with the elements in reverse order.

Ask codellama/CodeLlama-7b-Instruct-hf: /bye
root@hf-5:/#

The first time you execute the Python script, the model is automatically downloaded to your local machine. On subsequent runs, the previously saved model is used to process user inputs.

root@hf-5:~# cd ~/.cache/huggingface/hub/
root@hf-5:~/.cache/huggingface/hub#
root@hf-5:~/.cache/huggingface/hub# ls | grep Instruct
models--codellama--CodeLlama-7b-Instruct-hf
root@hf-5:~/.cache/huggingface/hub#
root@hf-5:~/.cache/huggingface/hub# cd models--codellama--CodeLlama-7b-Instruct-hf
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf# ls
blobs  refs  snapshots
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf# cd blobs/
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs#
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs# ls -altrh
total 13G
-rw-r--r-- 1 root root  749 Feb 19 12:03 526f464cf83353c59f7c07b9e587498b47d67a1b
-rw-r--r-- 1 root root 489K Feb 19 12:03 45ccb9c8b6b561889acea59191d66986d314e7cbd6a78abc6e49b139ca91c1e6
-rw-r--r-- 1 root root 1.8M Feb 19 12:03 6b25321d89e21832a89e6273834eab0e4378a53b
-rw-r--r-- 1 root root  411 Feb 19 12:03 d85ba6cb6820b01226ef8bd40b46bb489041c6a8
-rw-r--r-- 1 root root  646 Feb 19 12:03 8fb4018bc8ceaddbaf7d3d238911a30fd5e9081a
-rw-r--r-- 1 root root  25K Feb 19 12:03 cd3b8fb46c4d5616e91520a7a7d9a5a75af759a8
-rw-r--r-- 1 root root 9.3G Feb 19 12:05 0f52c0eab2dafa0a13e8103a426b17137f7b053e9211334158d7bd7cc1148ceb
-rw-r--r-- 1 root root 3.3G Feb 19 12:06 9ddab1824225fbe405cea67c5d8d87666f1ab5c59ec89abdf2cacae9b555da75
-rw-r--r-- 1 root root  116 Feb 19 12:06 aa9aac2cbaa80cf25094e7d9a527bd1cab9f5321
drwxr-xr-x 6 root root 4.0K Feb 19 12:06 ..
drwxr-xr-x 2 root root 4.0K Feb 19 12:06 .
root@hf-5:~/.cache/huggingface/hub/models--codellama--CodeLlama-7b-Instruct-hf/blobs#
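
If you have a recent version of the huggingface_hub package installed, it also provides a CLI command that summarizes the same cache, listing each cached model and its size on disk:

huggingface-cli scan-cache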

I hope it was useful. Cheers!