Friday, January 26, 2024

Ollama - Part3 - Web UI for Ollama to interact with LLMs

In the previous blog posts, we covered deploying Ollama on a Kubernetes cluster and demonstrated how to prompt Large Language Models (LLMs) using LangChain and Python. Now we will delve into deploying a web user interface (UI) for Ollama on a Kubernetes cluster. This provides a ChatGPT-like experience when engaging with the LLMs.

Full project in my GitHub

https://github.com/vineethac/Ollama/tree/main/ollama_webui


The above referenced GitHub repository details all the necessary steps required to deploy the Ollama web UI. The following diagram outlines the various components and services that interact with each other as part of this entire system:


For detailed information on deploying Prometheus, Grafana, and Loki on a Kubernetes cluster, please refer to this blog post.
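
To give a rough idea of what deploying the web UI involves, here is a minimal sketch of a Deployment manifest. The image reference, port, and environment variable below are assumptions for illustration only; refer to the GitHub repository above for the actual manifests.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-webui
  namespace: ollama-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama-webui
  template:
    metadata:
      labels:
        app: ollama-webui
    spec:
      containers:
      - name: ollama-webui
        image: ghcr.io/ollama-webui/ollama-webui:main    # hypothetical image reference
        ports:
        - containerPort: 8080
        env:
        - name: OLLAMA_API_BASE_URL    # hypothetical variable name; check the web UI project docs
          value: "http://ollama.ollama.svc.cluster.local:11434/api"

A Service and an Ingress resource would then expose this Deployment to users outside the cluster.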

A sample interaction with the mistral model using the web UI is given below.


Hope it was useful. Cheers!

Thursday, January 25, 2024

Ollama - Part2 - Prompt Large Language Models (LLMs) using Ollama, LangChain and Python


In this exercise we will learn to interact with the LLMs using Ollama, LangChain, and Python.

Full project in my GitHub

https://github.com/vineethac/Ollama/tree/main/ollama_langchain


Import the necessary modules from the LangChain library and Python's argparse module

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Ollama
import argparse

Argument parsing

parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default="llama2")

args = parser.parse_args()
model = args.model

Initialize Ollama

llm = Ollama(
    model=model,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    base_url="http://ollama:11434",
)

Interactive loop

while True:
    print(f"Model: {model}")
    prompt = input("Ask me anything: ")

    if prompt == "/bye":
        break

    llm(prompt)
    print("\n \n")


In summary, this script sets up a simple command-line interface for interacting with a language model served by Ollama. It takes user prompts, sends them to the model for processing, and streams the responses to standard output. The loop continues until the user enters "/bye" to exit.
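
For example, assuming the script is saved as interact.py (a hypothetical filename), you could run it against the mistral model like this:

❯ python3 interact.py --model mistral
Model: mistral
Ask me anything: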

Hope it was useful. Cheers!

Monday, January 15, 2024

Ollama - Part1 - Deploy Ollama on Kubernetes

Docker published the GenAI stack around October 2023; it consists of large language models (LLMs) from Ollama, vector and graph databases from Neo4j, and the LangChain framework. This stack gives developers the resources they need to kick-start new applications using generative AI. Ollama can be used to deploy and run LLMs locally. In this exercise we will deploy Ollama to a Kubernetes cluster and prompt it.

In my case, I am using a Tanzu Kubernetes Cluster (TKC) running on the vSphere with Tanzu 7u3 platform, powered by Dell PowerEdge R640 servers. The TKC nodes use the best-effort-2xlarge vmclass with 8 CPUs and 64 Gi of memory. Note that I am running this on a regular Kubernetes cluster without GPUs. If you have GPUs, additional configuration steps might be required.
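
Here is a minimal sketch of what the Ollama Deployment and Service manifests could look like. The namespace, labels, and image tag are assumptions for illustration; adjust them for your environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
      - name: ollama
        image: ollama/ollama:latest    # pin a specific tag for reproducible deployments
        ports:
        - containerPort: 11434    # default Ollama API port
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    app: ollama
  ports:
  - port: 11434
    targetPort: 11434

Once the pod is running, you can pull and prompt a model from inside the container, for example:

❯ kubectl exec -it deploy/ollama -n ollama -- ollama run llama2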



Hope it was useful. Cheers!

Saturday, December 30, 2023

GitOps using Argo CD - Part2 - Mini project

In the previous blog post, we discussed deploying Argo CD on a Kubernetes cluster and explored the fundamentals of application management. This time, we'll leverage Argo CD to deploy the applications from our Kubernetes mini project.

Full project in my GitHub

https://github.com/vineethac/Kubernetes/tree/main/gitops-argocd


Following are the different components of the project that will be deployed onto a Kubernetes cluster using the Argo CD Application resource:
  1. Ingress controller
  2. Prometheus stack
  3. FastAPI web app
  4. FastAPI service monitor
  5. Loki stack


Deploy each of these components by applying the corresponding YAML manifest, following the steps outlined in the GitHub repository mentioned above. After all components are deployed successfully, you can observe them in the Argo CD web UI, as illustrated below.
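
For reference, an Argo CD Application resource for one of these components could look roughly like the following. The application name, repo path, and destination namespace are illustrative assumptions; the actual manifests are in the repository.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fastapi-webapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/vineethac/Kubernetes.git
    path: gitops-argocd/fastapi-webapp    # hypothetical path within the repo
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: fastapi-webapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true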


Hope it was useful. Cheers!

Sunday, December 3, 2023

Kubernetes mini project

In this mini project, we are going to learn the following:

  • Deploy a simple Python based web application on a Kubernetes cluster.
  • We will use Helm to deploy this app.
  • This web app uses FastAPI and exposes some metrics using the Prometheus Python client (a minimal sketch follows this list).
  • To store and visualize these metrics we will deploy Prometheus and Grafana in the K8s cluster.
  • We will also deploy and use an ingress controller for exposing the web app, Prometheus, and Grafana to external users.
  • For logging we will deploy and use Grafana Loki stack.
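
Here is a minimal sketch of a FastAPI app instrumented with the Prometheus Python client. The metric name and endpoint are illustrative assumptions; the actual app is in the project repository.

from fastapi import FastAPI
from prometheus_client import Counter, make_asgi_app

app = FastAPI()

# Hypothetical counter; the actual app's metrics may differ.
REQUEST_COUNT = Counter("webapp_requests_total", "Total number of requests served")

@app.get("/")
def root():
    REQUEST_COUNT.inc()
    return {"message": "Hello from FastAPI"}

# Expose Prometheus metrics at /metrics.
app.mount("/metrics", make_asgi_app())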


Full project in my GitHub

High-level steps to complete this project

Step1: Write the Python app.

Step2: Create the Dockerfile for the app.

Step3: Create the container image.

Step4: Push the container image to an image registry like Docker Hub.

Step5: Get access to a K8s cluster.

Step6: Deploy an ingress controller.

Step7: Create the Helm chart for your app and deploy it to the K8s cluster.

Step8: Deploy Prometheus stack on the K8s cluster using Helm.

Step9: Create a ServiceMonitor resource which defines the target to be monitored by Prometheus (a sample manifest is given after these steps).

Step10: Verify targets and service discovery in Prometheus.

Step11: Configure Grafana dashboard and verify.

Step12: Deploy Grafana Loki stack using Helm.
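
For step 9, a ServiceMonitor resource could look roughly like this. The names, labels, and port are assumptions for illustration; in particular, the release label must match the serviceMonitorSelector of your Prometheus stack.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: fastapi-webapp
  namespace: monitoring
  labels:
    release: prometheus    # must match the Prometheus stack's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: fastapi-webapp
  endpoints:
  - port: http    # name of the Service port that exposes /metrics
    path: /metrics
    interval: 30s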


Hope it was useful. Cheers!

Saturday, November 18, 2023

vSphere with Tanzu using NSX-T - Part29 - Logging using Loki stack

Grafana Loki is a log aggregation system that we can use with Kubernetes. In this post we will deploy the Loki stack on a Tanzu Kubernetes cluster.

❯ KUBECONFIG=gc.kubeconfig kg no
NAME                                            STATUS   ROLES                  AGE    VERSION
tkc01-control-plane-k8fzb                       Ready    control-plane,master   144m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-4n5kh   Ready    <none>                 132m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-8pcc6   Ready    <none>                 128m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-rx7jf   Ready    <none>                 134m   v1.23.8+vmware.3
❯
❯ helm repo add grafana https://grafana.github.io/helm-charts
❯ helm repo update
❯ helm repo list
❯ helm search repo loki

I saved the default values using helm show values grafana/loki-stack > values.yaml and made the necessary modifications mentioned below.

  • I enabled Grafana by setting grafana.enabled: true. This will create a new Grafana instance.
  • I also added a section under grafana.ingress in loki-stack/values.yaml that will create an ingress resource for this new Grafana instance.

Here is the values.yaml file.

test_pod:
  enabled: true
  image: bats/bats:1.8.2
  pullPolicy: IfNotPresent

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: "{}"
    uid: ""


promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push

fluent-bit:
  enabled: false

grafana:
  enabled: true
  sidecar:
    datasources:
      label: ""
      labelValue: ""
      enabled: true
      maxLines: 1000
  image:
    tag: 8.3.5
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## IngressClassName for Grafana Ingress.
    ## Should be provided if Ingress is enabled.
    ##
    ingressClassName: nginx

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - grafana-loki-vineethac-poc.test.com

    ## Path for grafana ingress
    path: /

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #   - grafana.example.com

prometheus:
  enabled: false
  isDefault: false
  url: http://{{ include "prometheus.fullname" .}}:{{ .Values.prometheus.server.service.servicePort }}{{ .Values.prometheus.server.prefixURL }}
  datasource:
    jsonData: "{}"

filebeat:
  enabled: false
  filebeatConfig:
    filebeat.yml: |
      # logging.level: debug
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      output.logstash:
        hosts: ["logstash-loki:5044"]

logstash:
  enabled: false
  image: grafana/logstash-output-loki
  imageTag: 1.0.1
  filters:
    main: |-
      filter {
        if [kubernetes] {
          mutate {
            add_field => {
              "container_name" => "%{[kubernetes][container][name]}"
              "namespace" => "%{[kubernetes][namespace]}"
              "pod" => "%{[kubernetes][pod][name]}"
            }
            replace => { "host" => "%{[kubernetes][node][name]}"}
          }
        }
        mutate {
          remove_field => ["tags"]
        }
      }
  outputs:
    main: |-
      output {
        loki {
          url => "http://loki:3100/loki/api/v1/push"
          #username => "test"
          #password => "test"
        }
        # stdout { codec => rubydebug }
      }

# proxy is currently only used by loki test pod
# Note: If http_proxy/https_proxy are set, then no_proxy should include the
# loki service name, so that tests are able to communicate with the loki
# service.
proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: ""

Deploy using Helm

❯ helm upgrade --install --atomic loki-stack grafana/loki-stack --values values.yaml --kubeconfig=gc.kubeconfig --create-namespace --namespace=loki-stack
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: gc.kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: gc.kubeconfig
Release "loki-stack" does not exist. Installing it now.
W1203 13:36:48.286498   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:48.592349   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:55.840670   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:55.849356   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: loki-stack
LAST DEPLOYED: Sun Dec  3 13:36:45 2023
NAMESPACE: loki-stack
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.

See http://docs.grafana.org/features/datasources/loki/ for more detail.


Verify

❯ KUBECONFIG=gc.kubeconfig kg all -n loki-stack
NAME                                     READY   STATUS    RESTARTS   AGE
pod/loki-stack-0                         1/1     Running   0          89s
pod/loki-stack-grafana-dff58c989-jdq2l   2/2     Running   0          89s
pod/loki-stack-promtail-5xmrj            1/1     Running   0          89s
pod/loki-stack-promtail-cts5j            1/1     Running   0          89s
pod/loki-stack-promtail-frwvw            1/1     Running   0          89s
pod/loki-stack-promtail-wn4dw            1/1     Running   0          89s

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/loki-stack              ClusterIP   10.110.208.35    <none>        3100/TCP   90s
service/loki-stack-grafana      ClusterIP   10.104.222.214   <none>        80/TCP     90s
service/loki-stack-headless     ClusterIP   None             <none>        3100/TCP   90s
service/loki-stack-memberlist   ClusterIP   None             <none>        7946/TCP   90s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loki-stack-promtail   4         4         4       4            4           <none>          90s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/loki-stack-grafana   1/1     1            1           90s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/loki-stack-grafana-dff58c989   1         1         1       90s

NAME                          READY   AGE
statefulset.apps/loki-stack   1/1     91s

❯ KUBECONFIG=gc.kubeconfig kg ing -n loki-stack
NAME                 CLASS   HOSTS                                 ADDRESS        PORTS   AGE
loki-stack-grafana   nginx   grafana-loki-vineethac-poc.test.com   10.216.24.45   80      7m16s
❯

In my case I have an ingress controller and DNS resolution in place. If you don't have those configured, you can port forward the loki-stack-grafana service to access the Grafana dashboard.
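
For example, assuming the namespace and service name from above:

❯ KUBECONFIG=gc.kubeconfig kubectl port-forward svc/loki-stack-grafana -n loki-stack 3000:80

Grafana will then be reachable at http://localhost:3000.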

To get the username and password, decode the following secret:

❯ KUBECONFIG=gc.kubeconfig kg secrets -n loki-stack loki-stack-grafana -oyaml
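
Alternatively, you can decode the two fields directly. Assuming the standard Grafana chart secret keys admin-user and admin-password:

❯ KUBECONFIG=gc.kubeconfig kubectl get secret -n loki-stack loki-stack-grafana -o jsonpath='{.data.admin-user}' | base64 -d
❯ KUBECONFIG=gc.kubeconfig kubectl get secret -n loki-stack loki-stack-grafana -o jsonpath='{.data.admin-password}' | base64 -d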

Log in to the Grafana instance and verify the Data Sources section; Loki should already be configured there. Now click on the Explore option and use the log browser to query logs.

Hope it was useful. Cheers!

Sunday, October 29, 2023

Kubernetes 101 - Part12 - Debug pod

When it comes to troubleshooting application connectivity and name resolution issues in Kubernetes, having the right tools at your disposal can make all the difference. One of the most common challenges is accessing essential utilities like ping, nslookup, dig, traceroute, and more. To simplify this process, we've created a container image that packs a range of these utilities, making it easy to quickly identify and resolve connectivity issues.


The Container Image: A Swiss Army Knife for Troubleshooting

This container image, designed specifically for Kubernetes troubleshooting, comes pre-installed with the following essential utilities:

  1. ping: A classic network diagnostic tool for testing connectivity.
  2. dig: A DNS lookup tool for resolving domain names to IP addresses.
  3. nslookup: A network troubleshooting tool for resolving hostnames to IP addresses.
  4. traceroute: A network diagnostic tool for tracing the path of packets across a network.
  5. curl: A command-line tool for transferring data to and from a web server using HTTP, HTTPS, SCP, SFTP, TFTP, and more.
  6. wget: A command-line tool for downloading files from the web.
  7. nc: A command-line tool for reading and writing data to a network socket.
  8. netstat: A command-line tool for displaying network connections, routing tables, and interface statistics.
  9. ifconfig: A command-line tool for configuring network interfaces.
  10. route: A command-line tool for displaying and modifying the IP routing table.
  11. host: A command-line tool for performing DNS lookups and resolving hostnames.
  12. arp: A command-line tool for displaying and modifying the ARP cache.
  13. iostat: A command-line tool for displaying disk I/O statistics.
  14. top: A command-line tool for displaying system resource usage.
  15. free: A command-line tool for displaying free memory and swap space.
  16. vmstat: A command-line tool for displaying virtual memory statistics.
  17. pmap: A command-line tool for displaying process memory maps.
  18. mpstat: A command-line tool for displaying multiprocessor statistics.
  19. python3: A programming language and interpreter.
  20. pip: A package installer for Python.


Run as a pod on Kubernetes

kubectl run debug --image=vineethac/debug -n default -- sleep infinity


Exec into the debug pod

kubectl exec -it debug -n default -- bash 
root@debug:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=49.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=57.4 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=46 time=49.4 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 49.334/52.030/57.404/3.799 ms
root@debug:/#

root@debug:/# nslookup google.com
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   google.com
Address: 142.250.72.206
Name:   google.com
Address: 2607:f8b0:4005:80c::200e

root@debug:/# exit
exit
❯


Reference

https://github.com/vineethac/Docker/tree/main/debug-image

By having these essential utilities at your fingertips, you'll be better equipped to quickly identify and resolve connectivity issues in your Kubernetes cluster, saving you time and reducing the complexity of troubleshooting.

Hope it was useful. Cheers!