In this exercise we will learn to interact with LLMs using Ollama, LangChain, and Python.
The full project is available on my GitHub:
https://github.com/vineethac/Ollama/tree/main/ollama_langchain
Import the necessary modules from the LangChain library and Python's argparse module:
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Ollama
import argparse
Argument parsing
parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default="llama2", help="Name of the Ollama model to use")
args = parser.parse_args()
model = args.model
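As a quick sanity check, argparse can be exercised directly by passing an explicit argument list to `parse_args` (the `mistral` model name below is just an illustrative value):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, default="llama2")

# No arguments supplied: the default model name is used.
args = parser.parse_args([])
print(args.model)  # llama2

# Explicit flag, as in `python run.py --model mistral` on the command line.
args = parser.parse_args(["--model", "mistral"])
print(args.model)  # mistral
```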
Initialize Ollama
llm = Ollama(
    model=model,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    base_url="http://ollama:11434",
)
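Under the hood, LangChain's Ollama wrapper talks to the Ollama server's REST API (`POST /api/generate` on port 11434). Here is a stdlib-only sketch of roughly what such a request looks like; the helper name `build_generate_request` is my own, and nothing here is actually sent over the network:

```python
import json
from urllib.request import Request

def build_generate_request(base_url: str, model: str, prompt: str, stream: bool = True) -> Request:
    """Build (but do not send) a request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()
    return Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("http://ollama:11434", "llama2", "Why is the sky blue?")
print(req.full_url)                   # http://ollama:11434/api/generate
print(json.loads(req.data)["model"])  # llama2
```

With `stream` set to true, the server returns the response token by token, which is what the `StreamingStdOutCallbackHandler` above surfaces as it arrives.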
Interactive loop
while True:
    print(f"Model: {model}")
    prompt = input("Ask me anything: ")
    if prompt == "/bye":
        break
    llm(prompt)
    print("\n \n")
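The loop above can also be restructured into a testable function by injecting the input source and the model as callables. `chat_loop` is a hypothetical refactor for illustration, not part of the original script:

```python
def chat_loop(llm, read_input, model_name="llama2"):
    """Run a REPL that forwards prompts to `llm` until the user types /bye."""
    responses = []
    while True:
        print(f"Model: {model_name}")
        prompt = read_input("Ask me anything: ")
        if prompt == "/bye":
            break
        responses.append(llm(prompt))
    return responses

# Drive the loop with canned input and a fake model instead of stdin and Ollama.
scripted = iter(["hello", "/bye"])
fake_llm = lambda p: f"echo: {p}"
out = chat_loop(fake_llm, lambda _: next(scripted))
print(out)  # ['echo: hello']
```

Separating the I/O from the loop logic this way makes it easy to unit-test the exit condition without a running Ollama server.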
In summary, this script sets up a simple command-line interface for interacting with a language model served by Ollama. It takes user prompts, sends them to the model for processing, and streams the responses back to the terminal. The loop continues until the user enters "/bye" to exit.
Hope it was useful. Cheers!