In this exercise, we will interact with LLaVA, an end-to-end trained large multimodal model and vision assistant. We will use the Ollama REST API to prompt the model from Python.
The full project is available in my GitHub repository:
https://github.com/vineethac/Ollama/tree/main/ollama_vision_assistant
As a large multimodal model and vision assistant, LLaVA can be used for various tasks. Here are a couple of use cases:
- Image Description Generation
- Question-Answering on Text and Image
These are just a few examples, and the versatility of LLaVA allows for exploration across a wide range of multimodal tasks and applications.
Sample interaction with the LLaVA model
Image
Image credits: Shutterstock
Prompt
python3 query_image.py --path=images/img1.jpg --prompt="describe the picture and what are the essentials that one need to carry generally while going these kind of places?"
{ "model": "llava", "created_at": "2024-01-23T17:41:27.771729767Z", "response": " The image shows a man riding his bicycle on a country road, surrounded by beautiful scenery and mountains. He appears to be enjoying the ride as he navigates through the countryside. \n\nWhile cycling in such environments, an essential item one would need to carry is a water bottle or hydration pack, to ensure they stay well-hydrated during the journey. In addition, it's important to have a map or GPS device to navigate through potentially less familiar routes and avoid getting lost. Other useful items for cyclists may include a multi-tool, first aid kit, bike lock, snacks, spare clothes, and a small portable camping stove if planning an overnight stay in the wilderness.",
Hope it was useful. Cheers!