---
sidebar_position: 2
---
import CodeEmbed from '@site/src/components/CodeEmbed';
import TypewriterTextarea from '@site/src/components/TypewriterTextarea';
# Generate
This API lets you ask questions to the LLMs synchronously. It corresponds to the Ollama completion API.
Use the `OptionsBuilder` to build the `Options` object with extra parameters. Refer to this.
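As a rough sketch, extra parameters can be attached like this. The setter names below are assumed from the builder pattern; verify them against the `OptionsBuilder` Javadoc for your ollama4j version.

```java
import io.github.ollama4j.utils.Options;
import io.github.ollama4j.utils.OptionsBuilder;

public class OptionsExample {
    public static void main(String[] args) {
        // Assumed setter names; check your ollama4j version's OptionsBuilder.
        Options options = new OptionsBuilder()
                .setTemperature(0.8f)
                .setTopP(0.9f)
                .build();
        System.out.println(options);
    }
}
```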
### Try asking a question about the model
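A minimal synchronous call might look like the sketch below. Package names and the exact `generate(...)` overload vary across ollama4j versions, so treat this as an illustration rather than the canonical snippet.

```java
import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.response.OllamaResult;
import io.github.ollama4j.utils.OptionsBuilder;

public class GenerateExample {
    public static void main(String[] args) throws Exception {
        // Assumes a local Ollama server on the default port.
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Blocks until the model has produced the full answer.
        // raw=false lets Ollama apply the model's prompt template.
        OllamaResult result = ollamaAPI.generate(
                "mistral",
                "Who are you?",
                false,
                new OptionsBuilder().build());

        System.out.println(result.getResponse());
    }
}
```

Running this requires the ollama4j dependency on the classpath and a reachable Ollama server with the `mistral` model pulled.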
You will get a response similar to:
:::tip[LLM Response]
I am a model of an AI trained by Mistral AI. I was designed to assist with a wide range of tasks, from answering questions to helping with complex computations and research. How can I help you today?
:::
### Try asking a question and receiving the answer streamed
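A streaming variant might be sketched as follows, assuming an overload of `generate(...)` that accepts a stream handler. The handler type and its package (`OllamaStreamHandler` here) are assumptions and may differ between ollama4j versions.

```java
import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.generate.OllamaStreamHandler;
import io.github.ollama4j.models.response.OllamaResult;
import io.github.ollama4j.utils.OptionsBuilder;

public class StreamingExample {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Invoked for each partial response chunk as the model produces it.
        OllamaStreamHandler handler = (partial) -> System.out.println(partial);

        // Same call as the synchronous form, plus the stream handler.
        OllamaResult result = ollamaAPI.generate(
                "mistral",
                "Who are you?",
                false,
                new OptionsBuilder().build(),
                handler);
    }
}
```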
You will get a response similar to:
## Generate structured output
### With response as a `Map`
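One plausible shape for this call is sketched below, assuming an overload of `generate(...)` that accepts a JSON-schema-style format map. Check your ollama4j version for the actual structured-output API before relying on this.

```java
import java.util.HashMap;
import java.util.Map;

import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.response.OllamaResult;

public class StructuredExample {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // A JSON-schema-style map describing the fields we want back.
        Map<String, Object> format = new HashMap<>();
        format.put("type", "object");
        format.put("properties", Map.of(
                "heroName", Map.of("type", "string"),
                "ageOfPerson", Map.of("type", "integer")));

        // Assumed overload: generate(model, prompt, format); the model is
        // constrained to reply with JSON matching the schema.
        OllamaResult result = ollamaAPI.generate(
                "mistral",
                "Give the name and age of a famous superhero.",
                format);

        System.out.println(result.getResponse());
    }
}
```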
You will get a response similar to:
:::tip[LLM Response]
{
"heroName" : "Batman",
"ageOfPerson" : 30
}
:::
### With response mapped to specified class type
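The mapping step can be illustrated independently of the library: the JSON shown above deserializes into a plain Java class with Jackson. `HeroInfo` and its `toString()` are hypothetical stand-ins for whatever target type you define; the hardcoded JSON stands in for the model's response.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class HeroMapping {
    // Hypothetical target type matching the structured response's fields.
    public static class HeroInfo {
        public String heroName;
        public int ageOfPerson;

        @Override
        public String toString() {
            return "HeroInfo(heroName=" + heroName + ", ageOfPerson=" + ageOfPerson + ")";
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the JSON text returned by the model.
        String json = "{ \"heroName\" : \"Batman\", \"ageOfPerson\" : 30 }";

        ObjectMapper mapper = new ObjectMapper();
        HeroInfo hero = mapper.readValue(json, HeroInfo.class);
        System.out.println(hero);  // HeroInfo(heroName=Batman, ageOfPerson=30)
    }
}
```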
:::tip[LLM Response]
HeroInfo(heroName=Batman, ageOfPerson=30)
:::