Refactor error handling and update tests

Refactored error handling in OllamaChatEndpointCaller by extracting status code checks into a helper method. Improved logging for image loading errors in OllamaChatRequestBuilder. Updated integration and unit tests to relax assertions and clarify comments. Minor documentation formatting fixes and Makefile improvement for reproducible npm installs.
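The extracted status-code helper itself is not among the hunks shown below, so here is a minimal, hypothetical sketch of that kind of refactor; the class body, the `isErrorStatus` name, and the logger usage are illustrative assumptions, not the actual ollama4j code. The Makefile improvement for reproducible npm installs most likely means switching from `npm install` to `npm ci`, which installs exactly what the lockfile pins.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of the refactor described above: repeated HTTP status checks
// pulled out of the response-handling path into one helper method.
public class OllamaChatEndpointCaller {

    private static final Logger LOG = LoggerFactory.getLogger(OllamaChatEndpointCaller.class);

    // Assumed helper name and status set; the real implementation may differ.
    private boolean isErrorStatus(int statusCode) {
        return statusCode == 400 || statusCode == 401 || statusCode == 404
                || statusCode == 500 || statusCode == 503;
    }

    void handleResponseStatus(int statusCode, String responseBody) {
        if (isErrorStatus(statusCode)) {
            LOG.warn("Ollama request failed with status {}: {}", statusCode, responseBody);
            return;
        }
        // ...continue parsing the successful response...
    }
}
```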
Author: amithkoujalgi
Date: 2025-09-18 01:50:23 +05:30
Parent: 7788f954d6
Commit: 0aeabcc963

14 changed files with 130 additions and 112 deletions

View File

@@ -4,7 +4,7 @@ sidebar_position: 2
# Timeouts
-## Set Request Timeout
+### Set Request Timeout
This API lets you set the request timeout for the Ollama client.
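A quick usage sketch for this setting, assuming the `OllamaAPI` client class and a `setRequestTimeoutSeconds(long)` setter; check the actual docs page for the exact method name.

```java
import io.github.ollama4j.OllamaAPI;

public class TimeoutExample {
    public static void main(String[] args) {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");
        // Assumed setter: allow slow models up to 120 seconds before the client gives up.
        ollamaAPI.setRequestTimeoutSeconds(120);
    }
}
```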

View File

@@ -21,11 +21,11 @@ session. The tool invocation and response handling are all managed internally by
<CodeEmbed src="https://raw.githubusercontent.com/ollama4j/ollama4j-examples/refs/heads/main/src/main/java/io/github/ollama4j/examples/ChatWithTools.java"/>
-::::tip[LLM Response]
+:::tip[LLM Response]
**First answer:** 6527fb60-9663-4073-b59e-855526e0a0c2 is the ID of the employee named 'Rahul Kumar'.
**Second answer:** _Kumar_ is the last name of the employee named 'Rahul Kumar'.
-::::
+:::
This tool calling can also be done using the streaming API.
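Streamed tool calling might look roughly like the sketch below; the builder calls, the `chat(request, handler)` overload, and the package names are assumptions about the ollama4j chat API, so the ChatWithTools example linked above remains authoritative.

```java
import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.chat.OllamaChatMessageRole;
import io.github.ollama4j.models.chat.OllamaChatRequest;
import io.github.ollama4j.models.chat.OllamaChatRequestBuilder;

public class StreamingToolChatSketch {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Assumed builder usage: a chat request against a tool-capable model.
        OllamaChatRequest request = OllamaChatRequestBuilder.getInstance("llama3.1")
                .withMessage(OllamaChatMessageRole.USER,
                        "Give me the ID of the employee named 'Rahul Kumar'.")
                .build();

        // Assumed streaming overload: partial tokens are handed to the callback as they arrive.
        ollamaAPI.chat(request, token -> System.out.print(token));
    }
}
```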
@@ -74,8 +74,8 @@ The annotated method can then be used as a tool in the chat session:
Running the above would produce a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
**First answer:** 0.0000112061 is the most important constant in the world using 10 digits, according to my function. This constant is known as Planck's constant and plays a fundamental role in quantum mechanics. It relates energy and frequency in electromagnetic radiation and action (the product of momentum and distance) for particles.
**Second answer:** 3-digit constant: 8.001
-::::
+:::

View File

@@ -16,7 +16,7 @@ information using the history of already asked questions and the respective answ
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
> First answer: The capital of France is Paris.
>
@@ -47,7 +47,7 @@ You will get a response similar to:
"tool_calls" : null
}]
```
-::::
+:::
### Create a conversation where the answer is streamed
@@ -75,9 +75,9 @@ You will get a response similar to:
You will get a response as:
-::::tip[LLM Response]
+:::tip[LLM Response]
Shhh!
-::::
+:::
## Create a conversation about an image (requires a vision model)
@@ -91,7 +91,7 @@ Let's use this image:
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
**First Answer:** The image shows a dog sitting on the bow of a boat that is docked in calm water. The boat has two
levels, with the lower level containing seating and what appears to be an engine cover. The dog seems relaxed and
comfortable on the boat, looking out over the water. The background suggests it might be late afternoon or early
@@ -101,4 +101,4 @@ evening, given the warm lighting and the low position of the sun in the sky.
appears to be medium-sized with a short coat and a brown coloration, which might suggest that it is a **_Golden Retriever_**
or a similar breed. Without more details like ear shape and tail length, it's not possible to identify the exact breed
confidently.
-::::
+:::

View File

@@ -12,7 +12,7 @@ Generate embeddings from a model.
<CodeEmbed src="https://raw.githubusercontent.com/ollama4j/ollama4j-examples/refs/heads/main/src/main/java/io/github/ollama4j/examples/GenerateEmbeddings.java" />
-::::tip[LLM Response]
+:::tip[LLM Response]
```json
[
@@ -40,7 +40,7 @@ Generate embeddings from a model.
]
```
-::::
+:::
You could also use the `OllamaEmbedRequestModel` to specify the options such as `seed`, `temperature`, etc., to apply
for generating embeddings.
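A hedged sketch of that options-bearing call; the `OllamaEmbedRequestModel` constructor, the options map, and the `embed(...)` return type here are assumptions, so defer to the embedded example for the real shape.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.embeddings.OllamaEmbedRequestModel;
import io.github.ollama4j.models.embeddings.OllamaEmbedResponseModel;

public class EmbeddingsWithOptionsSketch {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Assumed request shape: the model name plus the inputs to embed.
        OllamaEmbedRequestModel request =
                new OllamaEmbedRequestModel("all-minilm", List.of("Why is the sky blue?"));

        // Assumed options: the seed and temperature mentioned in the text above.
        Map<String, Object> options = new HashMap<>();
        options.put("seed", 42);
        options.put("temperature", 0.1);
        request.setOptions(options);

        OllamaEmbedResponseModel response = ollamaAPI.embed(request);
        System.out.println(response.getEmbeddings());
    }
}
```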
@@ -49,7 +49,7 @@ for generating embeddings.
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
```json
[
@@ -77,4 +77,4 @@ You will get a response similar to:
]
```
-::::
+:::

View File

@@ -4,7 +4,7 @@ sidebar_position: 4
import CodeEmbed from '@site/src/components/CodeEmbed';
-# Generate with Image Files
+# Generate with Images
This API lets you ask questions along with the image files to the LLMs.
This API corresponds to
@@ -27,10 +27,10 @@ If you have this image downloaded and you pass the path to the downloaded image
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
This image features a white boat with brown cushions, where a dog is sitting on the back of the boat. The dog seems to
be enjoying its time outdoors, perhaps on a lake.
-::::
+:::
# Generate with Image URLs
@@ -55,7 +55,7 @@ Passing the link of this image the following code:
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
This image features a white boat with brown cushions, where a dog is sitting on the back of the boat. The dog seems to
be enjoying its time outdoors, perhaps on a lake.
-::::
+:::

View File

@@ -79,7 +79,7 @@ Now put it all together by registering the tools and prompting with tools.
Run this full example and you will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
[Result of executing tool 'current-fuel-price']: Current price of petrol in Bengaluru is Rs.103/L
@@ -88,4 +88,4 @@ Run this full example and you will get a response similar to:
[Result of executing tool 'get-employee-details']: Employee Details `{ID: 6bad82e6-b1a1-458f-a139-e3b646e092b1, Name:
Rahul Kumar, Address: King St, Hyderabad, India, Phone: 9876543210}`
-::::
+:::

View File

@@ -22,10 +22,10 @@ to [this](/apis-extras/options-builder).
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
I am a model of an AI trained by Mistral AI. I was designed to assist with a wide range of tasks, from answering
questions to helping with complex computations and research. How can I help you toda
-::::
+:::
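As a rough, non-authoritative sketch of the plain ask-a-question call (the linked examples above are the real reference), assuming `generate(model, prompt, raw, options)` and `OptionsBuilder` in the ollama4j API:

```java
import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.response.OllamaResult;
import io.github.ollama4j.utils.OptionsBuilder;

public class GenerateSketch {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Assumed signature: model name, prompt, raw flag, and options from OptionsBuilder.
        OllamaResult result = ollamaAPI.generate(
                "mistral", "Who are you?", false, new OptionsBuilder().build());
        System.out.println(result.getResponse());
    }
}
```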
### Try asking a question, receiving the answer streamed
@@ -49,7 +49,7 @@ width='100%'
You will get a response similar to:
-::::tip[LLM Response]
+:::tip[LLM Response]
```json
{
@@ -58,12 +58,12 @@ You will get a response similar to:
}
```
-::::
+:::
### With response mapped to specified class type
<CodeEmbed src="https://raw.githubusercontent.com/ollama4j/ollama4j-examples/refs/heads/main/src/main/java/io/github/ollama4j/examples/GenerateStructuredOutputMappedToObject.java" />
-::::tip[LLM Response]
+:::tip[LLM Response]
HeroInfo(heroName=Batman, ageOfPerson=30)
-::::
+:::
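The `HeroInfo(heroName=Batman, ageOfPerson=30)` output above reads like a Java record's default `toString()`; a minimal sketch of such a target type follows, though the class in the linked example may well be a Lombok-annotated POJO instead.

```java
// Hypothetical target type for the structured-output mapping shown above.
// Field names mirror the printed result: HeroInfo(heroName=Batman, ageOfPerson=30).
public record HeroInfo(String heroName, int ageOfPerson) {}
```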