Update documentation and refactor code to replace OllamaAPI with Ollama

- Replaced all instances of `OllamaAPI` with `Ollama` in documentation and code examples for consistency.
- Moved the Docusaurus `onBrokenMarkdownLinks` setting into the `markdown.hooks` block to match the newer config schema.
- Updated integration tests and example code snippets to reflect the new class structure.
amithkoujalgi
2025-09-29 09:31:32 +05:30
parent 35bf3de62a
commit f114181fe2
12 changed files with 25 additions and 23 deletions

View File

@@ -18,7 +18,6 @@ The metrics integration provides the following metrics:
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
// Create API instance with metrics enabled
Ollama ollama = new Ollama();

View File

@@ -337,7 +337,6 @@ import com.couchbase.client.java.Scope;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.query.QueryResult;
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.exceptions.OllamaException;
import io.github.ollama4j.exceptions.ToolInvocationException;
import io.github.ollama4j.tools.OllamaToolsResult;

View File

@@ -65,7 +65,7 @@ public class Main {
String host = "http://localhost:11434/";
-OllamaAPI ollama = new OllamaAPI(host);
+Ollama ollama = new Ollama(host);
Options options =
new OptionsBuilder()
@@ -74,6 +74,15 @@ public class Main {
.setNumGpu(2)
.setTemperature(1.5f)
.build();
+OllamaResult result =
+    ollama.generate(
+        OllamaGenerateRequestBuilder.builder()
+            .withModel(model)
+            .withPrompt("Who are you?")
+            .withOptions(options)
+            .build(),
+        null);
}
}
```

View File

@@ -8,7 +8,6 @@ This API lets you check the reachability of Ollama server.
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
public class Main {

View File

@@ -9,7 +9,6 @@ inferences.
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.response.OllamaResult;
import io.github.ollama4j.types.OllamaModelType;
import io.github.ollama4j.utils.OptionsBuilder;

View File

@@ -10,7 +10,6 @@ This API lets you set the request timeout for the Ollama client.
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
public class Main {

View File

@@ -16,7 +16,7 @@ experience.
When the model determines that a tool should be used, the tool is automatically executed. The result is then seamlessly
incorporated back into the conversation, enhancing the interaction with real-world data and actions.
-The following example demonstrates usage of a simple tool, registered with the `OllamaAPI`, and then used within a chat
+The following example demonstrates usage of a simple tool, registered with the `Ollama`, and then used within a chat
session. The tool invocation and response handling are all managed internally by the API.
<CodeEmbed src="https://raw.githubusercontent.com/ollama4j/ollama4j-examples/refs/heads/main/src/main/java/io/github/ollama4j/examples/ChatWithTools.java"/>
@@ -33,7 +33,7 @@ This tool calling can also be done using the streaming API.
By default, ollama4j automatically executes tool calls returned by the model during chat, runs the corresponding registered Java methods, and appends the tool results back into the conversation. For some applications, you may want to intercept tool calls and decide yourself when and how to execute them (for example, to queue them, to show a confirmation UI to the user, to run them in a sandbox, or to perform multistep orchestration).
-To enable this behavior, set the `useTools` flag to true on your `OllamaAPI` instance. When enabled, ollama4j will stop auto-executing tools and will instead return tool calls inside the assistant message. You can then inspect the tool calls and execute them manually.
+To enable this behavior, set the `useTools` flag to true on your `Ollama` instance. When enabled, ollama4j will stop auto-executing tools and will instead return tool calls inside the assistant message. You can then inspect the tool calls and execute them manually.
Notes:
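The manual tool-handling flow described above can be sketched as follows. This is a hedged sketch, not code from the commit: the `setUseTools` setter name, the `OllamaChatResult` accessors, and the request-building details are assumptions about the ollama4j API surface rather than facts confirmed by this diff.

```java
import io.github.ollama4j.Ollama;

public class ManualToolCallsSketch {
    public static void main(String[] args) throws Exception {
        Ollama ollama = new Ollama("http://localhost:11434/");

        // Assumed setter: with useTools enabled, ollama4j stops
        // auto-executing tools and returns tool calls inside the
        // assistant message instead.
        ollama.setUseTools(true);

        // Build and send a chat request with registered tools as usual,
        // then inspect the tool calls on the returned chat result and
        // execute them yourself (queue them, confirm with the user,
        // run them in a sandbox, etc.).
    }
}
```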
@@ -57,10 +57,10 @@ To use a method as a tool within a chat call, follow these steps:
* `java.lang.Boolean`
* `java.math.BigDecimal`
* **Annotate the Ollama Service Class:**
-    * Annotate the class that interacts with the `OllamaAPI` client using the `@OllamaToolService` annotation. Reference
+    * Annotate the class that interacts with the `Ollama` client using the `@OllamaToolService` annotation. Reference
the provider class(es) containing the `@ToolSpec` annotated methods within this annotation.
* **Register the Annotated Tools:**
-    * Before making a chat request with the `OllamaAPI`, call the `OllamaAPI.registerAnnotatedTools()` method. This
+    * Before making a chat request with the `Ollama`, call the `Ollama.registerAnnotatedTools()` method. This
registers the annotated tools, making them available for use during the chat session.
Let's try an example. Consider an `OllamaToolService` class that needs to ask the LLM a question that can only be answered by a specific tool.
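The two-step flow above (annotate the service class, then register the tools) can be sketched roughly as below. The annotation package paths, the `providers` attribute, and the `@ToolSpec` method shape are assumptions for illustration; only the annotation names and `registerAnnotatedTools()` come from the text.

```java
import io.github.ollama4j.Ollama;
import io.github.ollama4j.tools.annotations.OllamaToolService;
import io.github.ollama4j.tools.annotations.ToolSpec;

// Provider class holding a @ToolSpec-annotated method (shape assumed).
class WeatherToolProvider {
    @ToolSpec
    public String currentWeather(String city) {
        // Stand-in implementation; a real tool would call a weather API.
        return "sunny in " + city;
    }
}

// Service class annotated with @OllamaToolService, referencing the
// provider class that contains the @ToolSpec-annotated methods.
@OllamaToolService(providers = {WeatherToolProvider.class})
public class AnnotatedToolsSketch {
    public static void main(String[] args) throws Exception {
        Ollama ollama = new Ollama("http://localhost:11434/");

        // Registers the annotated tools before making the chat request,
        // making them available for use during the chat session.
        ollama.registerAnnotatedTools();

        // ... build and send the chat request as usual ...
    }
}
```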

View File

@@ -17,7 +17,6 @@ _Base roles are `SYSTEM`, `USER`, `ASSISTANT`, `TOOL`._
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.chat.OllamaChatMessageRole;
public class Main {
@@ -52,7 +51,6 @@ public class Main {
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.chat.OllamaChatMessageRole;
public class Main {

View File

@@ -113,9 +113,8 @@ Create a new Java class in your project and add this code.
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
-public class OllamaAPITest {
+public class OllamaTest {
public static void main(String[] args) {
Ollama ollama = new Ollama();
@@ -132,9 +131,8 @@ Specify a different Ollama host that you want to connect to.
```java
import io.github.ollama4j.Ollama;
-import io.github.ollama4j.OllamaAPI;
-public class OllamaAPITest {
+public class OllamaTest {
public static void main(String[] args) {
String host = "http://localhost:11434/";

View File

@@ -24,7 +24,6 @@ const config = {
projectName: 'ollama4j', // Usually your repo name.
onBrokenLinks: 'throw',
-onBrokenMarkdownLinks: 'warn',
// Even if you don't use internationalization, you can use this field to set
// useful metadata like html lang. For example, if your site is Chinese, you
@@ -175,6 +174,9 @@ const config = {
}),
markdown: {
mermaid: true,
+    hooks: {
+        onBrokenMarkdownLinks: 'warn'
+    }
},
themes: ['@docusaurus/theme-mermaid']
};