
Ollama4j


A Java wrapper for Ollama APIs.


Install:

Add the dependency to your project's pom.xml:


<dependency>
    <groupId>io.github.amithkoujalgi</groupId>
    <artifactId>ollama4j</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>

Because the version above is a snapshot, you also need to add the Sonatype OSSRH snapshots repository so Maven can resolve it. Include this in your pom.xml:


<repositories>
    <repository>
        <id>ollama4j-from-ossrh</id>
        <url>https://s01.oss.sonatype.org/content/repositories/snapshots</url>
    </repository>
</repositories>

Verify that the ollama4j dependency resolves by running:

mvn clean install

Start the Ollama server in a Docker container (the API listens on port 11434):

docker run -v ~/ollama:/root/.ollama -p 11434:11434 ollama/ollama
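
Once the container is up, the Ollama server answers with a plain-text status message on its root endpoint. A quick reachability check using only the JDK's built-in HttpClient (no ollama4j required):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PingOllama {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:11434/")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Prints "Ollama is running" when the server is reachable.
        System.out.println(response.body());
    }
}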

Pull a model:

public class Main {
    public static void main(String[] args) throws Exception {
        String host = "http://localhost:11434/";
        OllamaAPI ollamaAPI = new OllamaAPI(host);
        // Downloads the model onto the Ollama server if it is not already present.
        ollamaAPI.pullModel(OllamaModel.LLAMA2);
    }
}
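
The first pull downloads the model weights and can take a while; here is a sketch that wraps the same call with basic error handling (the failure causes named in the comment are assumptions, not documented behavior):

public class PullModel {
    public static void main(String[] args) {
        String host = "http://localhost:11434/";
        OllamaAPI ollamaAPI = new OllamaAPI(host);
        try {
            // Blocks until the model is available on the Ollama server.
            ollamaAPI.pullModel(OllamaModel.LLAMA2);
            System.out.println("Model is ready.");
        } catch (Exception e) {
            // Likely causes: the Ollama container is not running, or the download failed.
            System.err.println("Could not pull model: " + e.getMessage());
        }
    }
}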

Post a question to Ollama using Ollama4j:

Using the sync API:

public class Main {
    public static void main(String[] args) throws Exception {
        String host = "http://localhost:11434/";
        OllamaAPI ollamaAPI = new OllamaAPI(host);
        // Blocks until the model has generated the complete response.
        String response = ollamaAPI.runSync(OllamaModel.LLAMA2, "Who are you?");
        System.out.println(response);
    }
}
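
A single OllamaAPI instance can also serve several questions in a row; a minimal sketch, assuming sequential reuse of the client is safe:

public class Main {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");
        String[] prompts = {"Who are you?", "Why is the sky blue?"};
        for (String prompt : prompts) {
            // Each call blocks until the model has produced its full answer.
            System.out.println(ollamaAPI.runSync(OllamaModel.LLAMA2, prompt));
        }
    }
}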

Using the async API:

public class Main {
    public static void main(String[] args) throws Exception {
        String host = "http://localhost:11434/";
        OllamaAPI ollamaAPI = new OllamaAPI(host);
        // Returns immediately; the response is generated in the background.
        OllamaAsyncResultCallback ollamaAsyncResultCallback = ollamaAPI.runAsync(OllamaModel.LLAMA2, "Who are you?");
        // Poll until the response is ready.
        while (!ollamaAsyncResultCallback.isComplete()) {
            Thread.sleep(1000);
        }
        System.out.println(ollamaAsyncResultCallback.getResponse());
    }
}
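
The polling loop above waits indefinitely if the request never completes. Here is a variant with an overall deadline, built only on the isComplete() and getResponse() calls shown above (the 120-second budget is arbitrary):

public class Main {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");
        OllamaAsyncResultCallback callback = ollamaAPI.runAsync(OllamaModel.LLAMA2, "Who are you?");
        // Poll for up to 120 seconds instead of waiting forever.
        long deadline = System.currentTimeMillis() + 120_000;
        while (!callback.isComplete() && System.currentTimeMillis() < deadline) {
            Thread.sleep(1000);
        }
        if (callback.isComplete()) {
            System.out.println(callback.getResponse());
        } else {
            System.err.println("Timed out waiting for a response from Ollama.");
        }
    }
}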

In either case, you'd get a response from Ollama like this:

I am LLaMA, an AI assistant developed by Meta AI that can understand and respond to human input in a conversational manner. I am trained on a massive dataset of text from the internet and can generate human-like responses to a wide range of topics and questions. I can be used to create chatbots, virtual assistants, and other applications that require natural language understanding and generation capabilities.