---
sidebar_position: 5
---

import CodeEmbed from '@site/src/components/CodeEmbed';

# Generate Embeddings

Generate embeddings from a model.

### Using `embed()`
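
Below is a minimal, self-contained sketch of generating embeddings with `embed()`. The host URL, model name, and input texts are placeholders, and the package names and `embed(...)` overload follow recent ollama4j releases, so they may differ in your version.

```java
import java.util.Arrays;
import java.util.List;

import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.embeddings.OllamaEmbedResponseModel;

public class GenerateEmbeddings {

    public static void main(String[] args) throws Exception {
        // Placeholder host; point this at your own Ollama server.
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Placeholder model and inputs; each input yields one embedding vector.
        OllamaEmbedResponseModel response = ollamaAPI.embed(
                "all-minilm",
                Arrays.asList("Why is the sky blue?", "Why is the grass green?"));

        // Print one embedding (a list of doubles) per input.
        List<List<Double>> embeddings = response.getEmbeddings();
        embeddings.forEach(System.out::println);
    }
}
```

Running this prints a response similar to: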
::::tip[LLM Response]

```json
[
  [
    0.010000081,
    -0.0017487297,
    0.050126992,
    0.04694895,
    0.055186987,
    0.008570699,
    0.10545243,
    -0.02591801,
    0.1296789,
  ],
  [
    -0.009868476,
    0.060335685,
    0.025288988,
    -0.0062160683,
    0.07281043,
    0.017217565,
    0.090314455,
    -0.051715206,
  ]
]
```

::::
You could also use the `OllamaEmbedRequestModel` to specify options such as `seed` and `temperature` to apply when generating embeddings.
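
The sketch below shows this request-model variant; the `OptionsBuilder` helper and its `setSeed` / `setTemperature` setters are assumptions based on typical ollama4j usage and may be named differently in your version.

```java
import java.util.Arrays;

import io.github.ollama4j.OllamaAPI;
import io.github.ollama4j.models.embeddings.OllamaEmbedRequestModel;
import io.github.ollama4j.models.embeddings.OllamaEmbedResponseModel;
import io.github.ollama4j.utils.OptionsBuilder;

public class GenerateEmbeddingsWithOptions {

    public static void main(String[] args) throws Exception {
        // Placeholder host; point this at your own Ollama server.
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");

        // Build the request with a placeholder model and input texts.
        OllamaEmbedRequestModel request = new OllamaEmbedRequestModel(
                "all-minilm",
                Arrays.asList("Why is the sky blue?", "Why is the grass green?"));

        // Attach generation options such as seed and temperature.
        request.setOptions(new OptionsBuilder()
                .setSeed(42)
                .setTemperature(0.5f)
                .build()
                .getOptionsMap());

        OllamaEmbedResponseModel response = ollamaAPI.embed(request);
        response.getEmbeddings().forEach(System.out::println);
    }
}
```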
You will get a response similar to:
::::tip[LLM Response]

```json
[
  [
    0.010000081,
    -0.0017487297,
    0.050126992,
    0.04694895,
    0.055186987,
    0.008570699,
    0.10545243,
    -0.02591801,
    0.1296789,
  ],
  [
    -0.009868476,
    0.060335685,
    0.025288988,
    -0.0062160683,
    0.07281043,
    0.017217565,
    0.090314455,
    -0.051715206,
  ]
]
```

::::