Overview
Ollama is an open-source platform for running large language models (LLMs) in local environments. In Fess 15.6, Ollama integration is provided as the fess-llm-ollama plugin and is well suited to private environments.
Using Ollama allows you to use AI search mode functionality without sending data externally.
Key Features
- Local Execution: Data is not sent externally, ensuring privacy
- Various Models: Supports multiple models including Llama, Mistral, Gemma, and CodeLlama
- Cost Efficiency: No API costs (only hardware costs)
- Customization: Can use custom fine-tuned models
Supported Models
Main models available with Ollama:
- `llama3.3:70b` - Meta's Llama 3.3 (70B parameters)
- `gemma4:e4b` - Google's Gemma 4 (E4B parameters, default)
- `mistral:7b` - Mistral AI's Mistral (7B parameters)
- `codellama:13b` - Meta's Code Llama (13B parameters)
- `phi3:3.8b` - Microsoft's Phi-3 (3.8B parameters)
Note
For the latest list of available models, see Ollama Library.
Prerequisites
Before using Ollama, verify the following.
- Ollama Installation: Download and install from https://ollama.com/
- Model Download: Download the model you want to use into Ollama
- Ollama Server Running: Verify that Ollama is running
Installing Ollama
Linux/macOS
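On Linux, Ollama provides an official install script; on macOS it can also be installed via Homebrew (as an alternative to downloading the app):

```shell
# Linux: official install script from ollama.com
curl -fsSL https://ollama.com/install.sh | sh

# macOS: Homebrew formula (alternative to the app download)
brew install ollama
```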
Windows
Download and run the installer from the official website.
Docker
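Ollama can also be run as a container, using the image published as `ollama/ollama`:

```shell
# run the Ollama server in Docker, persisting downloaded models
# in a named volume and exposing the default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```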
Downloading Models
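Models are downloaded with `ollama pull`, using the model names from the list above (the first example is the default model referenced in this document):

```shell
# download the default model used by the plugin
ollama pull gemma4:e4b

# download additional models as needed
ollama pull mistral:7b

# verify which models are installed
ollama list
```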
Plugin Installation
In Fess 15.6, Ollama integration has been separated as a plugin. To use Ollama, you must install the fess-llm-ollama plugin.
1. Download `fess-llm-ollama-15.6.0.jar`.
2. Place it in the `app/WEB-INF/plugin/` directory of your Fess installation directory.
3. Restart Fess.
Note
The plugin version should match the version of Fess.
Basic Configuration
In Fess 15.6, LLM-related configuration is split across multiple configuration files.
Minimal Configuration
app/WEB-INF/conf/fess_config.properties:
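A minimal sketch, using the property names documented in the options table below (adjust the URL if Ollama runs on a different host):

```properties
# enable the AI (RAG) chat mode
rag.chat.enabled=true

# Ollama server base URL (the default is shown)
rag.llm.ollama.api.url=http://localhost:11434

# model to use (must already be downloaded to Ollama)
rag.llm.ollama.model=gemma4:e4b
```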
system.properties (also configurable from Administration > System > General):
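In `system.properties` (or from Administration > System > General), select Ollama as the LLM provider:

```properties
# LLM provider name
rag.llm.name=ollama
```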
Note
The LLM provider setting can also be configured by setting rag.llm.name from the administration screen (Administration > System > General).
Recommended Configuration (Production)
app/WEB-INF/conf/fess_config.properties:
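A production-oriented sketch combining options from the table below; the specific values are illustrative, not requirements, and should be tuned to your hardware and model size:

```properties
rag.chat.enabled=true
rag.llm.ollama.api.url=http://localhost:11434
rag.llm.ollama.model=gemma4:e4b

# generous request timeout for larger models (milliseconds)
rag.llm.ollama.timeout=120000
rag.llm.ollama.connect.timeout=5000

# limit concurrent requests to avoid overloading the Ollama server
rag.llm.ollama.max.concurrent.requests=3
rag.llm.ollama.concurrency.wait.timeout=30000

# retry transient errors (429 / 5xx) with exponential backoff
rag.llm.ollama.retry.max=3
rag.llm.ollama.retry.base.delay.ms=2000
```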
system.properties:
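As in the minimal configuration, the provider itself is selected in `system.properties`:

```properties
rag.llm.name=ollama
```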
Configuration Options
The following configuration options are available for the Ollama client. All settings except `rag.llm.name` are configured in `fess_config.properties`.
| Property | Description | Default |
|---|---|---|
| `rag.llm.ollama.api.url` | Ollama server base URL | `http://localhost:11434` |
| `rag.llm.ollama.model` | Model name to use (must be downloaded to Ollama) | `gemma4:e4b` |
| `rag.llm.ollama.timeout` | Request timeout (milliseconds) | 60000 |
| `rag.llm.ollama.availability.check.interval` | Availability check interval (seconds) | 60 |
| `rag.llm.ollama.max.concurrent.requests` | Maximum number of concurrent requests | 5 |
| `rag.llm.ollama.chat.evaluation.max.relevant.docs` | Maximum number of relevant documents during evaluation | 3 |
| `rag.llm.ollama.concurrency.wait.timeout` | Concurrent request wait timeout (milliseconds) | 30000 |
| `rag.llm.ollama.connect.timeout` | TCP connect timeout (milliseconds); configurable separately from `rag.llm.ollama.timeout` | 5000 |
| `rag.llm.ollama.retry.max` | Maximum number of HTTP retries (on 429 and 5xx errors) | 3 |
| `rag.llm.ollama.retry.base.delay.ms` | Base delay for exponential backoff (milliseconds) | 2000 |
Concurrency Control
Use rag.llm.ollama.max.concurrent.requests to control the number of concurrent requests to Ollama. The default is 5. Adjust according to the resources of your Ollama server. Too many concurrent requests may overload the Ollama server and degrade response speed.
Per-Prompt-Type Settings
In Fess, LLM parameters can be customized per prompt type. Configure in fess_config.properties.
The following parameters can be set per prompt type:
- `rag.llm.ollama.{promptType}.temperature` - Temperature during generation
- `rag.llm.ollama.{promptType}.max.tokens` - Maximum number of tokens
- `rag.llm.ollama.{promptType}.context.max.chars` - Maximum number of context characters
Available prompt types:
| Prompt Type | Description |
|---|---|
| `intent` | Prompt for determining user intent |
| `evaluation` | Prompt for evaluating search results |
| `unclear` | Response prompt for unclear queries |
| `noresults` | Prompt for when no results are found |
| `docnotfound` | Prompt for when documents are not found |
| `answer` | Answer generation prompt |
| `summary` | Summary generation prompt |
| `faq` | FAQ generation prompt |
| `direct` | Direct response prompt |
Configuration Examples:
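A sketch of per-prompt-type tuning in `fess_config.properties`, using the `answer` and `summary` prompt types; the values are illustrative:

```properties
# answer generation: slightly creative, longer output
rag.llm.ollama.answer.temperature=0.7
rag.llm.ollama.answer.max.tokens=2048
rag.llm.ollama.answer.context.max.chars=8000

# summaries: more deterministic, shorter output
rag.llm.ollama.summary.temperature=0.3
rag.llm.ollama.summary.max.tokens=512
```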
Ollama Model Options
Ollama model parameters can be configured in fess_config.properties.
| Property | Description | Default |
|---|---|---|
| `rag.llm.ollama.top.p` | Top-P sampling value (0.0 to 1.0) | (Not set) |
| `rag.llm.ollama.top.k` | Top-K sampling value | (Not set) |
| `rag.llm.ollama.num.ctx` | Context window size | (Not set) |
| `rag.llm.ollama.default.*` | Default fallback settings | (Not set) |
| `rag.llm.ollama.options.*` | Global options | (Not set) |
Configuration Examples:
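A sketch of sampling options in `fess_config.properties`; the values are illustrative and should be matched to the model in use:

```properties
# sampling options passed to the Ollama model
rag.llm.ollama.top.p=0.9
rag.llm.ollama.top.k=40

# context window size in tokens
rag.llm.ollama.num.ctx=8192
```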
Thinking Model Support
When using thinking models such as gemma4 or qwen3.5, Fess supports configuring a thinking budget.
Set the following in fess_config.properties:
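The exact property key for the thinking budget is not shown in this section; the following is a hypothetical sketch of the pattern only, so verify the actual key name against the Fess configuration reference before using it:

```properties
# HYPOTHETICAL key name - confirm the real property in the Fess docs
rag.llm.ollama.think.budget=1024
```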
By setting the thinking budget, you can control the number of tokens allocated to the “thinking” step that the model performs before generating a response.
Network Configuration
Docker Configuration
The official docker-fess repository ships an Ollama overlay (compose-ollama.yaml). The minimum steps are:
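A sketch of the minimum steps, assuming the repository lives at `codelibs/docker-fess` and the compose file names shown here (file names may differ by version):

```shell
git clone https://github.com/codelibs/docker-fess.git
cd docker-fess/compose

# start Fess and OpenSearch together with the Ollama overlay
docker compose -f compose.yaml -f compose-opensearch3.yaml -f compose-ollama.yaml up -d
```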
The contents of compose-ollama.yaml (use as a reference if you build your own equivalent):
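The actual file ships with the repository; the following is an illustrative equivalent assembled from the notes that follow (the service names and the volume name are assumptions):

```yaml
services:
  ollama01:
    image: ollama/ollama:latest
    volumes:
      - ollama_data:/root/.ollama
    ports:
      - "11434:11434"

  fess01:
    environment:
      # run.sh downloads and installs the plugin JAR automatically
      - "FESS_PLUGINS=fess-llm-ollama:15.6.0"
      # all Fess settings must be passed as -D options inside FESS_JAVA_OPTS
      - "FESS_JAVA_OPTS=-Dfess.config.rag.chat.enabled=true -Dfess.config.rag.llm.ollama.api.url=http://ollama01:11434 -Dfess.system.rag.llm.name=ollama"

volumes:
  ollama_data:
```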
Notes:
- `FESS_PLUGINS=fess-llm-ollama:15.6.0` makes the container's `run.sh` download and install the plugin JAR into `app/WEB-INF/plugin/` automatically
- `-Dfess.config.rag.chat.enabled=true` enables AI mode
- `-Dfess.config.rag.llm.ollama.api.url=...` sets the Ollama server URL (within the Docker Compose network, resolve it by a service name such as `ollama01`)
- `-Dfess.system.rag.llm.name=ollama` only acts as the initial default before a value is persisted in OpenSearch; after startup it can also be changed from Administration > System > General (RAG section)
Note
Uppercase snake-case environment variables such as RAG_CHAT_ENABLED and RAG_LLM_NAME are not recognized directly by Fess. All values must be passed inside FESS_JAVA_OPTS as -Dfess.config.<key> (for fess_config.properties keys) or -Dfess.system.<key> (for system.properties keys).
Remote Ollama Server
When running Ollama on a separate server from Fess:
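Two things typically need to change: the Ollama server must listen on a non-loopback address, and Fess must point at it. A sketch (replace `ollama-host` with your server's actual address; `OLLAMA_HOST` is Ollama's own environment variable):

```shell
# on the Ollama server: listen on all interfaces instead of localhost only
OLLAMA_HOST=0.0.0.0 ollama serve
```

Then, in `fess_config.properties` on the Fess side:

```properties
rag.llm.ollama.api.url=http://ollama-host:11434
```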
Warning
Ollama does not have authentication by default, so when making it externally accessible, consider network-level security measures (firewall, VPN, etc.).
Using HTTP Proxy
Since Fess 15.6.1, the Ollama client shares the Fess-wide HTTP proxy configuration. If reaching the Ollama server requires going through a proxy (for example, when using a remote Ollama server), configure the following properties in fess_config.properties.
| Property | Description | Default |
|---|---|---|
| `http.proxy.host` | Proxy hostname (an empty string disables the proxy) | "" |
| `http.proxy.port` | Proxy port number | 8080 |
| `http.proxy.username` | Username for proxy authentication (optional; enables Basic auth when set) | "" |
| `http.proxy.password` | Password for proxy authentication | "" |
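For example, to route Fess HTTP traffic (including the Ollama client) through a proxy at `proxy.example.com:3128` (a hypothetical host):

```properties
http.proxy.host=proxy.example.com
http.proxy.port=3128

# optional: Basic authentication, enabled when a username is set
http.proxy.username=proxyuser
http.proxy.password=secret
```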
Note
Because Ollama typically runs locally or on an internal network, proxy configuration is only required in limited cases (for example, when reaching a remote Ollama server that is only accessible through a corporate proxy). This configuration also affects Fess-wide HTTP access (such as the crawler).
Model Selection Guide
Guidelines for selecting models based on intended use.
| Model | Size | Required VRAM | Use Case |
|---|---|---|---|
| `phi3:3.8b` | Small | 4 GB+ | Lightweight environments, simple Q&A |
| `gemma4:e4b` | Small-Medium | 8 GB+ | Well-balanced general use, thinking support (default) |
| `mistral:7b` | Medium | 8 GB+ | When high-quality responses are needed |
| `llama3.3:70b` | Large | 48 GB+ | Highest-quality responses, complex reasoning |
GPU Support
Ollama supports GPU acceleration. Using an NVIDIA GPU significantly improves inference speed.
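When running Ollama in Docker on an NVIDIA GPU, install the NVIDIA Container Toolkit first, then start the container with GPU access (the `--gpus=all` flag is the one documented by Ollama):

```shell
# run the Ollama container with access to all NVIDIA GPUs
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```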
Troubleshooting
Connection Errors
Symptom: Chat functionality shows errors, or the LLM is displayed as unavailable
Check the following:
- Verify Ollama is running
- Verify the model is downloaded
- Check firewall settings
- Verify the `fess-llm-ollama` plugin is placed in `app/WEB-INF/plugin/`
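The first two checks can be performed from the command line on the Ollama host:

```shell
# verify the Ollama server is responding
# (prints "Ollama is running" when healthy)
curl http://localhost:11434/

# verify the configured model has been downloaded
ollama list
```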
Model Not Found
Symptom: “Configured model not found in Ollama” appears in logs
Solutions:
- Verify the model name is correct (you may need to include the `:latest` tag)
- Download the required model
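Both steps use the Ollama CLI; the model name below assumes the default from this document:

```shell
# list installed models and their exact names, including tags
ollama list

# download the model referenced by rag.llm.ollama.model
ollama pull gemma4:e4b
```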
Timeout
Symptom: Requests time out
Solutions:
- Extend the timeout duration
- Consider using a smaller model or a GPU environment
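To extend the timeout, raise the request timeout (and, if connection setup itself is slow, the connect timeout) in `fess_config.properties`; the values below are illustrative:

```properties
# request timeout in milliseconds (default: 60000)
rag.llm.ollama.timeout=180000

# TCP connect timeout in milliseconds (default: 5000)
rag.llm.ollama.connect.timeout=10000
```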
Debug Settings
When investigating issues, adjust Fess log levels to output detailed Ollama-related logs.
app/WEB-INF/classes/log4j2.xml:
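A sketch of a logger entry for `log4j2.xml`; the logger (package) name shown is hypothetical, so match it to the actual package of the Ollama client classes in your Fess distribution:

```xml
<!-- inside the <Loggers> section; the package name is an assumption,
     verify it against your installation -->
<Logger name="org.codelibs.fess.llm" level="debug"/>
```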
References
- LLM Integration Overview
- AI Mode Configuration