### Quick‑Start Guide – **Ollama for AMD on Gentoo (gfx1036)**
> **TL;DR** – Just install the *community build* from the releases page, put it in `~/.local/bin`, and start it.
> If you want a **clean, self‑compiled binary** (smaller, only the GPUs you care about), follow the
> “Self‑build” section.
---
## 1. Verify your GPU arch
Open a terminal and run:
```bash
rocminfo | grep -i gfx
```
You should see something like:
```
  Name:                    gfx1036
  Marketing Name:          AMD R7 570
```
If `rocminfo` is missing, install it first (it pulls in the ROCm runtime, `dev-libs/rocr-runtime`):
```bash
emerge --ask dev-util/rocminfo
```
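Gentoo exposes ROCm GPU targets through the `AMDGPU_TARGETS` USE_EXPAND variable, so the math libraries (e.g. `sci-libs/rocBLAS`) are compiled only for the arches you list. A minimal sketch, assuming your ebuild version actually offers a `gfx1036` flag (check with `equery uses sci-libs/rocBLAS` first):
```bash
# /etc/portage/make.conf — build ROCm libraries only for your arch
AMDGPU_TARGETS="gfx1036"
```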
---
## 2. Choose your install route
| Route | What you get | When to use |
|-------|--------------|-------------|
| **Demo release** | One‑click binary that already ships a `librocblas.so` (`rocblas.dll` on Windows) built for `gfx1036` | You just want it working on your machine |
| **Self‑build** | A lean binary that only contains the libraries you need for `gfx1036` | You want a smaller install or need to patch the code |
| **Pre‑built ROCm libs + official Ollama** | You keep the official binary but swap out the `rocblas` library for the arch‑specific one | You’re on a community‑supported GPU but want the official distribution |

> For `gfx1036` the *demo release* is **ready‑to‑go**.
> The self‑build route is only needed if you want a custom build or you’re debugging.
---
## 3. Self‑build Ollama on Gentoo
> These steps assume you have `git`, `cmake`, `ninja`, Go, and a recent C++ compiler (gcc‑13+ or clang‑18+); the Ollama binary itself is a Go program.
> They also assume you have the ROCm **runtime** already installed (see section 1).
### 3.1 Clone the repo
```bash
git clone https://github.com/your-copy/ollama-for-amd.git
cd ollama-for-amd
```
### 3.2 Edit the build preset
Open `CMakePresets.json` (or create it if it doesn’t exist) and add your GPU arch:
```json
{
  "version": 2,
  "cmakeMinimumRequired": {
    "major": 3,
    "minor": 24,
    "patch": 0
  },
  "configurePresets": [
    {
      "name": "x64",
      "hidden": true,
      "generator": "Ninja",
      "description": "Build for AMD GFX1036",
      "binaryDir": "${sourceDir}/build",
      "cacheVariables": {
        "CMAKE_CXX_STANDARD": "20",
        "CMAKE_CXX_STANDARD_REQUIRED": "ON",
        "CMAKE_CXX_EXTENSIONS": "OFF",
        "CMAKE_BUILD_TYPE": "Release",
        "CMAKE_CXX_FLAGS_RELEASE": "-O3",
        "AMDGPU_TARGETS": "gfx1036"
      }
    }
  ]
}
```
> `AMDGPU_TARGETS` tells the ROCm compiler to emit code for `gfx1036`.
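If you'd rather not edit the JSON (or you want kernels for several arches in one binary), the same cache variable can be overridden at configure time; this is plain CMake behaviour, nothing specific to this fork:
```bash
# Semicolon‑separated list compiles kernels for every listed arch
cmake --preset x64 -DAMDGPU_TARGETS="gfx1036;gfx1030"
```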
### 3.3 (Optional) Use a pre‑built ROCm `rocblas` for `gfx1036`
If the ROCm runtime is installed, the HIP toolchain already knows where to find `librocblas.so`.
If you prefer the pre‑built `rocblas` that ships with the community release, avoid overwriting the
Portage‑managed copy under `/usr/lib64`; keep it in a separate directory instead:
```bash
# Example: libraries built against ROCm 6.1
mkdir -p ~/rocm-gfx1036
cp /path/to/rocm6.1-gfx1036/librocblas.so ~/rocm-gfx1036/
```
> Usually the ROCm runtime already contains the right `librocblas.so` for your arch, so this step is
> optional. rocBLAS typically loads its Tensile kernel files from a `rocblas/library/` directory next
> to the shared library, so copy that directory along if the release provides one.
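To make the dynamic loader pick up that copy without touching system paths, point `LD_LIBRARY_PATH` at it when starting the server (`~/rocm-gfx1036` is just the example directory from above):
```bash
export LD_LIBRARY_PATH="$HOME/rocm-gfx1036:$LD_LIBRARY_PATH"
ollama serve
```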
### 3.4 Build
The CMake step compiles the native GGML/ROCm backend; the `ollama` binary itself is built with the Go toolchain:
```bash
cmake --preset x64
cmake --build build
go build .
```
If you want a slimmer binary, run:
```bash
strip ollama
```
The resulting `./ollama` binary will detect and use your GPU.
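Before installing, it's worth checking that the ROCm backend library was actually produced. The path below matches the current upstream build layout and may differ between versions:
```bash
ls build/lib/ollama   # should list the HIP/ROCm backend, e.g. libggml-hip.so
```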
---
## 4. Install & run
### 4.1 Install the binary
```bash
mkdir -p ~/.local/bin
cp ollama ~/.local/bin/
chmod +x ~/.local/bin/ollama
```
Depending on the version, the binary may also look for its backend libraries in a `lib/ollama` directory next to `bin/` (this mirrors the official tarball layout); if GPU detection fails after installing, copy `build/lib/ollama` to `~/.local/lib/ollama` as well.
### 4.2 Start the Ollama server
```bash
ollama serve
```
You should see something like:
```
INFO source=gpu.go:386 msg="found compatible GPU: gfx1036"
INFO source=serve.go:512 msg="Ollama server started on :11434"
```
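If you want the server to come up automatically, a user service works well. A minimal sketch for systemd; Gentoo defaults to OpenRC, so skip or adapt this if you don't run systemd (unit name and paths are examples):
```bash
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/ollama.service <<'EOF'
[Unit]
Description=Ollama server

[Service]
ExecStart=%h/.local/bin/ollama serve
Restart=on-failure

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now ollama
```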
### 4.3 Run a model
Open another terminal:
```bash
ollama run llama3.1
```
If everything works you’ll get a prompt from the model; if not, check the logs for GPU‑related errors.
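To confirm the model actually landed on the GPU rather than silently falling back to CPU, check the loaded models:
```bash
ollama ps   # the PROCESSOR column should read "100% GPU", not "100% CPU"
```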
---
## 5. Common Troubleshooting
| Symptom | Likely cause | Fix |
|---------|--------------|-----|
| `no compatible GPUs were discovered` | `rocminfo` missing or wrong ROCm runtime | Install `dev-util/rocminfo`, re‑run `rocminfo`, and check which arch it reports. |
| `error: device not found` | ROCm version mismatch (e.g., 6.2 vs. 6.1) | Re‑install the ROCm runtime that matches your libraries: `emerge --ask dev-libs/rocr-runtime`. |
| Ollama crashes on startup | Incompatible `librocblas.so` | Point `LD_LIBRARY_PATH` at the `librocblas.so` from the community release (see section 3.3). |
| CPU only | Arch not recognised by the ROCm libraries | Export `HSA_OVERRIDE_GFX_VERSION=10.3.0` so `gfx1036` is treated as `gfx1030`. |

> **Tip** – Run `rocminfo` to double‑check that the GPU is listed and visible to ROCm.
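When discovery fails, these checks cover the usual suspects on Gentoo (device‑node permissions are a classic: your user normally needs to be in the `video` group to open `/dev/kfd`):
```bash
rocminfo | grep -i gfx            # is the arch visible to ROCm at all?
ls -l /dev/kfd /dev/dri/renderD*  # do the device nodes exist, and are they readable?
groups                            # are you in the video (and/or render) group?
OLLAMA_DEBUG=1 ollama serve       # verbose logging, including the GPU discovery path
```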
---
## 6. Using the **demo release** (easiest path)
If you do not want to build anything, just download the pre‑compiled binary from the releases page:
```
https://github.com/your-copy/ollama-for-amd/releases/download/v0.5.6/ollama-amd-gfx1036.tar.gz
```
```bash
tar xf ollama-amd-gfx1036.tar.gz
mv ollama ~/.local/bin/
```
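A quick smoke test that the downloaded binary runs on your machine before going further:
```bash
~/.local/bin/ollama --version
```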
Now start it exactly as in section 4.2.
---
## 7. Final Checklist
- [ ] ROCm runtime and tools installed (`emerge --ask dev-util/rocminfo`)
- [ ] `rocminfo` shows `gfx1036`
- [ ] `ollama` binary (built or downloaded) in `~/.local/bin`
- [ ] `ollama serve` logs “found compatible GPU: gfx1036”
- [ ] Model runs successfully (`ollama run llama3.1`)

You’re all set! Enjoy Ollama on your AMD R7 570.