If you want to run Granite models for inference, Ollama is probably the easiest approach. Here's how to get started and troubleshoot common issues.
Installation
Install Ollama using Homebrew:
brew install ollama
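If you want Ollama running in the background as a service (the brew services commands later in this post assume this setup), start it with:
brew services start ollama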
Choosing a Model
Visit the Ollama Granite4 library to explore available models.
Models with the -h suffix use the newer hybrid architecture, which offers several advantages over the conventional dense architecture:
- Smaller model size
- Larger context windows (up to 1M tokens)
- More efficient inference, since only a portion of the parameters is activated per token

| Name | Size | Context | Input | 
|---|---|---|---|
| granite4:latest | 2.1GB | 128K | Text | 
| granite4:micro | 2.1GB | 128K | Text | 
| granite4:micro-h | 1.9GB | 1M | Text | 
| granite4:tiny-h | 4.2GB | 1M | Text | 
| granite4:small-h | 19GB | 1M | Text | 
Pulling and Running a Model
Pull your chosen model:
ollama pull granite4:small-h
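The download can take a while depending on the model size. Once it finishes, you can confirm the model is available locally:
ollama list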
Test that it works (type /bye to exit the interactive session):
ollama run granite4:small-h
>>> Hello, what company is behind your creators?
The company that created me is called IBM (International Business Machines Corporation). ...
>>>
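Besides the interactive CLI, Ollama exposes an HTTP API on 127.0.0.1:11434 (you can see it listening in the server log below). A quick way to query the same model programmatically, using the /api/generate endpoint:
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "granite4:small-h",
  "prompt": "Hello, what company is behind your creators?",
  "stream": false
}'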
Troubleshooting: Error 500
Models with the hybrid architecture require a recent version of Ollama. If you hit a 500 Internal Server Error when running one of them, you most likely need to update Ollama and try again.
Error:
ollama run granite4:tiny-h
Error: 500 Internal Server Error: unable to load model: /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495
Checking the installed version shows an older release:
ollama --version
ollama version is 0.11.4
Debugging the Error
To understand the root cause, stop the background service, restart the Ollama server in the foreground with debug logging enabled, and rerun the failing command from a second terminal:
brew services stop ollama
OLLAMA_DEBUG=2 ollama serve
Full ollama serve log:
OLLAMA_DEBUG=2 ollama serve
time=2025-10-26T18:40:19.270+09:00 level=INFO source=routes.go:1304 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/ignacio/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-10-26T18:40:19.271+09:00 level=INFO source=images.go:477 msg="total blobs: 23"
time=2025-10-26T18:40:19.272+09:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-10-26T18:40:19.272+09:00 level=INFO source=routes.go:1357 msg="Listening on 127.0.0.1:11434 (version 0.11.4)"
time=2025-10-26T18:40:19.273+09:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-10-26T18:40:19.313+09:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="107.5 GiB" available="107.5 GiB"
[GIN] 2025/10/26 - 18:40:34 | 200 |      30.458µs |       127.0.0.1 | HEAD     "/"
time=2025-10-26T18:40:34.677+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/26 - 18:40:34 | 200 |   20.910833ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-26T18:40:34.692+09:00 level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-10-26T18:40:34.697+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=sched.go:226 msg="loading first model" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=memory.go:111 msg=evaluating library=metal gpu_count=1 available="[107.5 GiB]"
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.vision.block_count default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.key_length default=128
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.value_length default=128
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.707+09:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 gpu=0 parallel=1 available=115448725504 required="5.5 GiB"
time=2025-10-26T18:40:34.708+09:00 level=INFO source=server.go:135 msg="system memory" total="128.0 GiB" free="98.2 GiB" free_swap="0 B"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=memory.go:111 msg=evaluating library=metal gpu_count=1 available="[107.5 GiB]"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.vision.block_count default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.key_length default=128
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.value_length default=128
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.708+09:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[107.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.5 GiB" memory.required.partial="5.5 GiB" memory.required.kv="320.0 MiB" memory.required.allocations="[5.5 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="120.6 MiB" memory.graph.full="640.0 MiB" memory.graph.partial="640.0 MiB"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]
llama_model_load_from_file_impl: using device Metal (Apple M4 Max) - 110100 MiB free
llama_model_loader: loaded meta data with 42 key-value pairs and 666 tensors from /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = granitehybrid
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Granite 4.0 H Tiny
llama_model_loader: - kv   3:                         general.size_label str              = 64x994M
llama_model_loader: - kv   4:                            general.license str              = apache-2.0
llama_model_loader: - kv   5:                               general.tags arr[str,2]       = ["language", "granite-4.0"]
llama_model_loader: - kv   6:                  granitehybrid.block_count u32              = 40
llama_model_loader: - kv   7:               granitehybrid.context_length u32              = 1048576
llama_model_loader: - kv   8:             granitehybrid.embedding_length u32              = 1536
llama_model_loader: - kv   9:          granitehybrid.feed_forward_length u32              = 512
llama_model_loader: - kv  10:         granitehybrid.attention.head_count u32              = 12
llama_model_loader: - kv  11:      granitehybrid.attention.head_count_kv arr[i32,40]      = [0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, ...
llama_model_loader: - kv  12:               granitehybrid.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13: granitehybrid.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:                 granitehybrid.expert_count u32              = 64
llama_model_loader: - kv  15:            granitehybrid.expert_used_count u32              = 6
llama_model_loader: - kv  16:                   granitehybrid.vocab_size u32              = 100352
llama_model_loader: - kv  17:         granitehybrid.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:              granitehybrid.attention.scale f32              = 0.007812
llama_model_loader: - kv  19:              granitehybrid.embedding_scale f32              = 12.000000
llama_model_loader: - kv  20:               granitehybrid.residual_scale f32              = 0.220000
llama_model_loader: - kv  21:                  granitehybrid.logit_scale f32              = 6.000000
llama_model_loader: - kv  22: granitehybrid.expert_shared_feed_forward_length u32              = 1024
llama_model_loader: - kv  23:              granitehybrid.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  24:               granitehybrid.ssm.state_size u32              = 128
llama_model_loader: - kv  25:              granitehybrid.ssm.group_count u32              = 1
llama_model_loader: - kv  26:               granitehybrid.ssm.inner_size u32              = 3072
llama_model_loader: - kv  27:           granitehybrid.ssm.time_step_rank u32              = 48
llama_model_loader: - kv  28:       granitehybrid.rope.scaling.finetuned bool             = false
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = dbrx
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,100352]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,100352]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,100000]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 100257
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 100257
llama_model_loader: - kv  36:            tokenizer.ggml.unknown_token_id u32              = 100269
llama_model_loader: - kv  37:            tokenizer.ggml.padding_token_id u32              = 100256
llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {%- set tools_system_message_prefix =...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  337 tensors
llama_model_loader: - type q4_K:  286 tensors
llama_model_loader: - type q6_K:   43 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.94 GiB (4.87 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'granitehybrid'
llama_model_load_from_file_impl: failed to load model
time=2025-10-26T18:40:34.741+09:00 level=INFO source=sched.go:453 msg="NewLlamaServer failed" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 error="unable to load model: /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495"
[GIN] 2025/10/26 - 18:40:34 | 500 |   60.900167ms |       127.0.0.1 | POST     "/api/generate"
The key error message reveals the issue:
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'granitehybrid'
This confirms that the installed version of Ollama (0.11.4) doesn't recognize the granitehybrid architecture, so an update is required.
Update Ollama using Homebrew:
brew upgrade ollama
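If Ollama runs as a Homebrew service, restart it so the new binary is picked up:
brew services restart ollama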
Verify the update and test again:
ollama --version
ollama version is 0.12.6
ollama run granite4:tiny-h
>>> Hello, what are you doing today?
Hello! I'm here to assist you with any questions or tasks you may have...
>>>
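If you script model downloads, a small version guard avoids this class of error entirely. Here is a minimal sketch, assuming hybrid-model support arrived in Ollama 0.12 (consistent with what we saw above: 0.11.4 fails, 0.12.6 works):
#!/bin/sh
# Minimal sketch: skip pulling a hybrid model if the installed Ollama
# predates (by our assumption above) granitehybrid support in 0.12.
current="$(ollama --version | awk '{print $NF}')"   # e.g. "0.11.4"
major="${current%%.*}"
minor="${current#*.}"; minor="${minor%%.*}"
if [ "$major" -eq 0 ] && [ "$minor" -lt 12 ]; then
  echo "Ollama $current is too old for -h models; run: brew upgrade ollama" >&2
  exit 1
fi
ollama pull granite4:tiny-h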
That's it! You should now be able to run Granite models with Ollama without issues.