Running Granite Models with Ollama

If you want to run Granite models for inference, Ollama is probably the easiest approach. Here's how to get started and troubleshoot common issues.

Installation

Install Ollama using Homebrew:

brew install ollama
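
If you are not on macOS, Ollama also provides an official install script for Linux:

curl -fsSL https://ollama.com/install.sh | sh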

Choosing a Model

Visit the Ollama Granite4 library to explore available models.

Models with the -h suffix use the newer hybrid architecture, which offers several advantages over the conventional (non-hybrid) architecture:

  • Smaller model size
  • Larger context windows (up to 1M tokens)
  • More efficient inference by activating only a portion of parameters
Name               Size    Context   Input
granite4:latest    2.1GB   128K      Text
granite4:micro     2.1GB   128K      Text
granite4:micro-h   1.9GB   1M        Text
granite4:tiny-h    4.2GB   1M        Text
granite4:small-h   19GB    1M        Text

Pulling and Running a Model

Pull your chosen model:

ollama pull granite4:small-h
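
You can confirm the download with ollama list, which shows every local model along with its size:

ollama list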

Test that it works:

ollama run granite4:small-h
>>> Hello, what company is behind your creators?
The company that created me is called IBM (International Business Machines Corporation). ...
>>>
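
Ollama also exposes a local REST API on port 11434, so the same model can be queried without the interactive prompt, for example:

curl http://localhost:11434/api/generate -d '{
  "model": "granite4:small-h",
  "prompt": "Hello, what company is behind your creators?",
  "stream": false
}'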

Troubleshooting: Error 500

Models with the hybrid architecture require a recent version of Ollama. If you encounter a 500 Internal Server Error when trying to run a model, you likely need to update Ollama and then try to run the model again.

Error:

ollama run granite4:tiny-h
Error: 500 Internal Server Error: unable to load model: /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495

ollama --version
ollama version is 0.11.4

Debugging the Error

To understand the root cause, restart the Ollama server with debug logging enabled:

brew services stop ollama
OLLAMA_DEBUG=2 ollama serve
Full ollama serve log:
time=2025-10-26T18:40:19.270+09:00 level=INFO source=routes.go:1304 msg="server config" env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:DEBUG-4 OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/ignacio/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-10-26T18:40:19.271+09:00 level=INFO source=images.go:477 msg="total blobs: 23"
time=2025-10-26T18:40:19.272+09:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-10-26T18:40:19.272+09:00 level=INFO source=routes.go:1357 msg="Listening on 127.0.0.1:11434 (version 0.11.4)"
time=2025-10-26T18:40:19.273+09:00 level=DEBUG source=sched.go:106 msg="starting llm scheduler"
time=2025-10-26T18:40:19.313+09:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="107.5 GiB" available="107.5 GiB"
[GIN] 2025/10/26 - 18:40:34 | 200 |      30.458µs |       127.0.0.1 | HEAD     "/"
time=2025-10-26T18:40:34.677+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
[GIN] 2025/10/26 - 18:40:34 | 200 |   20.910833ms |       127.0.0.1 | POST     "/api/show"
time=2025-10-26T18:40:34.692+09:00 level=DEBUG source=sched.go:183 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=3 gpu_count=1
time=2025-10-26T18:40:34.697+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=general.alignment default=32
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=sched.go:226 msg="loading first model" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=memory.go:111 msg=evaluating library=metal gpu_count=1 available="[107.5 GiB]"
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.vision.block_count default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.key_length default=128
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.value_length default=128
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.707+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.707+09:00 level=INFO source=sched.go:786 msg="new model will fit in available VRAM in single GPU, loading" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 gpu=0 parallel=1 available=115448725504 required="5.5 GiB"
time=2025-10-26T18:40:34.708+09:00 level=INFO source=server.go:135 msg="system memory" total="128.0 GiB" free="98.2 GiB" free_swap="0 B"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=memory.go:111 msg=evaluating library=metal gpu_count=1 available="[107.5 GiB]"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.vision.block_count default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.key_length default=128
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.value_length default=128
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default=0
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=ggml.go:208 msg="key with type not found" key=granitehybrid.attention.head_count_kv default="&{size:0 values:[]}"
time=2025-10-26T18:40:34.708+09:00 level=INFO source=server.go:175 msg=offload library=metal layers.requested=-1 layers.model=41 layers.offload=41 layers.split="" memory.available="[107.5 GiB]" memory.gpu_overhead="0 B" memory.required.full="5.5 GiB" memory.required.partial="5.5 GiB" memory.required.kv="320.0 MiB" memory.required.allocations="[5.5 GiB]" memory.weights.total="3.9 GiB" memory.weights.repeating="3.8 GiB" memory.weights.nonrepeating="120.6 MiB" memory.graph.full="640.0 MiB" memory.graph.partial="640.0 MiB"
time=2025-10-26T18:40:34.708+09:00 level=DEBUG source=server.go:291 msg="compatible gpu libraries" compatible=[]
llama_model_load_from_file_impl: using device Metal (Apple M4 Max) - 110100 MiB free
llama_model_loader: loaded meta data with 42 key-value pairs and 666 tensors from /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = granitehybrid
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Granite 4.0 H Tiny
llama_model_loader: - kv   3:                         general.size_label str              = 64x994M
llama_model_loader: - kv   4:                            general.license str              = apache-2.0
llama_model_loader: - kv   5:                               general.tags arr[str,2]       = ["language", "granite-4.0"]
llama_model_loader: - kv   6:                  granitehybrid.block_count u32              = 40
llama_model_loader: - kv   7:               granitehybrid.context_length u32              = 1048576
llama_model_loader: - kv   8:             granitehybrid.embedding_length u32              = 1536
llama_model_loader: - kv   9:          granitehybrid.feed_forward_length u32              = 512
llama_model_loader: - kv  10:         granitehybrid.attention.head_count u32              = 12
llama_model_loader: - kv  11:      granitehybrid.attention.head_count_kv arr[i32,40]      = [0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, ...
llama_model_loader: - kv  12:               granitehybrid.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13: granitehybrid.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:                 granitehybrid.expert_count u32              = 64
llama_model_loader: - kv  15:            granitehybrid.expert_used_count u32              = 6
llama_model_loader: - kv  16:                   granitehybrid.vocab_size u32              = 100352
llama_model_loader: - kv  17:         granitehybrid.rope.dimension_count u32              = 128
llama_model_loader: - kv  18:              granitehybrid.attention.scale f32              = 0.007812
llama_model_loader: - kv  19:              granitehybrid.embedding_scale f32              = 12.000000
llama_model_loader: - kv  20:               granitehybrid.residual_scale f32              = 0.220000
llama_model_loader: - kv  21:                  granitehybrid.logit_scale f32              = 6.000000
llama_model_loader: - kv  22: granitehybrid.expert_shared_feed_forward_length u32              = 1024
llama_model_loader: - kv  23:              granitehybrid.ssm.conv_kernel u32              = 4
llama_model_loader: - kv  24:               granitehybrid.ssm.state_size u32              = 128
llama_model_loader: - kv  25:              granitehybrid.ssm.group_count u32              = 1
llama_model_loader: - kv  26:               granitehybrid.ssm.inner_size u32              = 3072
llama_model_loader: - kv  27:           granitehybrid.ssm.time_step_rank u32              = 48
llama_model_loader: - kv  28:       granitehybrid.rope.scaling.finetuned bool             = false
llama_model_loader: - kv  29:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  30:                         tokenizer.ggml.pre str              = dbrx
llama_model_loader: - kv  31:                      tokenizer.ggml.tokens arr[str,100352]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  32:                  tokenizer.ggml.token_type arr[i32,100352]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  33:                      tokenizer.ggml.merges arr[str,100000]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  34:                tokenizer.ggml.bos_token_id u32              = 100257
llama_model_loader: - kv  35:                tokenizer.ggml.eos_token_id u32              = 100257
llama_model_loader: - kv  36:            tokenizer.ggml.unknown_token_id u32              = 100269
llama_model_loader: - kv  37:            tokenizer.ggml.padding_token_id u32              = 100256
llama_model_loader: - kv  38:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  39:                    tokenizer.chat_template str              = {%- set tools_system_message_prefix =...
llama_model_loader: - kv  40:               general.quantization_version u32              = 2
llama_model_loader: - kv  41:                          general.file_type u32              = 15
llama_model_loader: - type  f32:  337 tensors
llama_model_loader: - type q4_K:  286 tensors
llama_model_loader: - type q6_K:   43 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.94 GiB (4.87 BPW) 
llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'granitehybrid'
llama_model_load_from_file_impl: failed to load model
time=2025-10-26T18:40:34.741+09:00 level=INFO source=sched.go:453 msg="NewLlamaServer failed" model=/Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495 error="unable to load model: /Users/ignacio/.ollama/models/blobs/sha256-9811e90b0eecf2b194aafad5bb386279f338a45412a9e6f86b718cca6626c495"
[GIN] 2025/10/26 - 18:40:34 | 500 |   60.900167ms |       127.0.0.1 | POST     "/api/generate"

The key error message reveals the issue:

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'granitehybrid'

This confirms that the current version of Ollama doesn't recognize the hybrid architecture, so an update is required.

Update Ollama using Homebrew:

brew upgrade ollama

Verify the update and test again:

ollama --version
ollama version is 0.12.6

ollama run granite4:tiny-h
>>> Hello, what are you doing today?
Hello! I'm here to assist you with any questions or tasks you may have...
>>>

That's it! You should now be able to run Granite models with Ollama without issues.

UV - Python Package Manager

What is UV?

UV is a next-generation Python toolchain written in Rust. It unifies package management, virtual environments, and Python version control into a single, high-performance binary: no activation steps, and no preinstalled Python required.

Why UV Exists

Python development traditionally requires multiple tools — pip, venv, and pyenv — leading to slow installs, multiple CLIs, and the “bootstrap problem” (needing Python installed first). Even Poetry, which improved dependency resolution, still relies on external Python managers and mutates the shell via activated environments, causing inconsistent command behavior.

UV rethinks the workflow. It replaces multiple tools with one binary, supports the standard pyproject.toml format (PEP 518/621), and runs code with uv run, activating virtual environments internally per process without affecting your terminal. It also includes a Python version manager, SAT-based dependency resolution, and lockfile reproducibility.

The result: a tool that is 10–20× faster than pip or Poetry, works the same across macOS, Linux, and Windows, and removes most friction from Python development workflows.
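
As a quick taste of the no-activation workflow, uv run can execute a one-off command with an ad-hoc dependency, creating the environment internally and leaving your shell untouched:

uv run --with requests python -c "import requests; print(requests.__version__)"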

Feature Comparison

Dependency Management

  Dependency resolution algorithm
    pyenv + pip + venv: Sequential / greedy (may install conflicts)
    Poetry + pyenv:     ✅ SAT solver (correct, reproducible)
    UV:                 ✅ SAT solver (optimized, parallel)

  Reproducibility with lock files
    pyenv + pip + venv: requirements.txt: none; pip freeze: partial, no hashes
    Poetry + pyenv:     ✅ Yes (poetry.lock)
    UV:                 ✅ Yes (uv.lock)

  Performance (install ~50 packages)
    pyenv + pip + venv: 1× baseline (30–45s)
    Poetry + pyenv:     ~1.3× faster (25–35s)
    UV:                 ✅ 10–15× faster (2–3s, optimized in Rust)

Environment Management

  Virtual environment activation
    pyenv + pip + venv: Manual activation required; mutates shell
    Poetry + pyenv:     poetry run auto-activates
    UV:                 uv run auto-activates internally (no shell mutation)

  Tool integration
    pyenv + pip + venv: 3 separate tools
    Poetry + pyenv:     2 tools (still needs pyenv)
    UV:                 ✅ Single tool

Python Version Management

  Install / switch Python versions
    pyenv + pip + venv: Via pyenv (mutates shell environment)
    Poetry + pyenv:     Via pyenv (mutates shell environment)
    UV:                 ✅ Built-in (no shell mutation)

  Bootstrap requirement
    pyenv + pip + venv: Needs Python preinstalled
    Poetry + pyenv:     Needs Python preinstalled
    UV:                 ✅ None (self-contained binary)

Typical Workflows Compared

Note: Poetry + pyenv covers most of the same workflow steps as UV, but UV is significantly faster, isolates environments without shell mutation, and includes a built-in Python version manager.

pyenv + pip + venv:

  pyenv install 3.12
  pyenv local 3.12
  python -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt   # wait 30–60s
  python script.py
  deactivate

Poetry + pyenv:

  pyenv install 3.12
  pyenv local 3.12
  poetry install                    # wait 20–40s
  poetry run python script.py
  # no deactivate needed

UV:

  uv python install 3.12
  uv sync                           # wait 2–3s
  uv run python script.py
  # no deactivate needed

Using UV

There are several posts about uv around. Below are just some simple commands I find myself using every day.

Install UV

# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Verify
uv --version

Start a Project

# Create and enter project
mkdir my-project && cd my-project

# Initialize pyproject.toml and environment
uv init

# Or if pyproject.toml already exists:
uv sync

Minimal pyproject.toml example:

[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.11"

dependencies = [
    "requests>=2.31.0",
]

Managing Dependencies

Adding or removing packages updates pyproject.toml automatically:

# Add main dependency
uv add requests

# Add dev dependency
uv add --dev pytest

# Remove dependency
uv remove requests

The resulting pyproject.toml:

[project]
dependencies = [
    "requests>=2.31.0",
]

[dependency-groups]
dev = [
    "pytest",
]

Dependency Groups (Optional)

Defined in pyproject.toml:

[dependency-groups]
dev = ["pytest", "mypy", "ruff"]
docs = ["mkdocs", "mkdocs-material"]

Install specific groups:

uv sync --group dev
uv sync --group docs

Python Versions

uv manages Python versions itself; the project pin is stored in a .python-version file, written by uv python pin:

# Pin the project's Python version (writes .python-version)
uv python pin 3.12

# Install / switch Python
uv python install 3.12
uv python install 3.11

# List installed versions
uv python list

Running Scripts & Tools

Commands either come from packages installed in the environment or are defined as entry points in pyproject.toml; uv run resolves both:

[project.scripts]
my-tool = "my_package.main:cli"

# Run Python script or module
uv run python main.py
uv run python -m module_name

# Run installed tools
uv run pytest
uv run mypy src/
uv run ruff check src/
uv run ruff format src/

# Run a custom script
uv run my-tool

Hope it helps.

Camera rotation angles in AVFoundation

I recently worked with the AVFoundation camera and used AVCaptureDevice.RotationCoordinator to sync device rotation to the video preview layer. That was straightforward, but the problem was that the output sampleBuffer was not rotated.

So, for frame-processing purposes I mapped the rotation angle to a CGImagePropertyOrientation with the function below and applied it to each sampleBuffer.

// Maps the preview connection's rotation angle (in degrees) to the
// orientation expected by frame-processing APIs such as Vision or CoreImage.
// `isFrontCamera` is a property of the enclosing class; front-camera frames
// are mirrored, hence the *Mirrored variants.
private func cgImageOrientation(from rotationAngle: CGFloat) -> CGImagePropertyOrientation {
    switch rotationAngle {
    case 0: return isFrontCamera ? .upMirrored : .up
    case 90: return isFrontCamera ? .rightMirrored : .right
    case 180: return isFrontCamera ? .downMirrored : .down
    case 270: return isFrontCamera ? .leftMirrored : .left
    default: return isFrontCamera ? .upMirrored : .up
    }
}
These are just notes on the angles I observed, which have given successful results so far. (It is probably device specific, so I would like to find a more general way of dealing with this ...)

Model        Device orientation   Preview layer connection angle   Camera   cgImageOrientation result   ok?
iPad (A16)   landscape top        180                              front    downMirrored                ok
             portrait right       90                               front    rightMirrored               ok
             landscape bottom     0                                front    upMirrored                  ok
             portrait left        270                              front    leftMirrored                ok

Service Logs in Azure Web App via Terraform

I spent a lot of time until I finally set up the logs of my web application in Azure correctly.

Mission

After creating my web app, I want to see the logs in "Log Analytics Workspace" > "Logs".
To do so, I first need to tell "App Service Logs" to capture my STDOUT and STDERR. This is quite easy in the UI; it is basically just turning the "Application Logging" toggle to "File System".
I want to do this not via the UI but via Terraform scripts.

My Problem

The Linux Web App documentation explains what logs, application_logs, and http_logs do, but it does not explicitly state the relationship between them.
It turns out that application_logs needs http_logs to work properly. I was passing ONLY application_logs, and it did not have any effect on the toggle in the UI.

Solution

Pass both application_logs and http_logs: even if you are not interested in http_logs, it is required for application logs to work. This is likely an implementation detail that is not well documented.

resource "azurerm_linux_web_app" "main" {
  name                = "${var.app_name}-web"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  service_plan_id     = azurerm_service_plan.main.id

  site_config {
    ...
  }

  app_settings = {
    ...
  }

  identity {
    ...
  }

  logs {
    application_logs {
      file_system_level = "Information"  # Options: Off, Error, Warning, Information, Verbose
    }

    http_logs {
      file_system {
        retention_in_days = 7
        retention_in_mb   = 35
      }
    }

    detailed_error_messages = true
    failed_request_tracing = true
  }
}
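
To check that the setting was actually applied, the logging configuration can also be queried with the Azure CLI; the app and resource group names below are placeholders:

az webapp log show --name my-app-web --resource-group my-rg --output table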

Using Azure CLI

There are two tools: az (Azure CLI) and azd (Azure Developer CLI). They serve different purposes but complement each other.

            Azure CLI                                     Azure Developer CLI
Purpose     General-purpose command-line tool             High-level developer-focused CLI
Use case    Manages Azure resources and configurations.   Manages end-to-end Azure workflows and applications.

1. Azure CLI

1.1 Install

Install it:
brew install azure-cli
Check it works:
az --version
It should show the installed version. As of 2025/01/01 my Azure CLI version is 2.67.0.

1.2 Login

To log in, run the following command. It will open a browser where you can log in. When the login is done, it will communicate the result back to the CLI.
az login
Since Azure CLI version 2.61.0, we are able to select the subscription at login time. Just follow the instructions in the CLI and input the number that corresponds to the desired subscription. More in the official docs.
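If you are on a machine without a browser, the device code flow is an alternative:
az login --use-device-code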

1.3 Check account/subscription information

If successfully logged in, the active account and subscription information can be retrieved:
az account show --output table
or with a more specific query:
az account show --query "{Name:name, ID:id}" --output table
The former will show a table like this:
EnvironmentName   HomeTenantId                           IsDefault   Name                       State     TenantDefaultDomain             TenantDisplayName   TenantId
AzureCloud        ccfc0000-0000-0000-0000-000000000000   True        Azure Subscription Basic   Enabled   meemaildomain.onmicrosoft.com   Default Directory   ccfc0000-0000-0000-0000-000000000000

1.4 Retrieve all subscriptions

All available subscriptions can be retrieved with
az account list --output table
or with a more specific query:
az account list --query "[].{Name:name, ID:id}" --output table
The former will show a table like this:
Name                       CloudName    SubscriptionId      TenantId                               State     IsDefault
Azure Subscription Basic   AzureCloud   00000-0000-0000..   ccfc0000-0000-0000-0000-000000000000   Enabled   true
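To switch the active subscription, pass one of the listed names or IDs to az account set:
az account set --subscription "Azure Subscription Basic"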

1.5 List Resource Groups

List resource groups in a nice table. If you prefer the JSON format, just omit --output table.
az group list --output table
or with a more specific query:
az group list --query "[].{Name:name, Location:location}" --output table

2. Azure Developer CLI

2.1 Install

Install it (More in the official docs.):
brew tap azure/azd && brew install azd
Check it works:
azd version

2.2 See help

There are various subcommands like init, up, etc. See a list of them in the help:
azd --help

2.3 Create a simple web

Official documentation shows various templates. I will use Python and Flask with App Service because it is the cheapest for small apps. Create the app from a template on the local computer. We will be interactively asked for an environment name; it is optional.
azd init --template azure-flask-postgres-flexible-appservice 

2.4 Deploying the web app

If the app was created with the above template, it will contain a bunch of Bicep files and an azure.yaml file, which are enough to deploy with the following command. This command will deploy to the default environment. If it is the first time deploying to that environment, the subscription and region will be asked interactively too.
azd up
The resource group name and other service names will be derived from the environment name (dev1).

(✓) Done: Resource group: dev1-rg (4.324s)
(✓) Done: Virtual Network: dev1-vnet (7.855s)
(✓) Done: Log Analytics workspace: dev1-AAAAAAAAAAAAA-loganalytics (17.576s)
(✓) Done: Application Insights: dev1-AAAAAAAAAAAAA-appinsights (5.623s)
(✓) Done: Portal dashboard: dev1-AAAAAAAAAAAAA-appinsights-dashboard (1.832s)
(✓) Done: Key Vault: dev1AAAAAAAAAAAAA-vault (24.42s)
(✓) Done: Private Endpoint: dev1-keyvault-pe (33.304s)
(✓) Done: Azure Database for PostgreSQL flexible server: dev1-AAAAAAAAAAAAA-postgresql (4m5.414s)
(✓) Done: App Service plan: dev1-AAAAAAAAAAAAA-appsvc-serviceplan (10.593s)
(✓) Done: App Service: dev1-AAAAAAAAAAAAA-appsvc-web (43.647s)
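
When you are done experimenting, everything provisioned for the environment can be deleted again:

azd down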

2.5 Deploy to another environment/region/subscription

Create a new environment and pass it to the up command.
azd env new dev2
azd up --environment dev2  
Where is environment information stored?
Information for each environment is stored in a separate file: .azure/[EnvironmentName]/.env .
  • Environments can be created with azd init --template [template-name] and azd env new [EnvironmentName] .
  • The environment name is stored in the AZURE_ENV_NAME variable.
  • Additionally, the subscription ID and location (region) are also stored in the same file.
  • Information gathered at deployment/provisioning time is also stored (i.e.: BACKEND_URI, AZURE_KEY_VAULT_ENDPOINT, AZURE_KEY_VAULT_NAME, etc.).
.azure/dev2/.env :
AZURE_ENV_NAME="dev2"
AZURE_LOCATION="eastasia"
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

2.6 Check environments


View the current environments. The default environment will be used by azd up unless the --environment flag is used.
azd env list
NAME    DEFAULT   LOCAL   REMOTE
dev1    false     true    false
dev2    false     true    false
nacho   true      true    false
View other related variables:
azd env get-values
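
To change which environment is the default, use azd env select (here switching to dev2):
azd env select dev2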

Flutter installation

This is a summary of what this YouTube video explains. Recommended: YouTube - Install Flutter macOS - Derek Banas.

1. Install Flutter

brew install --cask flutter
flutter doctor
flutter upgrade

2. Optional: Install Android Studio

Install Android Studio if you plan to target Android devices.
Download the latest Android Studio from https://developer.android.com/studio and install the Flutter plugin: Settings > Plugins > Flutter.
In Settings > System Settings:
  • In the SDK Platforms tab, make sure at least one API level is installed. This is needed for running apps.
  • In the SDK Tools tab, make sure Android SDK Command Line Tools (latest) is installed. This is needed by Flutter.
  • Set ANDROID_HOME. Get your Android SDK location from Android Studio:
    echo 'export ANDROID_HOME="/Users/ignacio/Library/Android/sdk"' >> ~/.zshrc
    If needed, restart the terminal to get the settings applied.
In recent versions of Android Studio the jre directory has been renamed to jbr, so Flutter gets confused. To solve this we create a link. (Thanks to this Stack Overflow answer!)
cd /Applications/Android\ Studio.app/Contents
ln -s jbr jre
Now Flutter should be able to fully recognize Android; we run the doctor:
flutter doctor -v
flutter doctor --android-licenses
Last step in Android Studio: the first time we create a Flutter project we need to set up the Flutter SDK path. We get it from the flutter doctor -v command. Just look for a line like this:
• Flutter version 3.7.1 on channel stable at /opt/homebrew/Caskroom/flutter/3.7.0/flutter

3. Optional: Install CocoaPods

I suspect this is only required if you target iOS devices.
brew install cocoapods

4. Final check

At the end we should have everything set up. Run the doctor:
flutter doctor -v

Aggregate target to create xcframework

It has been a while since I created an xcframework from scratch, so here is hopefully an all-mighty script that will work with a framework target.
Create an aggregate target and add a "Run Script" phase like the one below.
set -e

N_SCHEME_NAME=MySchemeNameHere
N_BUILD_DIR="build"
N_IOS_XCARCHIVE="${N_BUILD_DIR}/${N_SCHEME_NAME}-iphoneos.xcarchive"
N_SIM_XCARCHIVE="${N_BUILD_DIR}/${N_SCHEME_NAME}-iphonesimulator.xcarchive"
N_XCFRAMEWORK="${N_BUILD_DIR}/${N_SCHEME_NAME}.xcframework"

rm -rf "$N_BUILD_DIR"

echo "🚀 Building $N_IOS_XCARCHIVE"
xcodebuild archive \
    -scheme "$N_SCHEME_NAME" \
    -archivePath "$N_IOS_XCARCHIVE" \
    -sdk iphoneos \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES \
    SKIP_INSTALL=NO

echo "🚀 Building $N_SIM_XCARCHIVE" 
xcodebuild archive \
    -scheme "$N_SCHEME_NAME" \
    -archivePath "$N_SIM_XCARCHIVE" \
    -sdk iphonesimulator \
    BUILD_LIBRARY_FOR_DISTRIBUTION=YES \
    SKIP_INSTALL=NO
 
echo "🚀 Building $N_XCFRAMEWORK"
xcodebuild -create-xcframework \
    -framework "${N_IOS_XCARCHIVE}/Products/Library/Frameworks/${N_SCHEME_NAME}.framework" \
    -framework "${N_SIM_XCARCHIVE}/Products/Library/Frameworks/${N_SCHEME_NAME}.framework" \
    -output "${N_XCFRAMEWORK}"

echo "🚀🟢 Build SUCCESS"

Hope it helps.

This work is licensed under BSD Zero Clause License | nacho4d ®