
Getting started

In this tutorial, we'll use matchbox with Terraform to provision Fedora CoreOS or Flatcar Linux machines.

We'll install the matchbox service, set up a PXE network boot environment, and use Terraform configs to declare infrastructure and apply resources on matchbox.

Requirements


Install matchbox on a host server or Kubernetes cluster. Generate TLS credentials and enable the gRPC API as directed. Save the ca.crt, client.crt, and client.key on your local machine (e.g. ~/.matchbox).

Verify the matchbox read-only HTTP endpoints are accessible.

$ curl http://matchbox.example.com:8080
matchbox

Verify your TLS client certificate and key can be used to access the gRPC API.

$ openssl s_client -connect matchbox.example.com:8081 \
  -CAfile ~/.matchbox/ca.crt \
  -cert ~/.matchbox/client.crt \
  -key ~/.matchbox/client.key


Install Terraform v0.13+ on your system.

$ terraform version
Terraform v0.13.3

Examples


Clone the matchbox source.

$ git clone https://github.com/poseidon/matchbox.git
$ cd matchbox/examples/terraform

Select one of the Terraform examples:

  • fedora-coreos-install - PXE boot, install Fedora CoreOS to disk, reboot, and machines come up with your SSH authorized key set
  • flatcar-install - PXE boot, install Flatcar Linux to disk, reboot, and machines come up with your SSH authorized key set

These aren't exactly full clusters, but they show declarations and network provisioning.

$ cd fedora-coreos-install    # or flatcar-install


Fedora CoreOS images are only served via HTTPS, so your iPXE firmware must be compiled to support HTTPS downloads.

Let's review the Terraform configs and learn a bit about matchbox.


Matchbox is configured as a provider platform for bare-metal resources.

// Configure the matchbox provider
provider "matchbox" {
  endpoint    = var.matchbox_rpc_endpoint
  client_cert = file("~/.matchbox/client.crt")
  client_key  = file("~/.matchbox/client.key")
  ca          = file("~/.matchbox/ca.crt")
}

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.6.1"
    }
    matchbox = {
      source  = "poseidon/matchbox"
      version = "0.4.1"
    }
  }
}


Profiles

Machine profiles specify the kernel, initrd, kernel args, Ignition config, and other configs (e.g. templated Container Linux Config, Cloud-Config, generic) used to network boot and provision a bare-metal machine. The profile below PXE boots machines using a Fedora CoreOS kernel and initrd (see the assets docs to learn about caching images for speed), performs a disk install, reboots (first boot from disk), and uses a Fedora CoreOS Config to generate the Ignition config that provisions the machine.

// Fedora CoreOS profile
resource "matchbox_profile" "fedora-coreos-install" {
  name   = "worker"
  kernel = "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-kernel-x86_64"
  initrd = [
    "https://builds.coreos.fedoraproject.org/prod/streams/${var.os_stream}/builds/${var.os_version}/x86_64/fedora-coreos-${var.os_version}-live-initramfs.x86_64.img"
  ]

  args = [
    "coreos.inst.install_dev=/dev/sda",
    "coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}",
    "console=tty0",
    "console=ttyS0",
  ]

  raw_ignition = data.ct_config.worker-ignition.rendered
}

data "ct_config" "worker-ignition" {
  content = data.template_file.worker-config.rendered
  strict  = true
}

data "template_file" "worker-config" {
  template = file("fcc/fedora-coreos.yaml")
  vars = {
    ssh_authorized_key = var.ssh_authorized_key
  }
}


Groups

Matcher groups match machines to profiles based on labels like MAC, UUID, etc., and template in machine-specific values. The group below has no selector block, so any machine that network boots from matchbox will match this group and be provisioned with the fedora-coreos-install profile. Machines are matched to the most specific matching group.

// Default matcher group for machines
resource "matchbox_group" "default" {
  name    = "default"
  profile = matchbox_profile.fedora-coreos-install.name
}
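
For illustration, a group with a selector block matches only machines whose labels match the selector; the group name and MAC address below are hypothetical:

// Example: match a single machine by MAC address (hypothetical values)
resource "matchbox_group" "node1" {
  name    = "node1"
  profile = matchbox_profile.fedora-coreos-install.name

  selector = {
    mac = "52:54:00:a1:9c:ae"
  }
}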


Variables

Some Terraform variables are used in the examples. A quick way to set their values is to create a terraform.tfvars file.

$ cp terraform.tfvars.example terraform.tfvars

matchbox_http_endpoint = "http://matchbox.example.com:8080"
matchbox_rpc_endpoint  = "matchbox.example.com:8081"
ssh_authorized_key     = "YOUR_SSH_KEY"


Apply

Initialize the Terraform workspace. Then plan and apply the resources.

$ terraform init
$ terraform plan
Plan: 4 to add, 0 to change, 0 to destroy.
$ terraform apply
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Matchbox serves configs to machines over HTTP and respects query parameters, if you're interested in inspecting them.
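
For example, you can fetch the configs a booting machine would receive; the endpoint and MAC address below are placeholders:

$ curl "http://matchbox.example.com:8080/ipxe?mac=52:54:00:a1:9c:ae"
$ curl "http://matchbox.example.com:8080/ignition?mac=52:54:00:a1:9c:ae"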


Network

Matchbox can integrate with many on-premise network setups. It does not seek to be the DHCP server, TFTP server, or DNS server for the network. Instead, matchbox serves iPXE scripts as the entrypoint for provisioning network booted machines. PXE clients are supported by chainloading iPXE firmware.

In the simplest case, an iPXE-enabled network can chain to Matchbox,

# /var/www/html/ipxe/default.ipxe
chain http://matchbox.example.com:8080/boot.ipxe

Read the network setup docs for the complete range of options. Network admins have a great amount of flexibility:

  • May keep using existing DHCP, TFTP, and DNS services
  • May configure subnets, architectures, or specific machines to delegate to matchbox
  • May place matchbox behind a menu entry (timeout and default to matchbox)

If you've never set up a PXE-enabled network before, or you're trying to set up a home lab, check out the container image copy-paste examples and see the section about proxy-DHCP.
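
As a sketch, a proxy-DHCP service can run alongside your router's existing DHCP server using the quay.io/poseidon/dnsmasq container image; the subnet and matchbox endpoint below are placeholders, so verify the flags against your network before running:

$ sudo docker run --rm --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
  -d -q \
  --dhcp-range=192.168.1.1,proxy,255.255.255.0 \
  --enable-tftp --tftp-root=/var/lib/tftpboot \
  --dhcp-userclass=set:ipxe,iPXE \
  --pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
  --pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe \
  --log-queries \
  --log-dhcp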


Boot machines

It's time to network boot your machines. Use the BMC's remote management capabilities (may be vendor-specific) to set the boot device (for the next boot only) to PXE and power on each machine.

$ ipmitool -H HOST -U USER -P PASS power off
$ ipmitool -H HOST -U USER -P PASS chassis bootdev pxe
$ ipmitool -H HOST -U USER -P PASS power on

Each machine should chainload iPXE, delegate to Matchbox, receive its iPXE config (or other supported configs) and begin the provisioning process. The examples assume machines are configured to boot from disk first and PXE only when requested, but you can write profiles for different cases.

Once the install completes and the machine reboots, you can SSH in as the core user.

$ ssh core@node1.example.com

To re-provision the machine for another purpose, run terraform apply and PXE boot machines again.

Going Further

Matchbox can be used to provision multi-node Fedora CoreOS or Flatcar Linux clusters at one or many on-premise sites if deployed in an HA way. Machines can be matched individually by MAC address, UUID, region, or other labels you choose. Installs can be made much faster by caching images in the built-in HTTP assets server.

Ignition can be used to partition disks and filesystems, write systemd units, write networkd configs or regular files, and create users. Nodes can be network provisioned into a complete cluster system that meets your needs. For example, see Typhoon.