Synterran Compute

Reserved B300 capacity, Q3 2026.

Synterran operates AI compute capacity today and is accepting reservations for NVIDIA B300 capacity coming online in Atlanta in Q3 2026. Direct partnership, configured to your workload, not self-serve cloud.

Accepting reservations  ·  Q3 2026 delivery  ·  Atlanta
At a glance

What you’re reserving.

Hardware

NVIDIA B300 on HGX OCP ORV3. Liquid-cooled, 8-GPU islands, 72–144 GPUs per rack. NVL72 topology available for pretraining-scale workloads.

Terms

12–60 month reservations. 15% reservation deposit. Pricing by conversation, not by list.

Delivery

Atlanta, Q3 2026. Eight-rack first build with expansion to follow.

Why reserve

Bare-metal capacity, engineered per customer.

Reserved capacity at Synterran is a direct relationship, not a pricing tier. Three things come with it that commodity supply doesn’t.

Founder-operator access

The person selling the capacity is the person designing and operating the system. Technical conversations don’t route through sales tiers.

Topology fit

HGX by default, because 288 GB of HBM per GPU fits most inference and fine-tuning workloads within an 8-GPU island. NVL72 available for pretraining-scale workloads that need a coherent 72-GPU NVLink domain.

Architectural roadmap

Reserved customers start on grid power and inherit on-site co-powering and cycle integration as later deployment phases ship, without renegotiating the reservation.

Hardware

Default rack: B300 HGX OCP ORV3.

The first-build configuration is B300 in HGX OCP ORV3 form factor — Supermicro 2-OU liquid-cooled nodes, up to 18 nodes per rack, yielding 72–144 GPUs per rack depending on node density. Eight racks at first build, with room for expansion.

HGX is the default because the 8-GPU NVLink island is where most customer workloads actually sit. At 288 GB of HBM per B300 GPU, even large open-weight reasoning models fit within the island. Memory and throughput tend to bind before topology does.
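
As a rough illustration of the memory claim above, here is a back-of-envelope sketch. The 288 GB per GPU and 8-GPU island figures come from this page; the bytes-per-parameter values, overhead fraction, and model sizes are illustrative assumptions, not Synterran figures.

```python
# Back-of-envelope: do a model's weights fit in one 8-GPU HGX island?
# Assumptions (illustrative): FP8 weights at 1 byte/param, BF16 at 2,
# and a fixed fraction of HBM held back for KV cache and activations.

HBM_PER_GPU_GB = 288      # B300 HBM per GPU (from this page)
GPUS_PER_ISLAND = 8       # HGX NVLink island (from this page)
OVERHEAD_FRACTION = 0.30  # assumed: KV cache, activations, runtime

def fits_in_island(params_billion: float, bytes_per_param: float = 1.0) -> bool:
    """True if the weights fit in one island's usable HBM."""
    island_hbm_gb = HBM_PER_GPU_GB * GPUS_PER_ISLAND     # 2304 GB raw
    usable_gb = island_hbm_gb * (1 - OVERHEAD_FRACTION)  # ~1613 GB usable
    weights_gb = params_billion * bytes_per_param
    return weights_gb <= usable_gb

print(fits_in_island(670))        # ~670B-param model in FP8  -> True
print(fits_in_island(670, 2.0))   # same model in BF16        -> True
print(fits_in_island(1800, 2.0))  # 1.8T-param model in BF16  -> False
```

Under these assumptions, even a ~670B-parameter open-weight model fits comfortably inside a single island in BF16, which is the point the paragraph above is making: memory binds long after topology would.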

Pretraining-scale workloads that need the coherent 72-GPU NVLink domain — frontier model training, certain research configurations — can be built on NVL72. Reach out with your workload; topology gets scoped to the shape of what you’re running.

Configuration

  • B300 generation, NVIDIA
  • 288 GB HBM per GPU
  • 8-GPU HGX islands (NVLink)
  • Up to 18 nodes per rack
  • 72–144 GPUs per rack
  • Liquid-cooled, OCP ORV3
  • NVL72 on request
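
The configuration figures above compose into per-rack and first-build totals. The arithmetic below only combines numbers quoted on this page (288 GB per GPU, 72–144 GPUs per rack, eight racks at first build); nothing else is assumed.

```python
# Per-rack and first-build totals from the configuration list above.

HBM_PER_GPU_GB = 288   # B300 HBM per GPU
RACKS_FIRST_BUILD = 8  # eight-rack first build

def rack_hbm_tb(gpus_per_rack: int) -> float:
    """Total HBM per rack, in decimal terabytes."""
    return gpus_per_rack * HBM_PER_GPU_GB / 1000

for density in (72, 144):  # low and high ends of the quoted range
    print(f"{density} GPUs/rack: {rack_hbm_tb(density):.1f} TB HBM, "
          f"{density * RACKS_FIRST_BUILD} GPUs across the first build")
# 72 GPUs/rack:  20.7 TB HBM,  576 GPUs across the first build
# 144 GPUs/rack: 41.5 TB HBM, 1152 GPUs across the first build
```
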
Roadmap

Built in phases. Customers ride through.

First-build capacity ships in phases. Reserved customers move across phases as each comes online: the reservation persists while the surrounding architecture upgrades.

Phase 1: Grid power, liquid-cooled racks

Reserved capacity comes online on grid power with liquid-cooled loops, solar-backed UPS, and the commodity cooling infrastructure that Phase 2 will integrate with.

Phase 2: On-site co-powering

Allison 250 turbines provide B-side power alongside the grid. Desiccant inlet air conditioning feeds chilled, dried air to the turbines, driven by waste heat from the compute load.

Phase 3: Cycle integration

Full Synterran Cycle integration including constitutive carbon capture. The cycle becomes the primary power source with grid as backup.

Full architecture: Synterran Cycle →

Reserve

Start the conversation.

Reservations start with a direct conversation. A useful first email includes your company, your use case, the rough GPU count and term you have in mind, and any workload specifics that would inform the topology choice.

Inquiries reach me directly.

Reserve compute

Direct-partnership B300 capacity, Q3 2026.

Or email brendan@synterran.systems directly.