Hardware
NVIDIA B300 on HGX OCP ORV3. Liquid-cooled, 8-GPU islands, 72–144 GPUs per rack. NVL72 topology available for pretraining-scale workloads.
Synterran operates AI compute capacity today and is accepting reservations for NVIDIA B300 capacity coming online in Atlanta in Q3 2026. A direct partnership configured to your workload, not self-serve cloud.
12–60 month reservations. 15% reservation deposit. Pricing by conversation, not by list.
Atlanta, Q3 2026. Eight-rack first build with expansion to follow.
Reserved capacity at Synterran is a direct relationship, not a pricing tier. Three things come with it that commodity supply doesn’t.
The person selling the capacity is the person designing and operating the system. Technical conversations don’t route through sales tiers.
HGX by default, because 288 GB of HBM per GPU fits most inference and fine-tuning workloads within an 8-GPU island. NVL72 available for pretraining-scale workloads that need a coherent 72-GPU NVLink domain.
Reserved customers start on grid power and inherit on-site co-powering and cycle integration as later deployment phases ship, without renegotiating the reservation.
The first-build configuration is B300 in HGX OCP ORV3 form factor — Supermicro 2-OU liquid-cooled nodes, up to 18 nodes per rack, yielding 72–144 GPUs per rack depending on node density. Eight racks at first build, with room for expansion.
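The rack arithmetic above can be sanity-checked with a short sketch. The 8-GPUs-per-node figure is an assumption based on the standard HGX node configuration; node counts come from the text:

```python
# Rack capacity check for the HGX OCP ORV3 first build (illustrative sketch).
# Assumes 8 GPUs per HGX node; rack node counts are from the build description.
GPUS_PER_NODE = 8
MAX_NODES_PER_RACK = 18  # 2-OU liquid-cooled Supermicro nodes

def rack_gpus(nodes: int) -> int:
    """GPUs in a rack populated with `nodes` HGX nodes."""
    if not 1 <= nodes <= MAX_NODES_PER_RACK:
        raise ValueError("node count out of range for this rack design")
    return nodes * GPUS_PER_NODE

# Low-density and full-density racks bracket the stated 72-144 range:
print(rack_gpus(9))           # 72
print(rack_gpus(18))          # 144

# Eight-rack first build at full density:
print(8 * rack_gpus(18))      # 1152
```

The 72–144 spread falls out directly: density is set per rack by how many 2-OU nodes are populated, not by changing the node itself.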
HGX is the default because the 8-GPU NVLink island is where most customer workloads actually sit. At 288 GB of HBM per B300 GPU, even large open-weight reasoning models fit within the island. Memory and throughput tend to bind before topology does.
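The "fits within the island" claim reduces to simple capacity arithmetic. A back-of-envelope sketch, assuming roughly 1 byte per parameter at FP8 and counting weights only (KV cache and activations add real overhead, so the margins here are illustrative, not serving guidance):

```python
# Back-of-envelope: does a model's weight footprint fit one 8-GPU HGX island?
# Assumptions (illustrative): 288 GB HBM per B300 GPU, FP8 weights ~1 byte/param.
HBM_PER_GPU_GB = 288
ISLAND_GPUS = 8
ISLAND_HBM_GB = HBM_PER_GPU_GB * ISLAND_GPUS  # 2304 GB aggregate

def weights_fit(params_billions: float, bytes_per_param: float = 1.0) -> bool:
    """True if the weight footprint alone fits in the island's aggregate HBM."""
    weight_gb = params_billions * bytes_per_param  # 1e9 params * 1 B ~= 1 GB
    return weight_gb <= ISLAND_HBM_GB

# A 405B-parameter open-weight model at FP8 needs ~405 GB of weights.
print(weights_fit(405))        # True
# Even ~1T parameters at FP8 sits inside the ~2.3 TB aggregate.
print(weights_fit(1000))       # True
```

This is why memory and throughput, not topology, tend to be the binding constraints for inference and fine-tuning at this scale.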
Pretraining-scale workloads that need the coherent 72-GPU NVLink domain — frontier model training, certain research configurations — can be built on NVL72. Reach out with your workload; topology gets scoped to the shape of what you’re running.
First-build capacity ships in phases. Reserved customers transition across phases as they come online — the reservation persists, the surrounding architecture upgrades.
Reserved capacity comes online on grid power with liquid-cooled loops, solar-backed UPS, and the commodity cooling infrastructure that Phase 2 will integrate with.
Allison 250 turbines provide B-side power alongside the grid. Desiccant inlet air conditioning, driven by waste heat from the compute load, feeds chilled, dried air to the turbines.
Full Synterran Cycle integration, including constitutive carbon capture. The cycle becomes the primary power source, with the grid as backup.
Full architecture: Synterran Cycle →
Reservations start with a direct conversation. A useful first email includes your company, use case, rough GPU count and term you’re thinking about, and any workload specifics that would inform the topology choice.
Inquiries reach me directly.
Direct-partnership B300 capacity, Q3 2026.
Or email brendan@synterran.systems directly.