Inside the Starlink Dish: How SpaceX Commoditized the Phased Array.


For decades, Active Electronically Scanned Arrays (AESAs) were military hardware. If you wanted a flat antenna that could steer a microwave beam without moving parts, you wrote a check to Raytheon or Northrop Grumman for something in the low millions. These systems flew on the F-22, tracked ICBMs from Aegis destroyers, and remained firmly outside the consumer price bracket.

Then SpaceX shipped one for $599.

The Starlink User Terminal (affectionately called “Dishy McFlatface” by early adopters) is one of the most complex pieces of consumer RF hardware ever mass-produced. It tracks satellites moving at 7 km/s across the sky, maintains a data link at 14 GHz, and handles the handoff between satellites every few minutes. It does this while sitting in a backyard, covered in snow, drawing about 50 watts.

This post is a technical exploration of how it works: the antenna physics, the beamforming architecture, the custom silicon, and the manufacturing tricks that took a military-grade system and turned it into something Amazon delivers in a cardboard box.

ℹ️ NOTE A note on sources: This post synthesizes publicly available information: FCC filings, teardowns by others, manufacturer press releases, and first-principles RF analysis. I haven't personally disassembled a Starlink terminal, and SpaceX doesn't publish detailed specifications. The technical details represent my best understanding of the available evidence, not insider knowledge. See References for source material. I am not an RF engineer or expert, but I've had a long-standing interest in the field. I designed the geo-satellite-to-ground receiver link layer and spec'd the RF hardware and components for the tx/rx sides of Othernet, a global data-casting service, and I implemented all of its software and firmware. That's my only experience working with RF in anger. So: there could be material here that's not entirely correct. Please let me know if you spot an error!
The Starlink dish is a ~1,200-element phased array achieving ~33-34 dBi antenna gain, comparable to a 60 cm satellite TV dish but with electronic beam steering that can track satellites and execute handoffs in microseconds. SpaceX achieved consumer pricing through aggressive silicon integration (reducing beamformer chips from ~80 to ~6 between Gen 1 and Gen 3), hybrid PCB materials (mixing expensive RF laminates with cheap FR-4), and software-defined calibration. Manufacturing cost is estimated at ~$400.

The System: Satellites and Dishes

Before diving into the dish, it helps to understand the full signal path. The dish’s design is constrained by what’s happening 550 km overhead.

The Constellation

Starlink satellites orbit in Low Earth Orbit (LEO) at approximately 550 km altitude. Unlike geostationary satellites parked at 36,000 km (where they appear stationary relative to Earth), LEO satellites move across the sky in about 4-6 minutes from horizon to horizon. This creates two fundamental challenges:

  1. Tracking: The antenna must continuously steer its beam to follow the satellite
  2. Handoffs: As one satellite drops below the horizon, the dish must acquire the next one

The payoff is latency. A round-trip to geostationary orbit takes ~500-700ms in practice. The theoretical minimum is ~240ms for 72,000 km at light speed, but satellite processing, ground routing, and non-vertical paths add more. A round-trip to Starlink is ~25-50ms, comparable to terrestrial fiber.

ℹ️ NOTE

Why not just use a simple dish? Traditional geostationary satellite TV uses a fixed parabolic dish, a shaped piece of metal plus tuner that costs $5-10. It works because the satellite never moves relative to Earth. Point it once, forget it forever.

But GEO has fundamental limits beyond latency. A satellite at 36,000 km covers roughly a third of Earth’s surface, so everyone in that footprint shares the same bandwidth. Path loss scales with distance squared, so the signal is roughly \(1000\text{-}4000 \times\) weaker than from 550 km (depending on elevation angles), requiring more transmit power or bigger antennas. And the satellite has finite power from its solar panels, spread thin across millions of users. GEO works for broadcast TV (everyone gets the same signal), but struggles for internet (everyone wants different data).

LEO solves this with geometry: smaller coverage cells mean fewer users sharing each beam, shorter distance means less path loss, and thousands of satellites mean massive aggregate capacity. But it requires tracking. A parabolic dish has a narrow beam, and a LEO satellite would pass through in seconds. Motorized dishes can’t slew fast enough for handoffs every 15-90 seconds. The phased array (steering electronically in microseconds, no moving parts) is the only viable solution. It just requires solving the integration challenges that made AESAs cost millions.

The Frequency Plan

Starlink uses two frequency bands:

| Link | Band | Frequency | Direction |
| --- | --- | --- | --- |
| User Downlink | Ku-band | 10.7 – 12.7 GHz | Satellite → Dish |
| User Uplink | Ku-band | 14.0 – 14.5 GHz | Dish → Satellite |
| Gateway Feeder | Ka-band | 17.8 – 19.3 / 27.5 – 30.0 GHz | Ground Station ↔ Satellite |

Key RF parameters from FCC filings:

| Parameter | Gen 3 Terminal |
| --- | --- |
| Max EIRP | 33.7 – 38.1 dBW (angle-dependent) |
| Peak Antenna Gain | ~33-34 dBi |
| G/T (gain-to-noise-temperature ratio) | ~10.1 dB/K |
| EIRP Density Limit | -14 dBW/Hz (to protect GEO arc) |
| Polarization | Circular (RHCP Rx, LHCP Tx) |

EIRP (Effective Isotropic Radiated Power) varies with beam angle: maximum at boresight, reduced at scan edges to meet ITU coordination requirements.

The user terminal operates entirely in Ku-band. The satellites relay traffic to ground stations via Ka-band (higher frequency, more bandwidth, but requires larger ground stations with tracking dishes). Newer satellites also carry laser inter-satellite links, allowing traffic to hop between satellites without touching the ground, useful for oceanic coverage.

The user terminal is a point-to-point microwave link to a moving target, operating at frequencies where rain fade, atmospheric absorption, and pointing errors all matter.


The Physics: Why This Is Hard

Understanding the engineering constraints requires starting with the fundamental physics of phased arrays.

How Phased Arrays Steer

A phased array steers its beam by manipulating the phase of the signal at each antenna element. When all elements transmit in phase, their signals add constructively directly ahead (broadside). By introducing a progressive phase delay across the array, the point of constructive interference shifts off-axis.

The beam direction \(\theta\) (where \(\theta = 0°\) is straight up, perpendicular to the array face) relates to the phase gradient by:

$$ \sin(\theta) = \frac{\Delta\phi \cdot \lambda}{2\pi \cdot d} $$

Where \(\Delta\phi\) is the phase difference between adjacent elements, \(\lambda\) is the wavelength, and \(d\) is the element spacing.

No moving parts. No motors. The beam sweeps electronically in microseconds. (This also eliminates mechanical wear, simplifies manufacturing, and improves reliability, though the RF advantages are what make it necessary for this application.)
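The steering equation is easy to make concrete in code. Here's a small sketch that solves it for the required phase step per element; the 12 GHz frequency, 10 mm spacing, and 30° scan angle are illustrative values I've chosen, not SpaceX's:

```python
import math

def element_phase_gradient(theta_deg, freq_hz, spacing_m):
    """Phase step (radians) between adjacent elements needed to steer
    the beam theta_deg off broadside. Rearranged from the relation
    sin(theta) = delta_phi * lambda / (2*pi*d)."""
    lam = 3e8 / freq_hz  # wavelength in meters
    return 2 * math.pi * spacing_m * math.sin(math.radians(theta_deg)) / lam

# Steer a 12 GHz beam 30 degrees off boresight with 10 mm element spacing:
dphi = element_phase_gradient(30, 12e9, 0.010)
print(f"{math.degrees(dphi):.0f} deg per element")  # 72 deg per element
```

At broadside the gradient is zero (all elements in phase); steeper scan angles demand proportionally larger phase steps.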

How fast is fast enough? The satellite moves across the sky at roughly 0.5-1°/s (depending on elevation). With a beam width of ~1.5-2° (a consequence of the dish’s high gain, more on this below), there are a couple seconds before the satellite drifts out of the beam. A mechanical gimbal could theoretically keep up for tracking alone.

But tracking isn’t the hard part. Handoffs are. When switching satellites, the dish must:

  1. Acquire the new satellite’s position
  2. Repoint the beam (potentially 30-60° away)
  3. Synchronize with the new satellite’s timing
  4. Complete the switch before the old satellite’s signal degrades

This entire sequence happens in tens of milliseconds. A mechanical dish slewing 60° takes seconds. A phased array does it in microseconds. The limit is how fast you can update the phase shifter registers, not physical inertia.

There’s another reason: multi-user scheduling. The satellite serves thousands of users simultaneously using TDMA (time-division multiple access) within the frequency bands. Each dish gets assigned specific time slots (potentially just milliseconds long) to transmit or receive. The beam pointing may need slight adjustments between slots as the satellite moves, and these adjustments must complete within the guard interval between slots (microseconds). Mechanical systems can’t operate on these timescales.

Antenna Gain

The most important performance metric for the Starlink dish is antenna gain: ~33-34 dBi.

Gain measures how effectively an antenna concentrates energy in a particular direction compared to an isotropic radiator (a theoretical antenna that radiates equally in all directions). A 33 dBi gain means the dish focuses energy about \(2000 \times\) more intensely in its beam direction than an isotropic antenna would.

Why does this matter so much? The answer lies in the link budget: the accounting of all gains and losses between transmitter and receiver.

The dish is communicating with a satellite at ~550 km altitude, but rarely directly overhead. At typical elevation angles (30-50°) the slant range is 700-1,000 km, stretching toward ~1,400 km at low elevations. The signal spreads out as it travels. This is free-space path loss. At 12 GHz it is ~172 dB at 800 km; budgeting for a conservative ~1,400 km slant range:

$$ L_{path} = 20\log_{10}\left(\frac{4\pi d}{\lambda}\right) \approx 177 \text{ dB} $$

ℹ️ NOTE A quick dB intuition: Decibels are logarithmic ratios. Every 3 dB is a \(2 \times\) change in power; every 10 dB is \(10 \times\). So 177 dB means the signal is attenuated by a factor of \(10^{17.7}\), roughly 500 quadrillion times weaker by the time it arrives. Working in dB lets us add and subtract these enormous ratios instead of multiplying and dividing them.
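Here is the path-loss formula in code, evaluated over a range of slant distances (the specific distances are my illustrative picks). Note how the ~177 dB figure corresponds to the long, low-elevation end of the range:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    lam = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / lam)

# Slant range grows as the satellite drops toward the horizon:
for d_km in (550, 800, 1000, 1400):
    print(f"{d_km:>5} km: {fspl_db(d_km * 1e3, 12e9):.1f} dB")
```

Each doubling of distance adds 6 dB, which is why LEO's short hop is worth so much compared to GEO's 36,000 km.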

The link budget equation shows how this gap is closed:

$$ P_{received} = P_{transmitted} + G_{tx} + G_{rx} - L_{path} - L_{other} $$

Every term matters:

  • Transmit power \(P_{tx}\): Limited by satellite solar panels and regulations
  • Transmit antenna gain \(G_{tx}\): The satellite’s phased array, ~30-35 dBi
  • Receive antenna gain \(G_{rx}\): The dish’s ~33 dBi
  • Path loss \(L_{path}\): ~172-177 dB, depending on slant range
  • Other losses \(L_{other}\): Atmospheric absorption, rain fade, pointing errors (~2-5 dB)

What the Numbers Actually Mean

Downlink (satellite to dish) at 12 GHz:

| Step | Parameter | Value | Running Total |
| --- | --- | --- | --- |
| 1 | Satellite transmit power | +17 dBW (50 W) | +17 dBW |
| 2 | Satellite antenna gain | +33 dBi | +50 dBW (EIRP) |
| 3 | Free-space path loss (~1,400 km slant) | -177 dB | -127 dBW |
| 4 | Atmospheric loss | -3 dB | -130 dBW |
| 5 | Dish antenna gain | +33 dBi | -97 dBW |
| 6 | Noise floor (240 MHz) | -120 dBW | |
| 7 | SNR at LNA input | \(\sim\)23 dB (\(\sim 200 \times\)) | |

The signal at step 4, before the dish’s gain, is around \(10^{-13}\) watts, comparable to the optical power a faint star delivers to your eye. The dish collects and concentrates this, the LNA amplifies it, and the result is ~23 dB above the thermal noise floor, enough for high-order modulation.
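The running-total column is just addition in dB. A minimal sketch of the downlink budget, using the article's estimated values from the table above (none of these are published specs):

```python
# Downlink budget at 12 GHz: gains are positive, losses negative, all in dB.
steps = [
    ("satellite TX power", +17.0),     # dBW (~50 W)
    ("satellite antenna gain", +33.0), # dBi -> EIRP of +50 dBW
    ("free-space path loss", -177.0),  # long-slant-range estimate
    ("atmospheric loss", -3.0),
    ("dish antenna gain", +33.0),
]
received_dbw = sum(value for _, value in steps)

# Thermal noise floor kTB over ~240 MHz is roughly -120 dBW.
noise_floor_dbw = -120.0
snr_db = received_dbw - noise_floor_dbw
print(f"received {received_dbw:.0f} dBW, SNR {snr_db:.0f} dB")
```

Swapping any single line item immediately shows its leverage: 3 dB more rain fade costs 3 dB of SNR, full stop.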

Uplink (dish to satellite) at 14 GHz:

| Step | Parameter | Value | Running Total |
| --- | --- | --- | --- |
| 1 | Dish transmit power | 0 dBW (1 W) | 0 dBW |
| 2 | Dish antenna gain | +33 dBi | +33 dBW (EIRP) |
| 3 | Free-space path loss (~1,400 km slant, 14 GHz) | -178 dB | -145 dBW |
| 4 | Atmospheric loss | -3 dB | -148 dBW |
| 5 | Satellite antenna gain | +33 dBi | -115 dBW |
| 6 | Noise floor (est. 100 MHz) | -124 dBW | |
| 7 | SNR at satellite LNA | \(\sim\)9 dB (\(\sim 8 \times\)) | |

The uplink is tighter. The dish can only transmit \(\sim\)1W (regulatory and power limits), so the satellite receives a weaker signal. These SNRs translate to \(\sim\)7.7 bits/Hz down vs \(\sim\)3.2 bits/Hz up (see Shannon’s theorem below). At 240 MHz downlink and 100 MHz uplink bandwidth: \(7.7 \times 240 \approx\) 1.8 Gbps theoretical down, \(3.2 \times 100 \approx\) 320 Mbps theoretical up—matching the asymmetric speeds users observe after real-world losses.

For comparison, a GEO satellite at 36,000 km has ~30 dB more path loss each way. To close that link, GEO internet services need larger dishes (0.7-1.2m), higher satellite EIRP (50-55 dBW), or accept slower speeds. Starlink’s geometry advantage is what allows a compact flat panel to achieve usable bidirectional throughput.

SNR, Bandwidth, and Throughput

Shannon’s theorem defines the fundamental limit:

$$ C = B \times \log_2(1 + SNR) $$

Where C is channel capacity in bits/second, B is bandwidth in Hz, and SNR is the signal-to-noise ratio (linear). For Starlink’s ~240 MHz of usable downlink bandwidth:

| SNR | Capacity | Practical Throughput |
| --- | --- | --- |
| 10 dB (\(10 \times\)) | \(\sim\)830 Mbps | \(\sim\)200-300 Mbps |
| 15 dB (\(32 \times\)) | \(\sim\)1.2 Gbps | \(\sim\)300-400 Mbps |
| 20 dB (\(100 \times\)) | \(\sim\)1.6 Gbps | \(\sim\)400-500 Mbps |

Real systems achieve 50-70% of Shannon capacity due to coding overhead, protocol losses, and practical modulation limits. The dish’s job is to deliver enough SNR, and that requires gain.
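The capacity column above falls out of a one-line function (the ~240 MHz bandwidth is the article's estimate):

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Shannon limit C = B * log2(1 + SNR), with SNR converted from dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

for snr_db in (10, 15, 20):
    print(f"{snr_db} dB -> {shannon_capacity_mbps(240e6, snr_db):,.0f} Mbps")
```

Deployed throughput sits well below these figures once coding overhead, protocol losses, and scheduling are accounted for.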

Without the dish’s 33 dBi, the system would need to compensate elsewhere:

  • \(2000 \times\) more transmit power from the satellite (solar panels can’t deliver it)
  • \(2000 \times\) larger receive antenna aperture (a \(\sim\)25 meter dish)
  • Accept 33 dB less SNR (dropping from 300 Mbps to dial-up speeds)

The antenna gain closes the link budget.

How Phased Arrays Achieve Gain

A parabolic dish achieves gain through geometry: a curved reflector focuses incoming waves onto a small feed horn, like a mirror concentrating sunlight. The gain scales with the dish’s physical area.

A phased array achieves gain differently: through coherent combining. When N antenna elements receive a signal and their outputs are combined in-phase, the signal amplitudes add directly. Noise, being random and uncorrelated between elements, adds in quadrature (as \(\sqrt{\sum x_i^2}\) rather than \(\sum x_i\)), so it grows slower than the signal:

$$ G_{array} \approx N \times G_{element} \times \eta_{aperture} $$

Where \(\eta_{aperture}\) is the aperture efficiency (typically 60-70%, accounting for edge effects and element coupling). With ~1,200 elements and ~6 dBi per patch element:

$$ G_{total} \approx 10\log_{10}(1200) + 6\text{ dBi} + 10\log_{10}(0.65) \approx 30.8 + 6 - 1.9 \approx 35 \text{ dBi} $$

| Elements (N) | \(10\log_{10}(N)\) | Total Gain (dBi) |
| --- | --- | --- |
| 100 | 20 dB | \(\sim\)24 |
| 400 | 26 dB | \(\sim\)30 |
| 800 | 29 dB | \(\sim\)33 |
| 1,200 | 30.8 dB | \(\sim\)35 |
| 2,400 | 33.8 dB | \(\sim\)38 |

Assumes 6 dBi element gain and 65% aperture efficiency (\(-1.9\) dB).

A phased array doesn’t need a single large reflector. It synthesizes an equivalent aperture from many small elements working in concert. The gain emerges from coherence, not curvature.
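The gain table above is straightforward to reproduce; the 6 dBi element gain and 65% aperture efficiency are the same assumptions stated under the equation:

```python
import math

def array_gain_dbi(n_elements, element_gain_dbi=6.0, efficiency=0.65):
    """Array gain: 10*log10(N) + element gain + 10*log10(efficiency)."""
    return (10 * math.log10(n_elements)
            + element_gain_dbi
            + 10 * math.log10(efficiency))

for n in (100, 400, 800, 1200, 2400):
    print(f"{n:>5} elements: {array_gain_dbi(n):.1f} dBi")
```

The logarithm makes the economics visible: every doubling of element count buys only 3 dB.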

Why ~1,200 Elements?

The Starlink dish uses roughly 1,200 antenna elements. Several constraints shape this choice.

The lower bound comes from the link budget. As shown above, the dish needs ~33 dBi of gain to close the link with adequate SNR. Working backward from the gain equation:

$$ N \approx \frac{G_{total}}{G_{element} \times \eta} \approx \frac{2000}{4 \times 0.65} \approx 770 \text{ elements minimum} $$

In practice, you need margin for rain fade, pointing errors, and scan loss (gain drops ~3 dB at 60° scan angles), pushing the practical minimum toward 1,000 elements. The Mini, with ~600-800 elements and ~30-31 dBi gain, operates closer to this edge and consequently sees reduced throughput in marginal conditions.
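Working that equation backward in code, with the same 6 dBi element gain and 65% efficiency assumptions used throughout:

```python
import math

def min_elements(target_gain_dbi, element_gain_dbi=6.0, efficiency=0.65):
    """Invert the array gain equation to find the minimum element count."""
    g_target = 10 ** (target_gain_dbi / 10)    # ~2000x linear for 33 dBi
    g_element = 10 ** (element_gain_dbi / 10)  # ~4x linear for 6 dBi
    return math.ceil(g_target / (g_element * efficiency))

print(min_elements(33))  # 772
```

Add a few dB of margin for rain fade and scan loss and the practical floor lands near 1,000 elements, as the text describes.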

The upper bound comes from diminishing returns. Why not 5,000 elements for 40 dBi of gain? Several factors push back:

Cost scales roughly linearly with element count. Each element needs a patch antenna, feed structure, and share of an FEM. At ~$0.25-0.50 per element in volume (including silicon, PCB area, and assembly), doubling elements adds $300-600 to the BOM.

Power also scales linearly. Each element’s PA draws power. The current ~50W average would become ~100W+ at 2,400 elements. This matters for the thermal budget, wall power requirements, and off-grid use cases.

Gain itself has diminishing utility. Going from 33 to 36 dBi (doubling elements) provides 3 dB more SNR, which is \(2 \times\) in linear terms. But Shannon’s capacity scales as \(\log_2(1 + SNR)\). At high SNR, the logarithm compresses gains: going from \(SNR = 200\) to \(SNR = 400\) only increases \(\log_2\) from 7.6 to 8.6, about 13% more bits/Hz. Doubling SNR doesn’t double throughput—it adds roughly one bit per symbol.

The beam also becomes harder to use. At 40 dBi, beamwidth shrinks to ~0.7°. The tracking system must now maintain pointing within ~0.2° while the satellite moves and the dish sways in wind. This demands faster control loops and better mechanical stability, adding cost and complexity.

Physical size increases in lockstep. At \(\lambda/2\) spacing (~10mm), 2,400 elements requires ~0.5 m\(^2\) of aperture. The dish becomes heavier (thermal mass, aluminum backplate, mounting hardware) and harder to install. A 60 cm dish in high winds exerts significant torque on its mount.

The \(\lambda/2\) spacing constraint is what locks element count to aperture size. You can’t pack elements tighter than ~10mm at 14 GHz without grating lobes. So “more elements” always means “bigger dish,” not just “denser array.”

Where does ~1,200 land in this design space?

| Elements | Gain | Aperture | Tradeoffs |
| --- | --- | --- | --- |
| \(\sim\)600 | \(\sim\)30 dBi | \(\sim\)0.10 m\(^2\) | Marginal link, lower throughput (Mini) |
| \(\sim\)1,200 | \(\sim\)33 dBi | \(\sim\)0.25 m\(^2\) | Comfortable margin, \(\sim\)300 Mbps capable |
| \(\sim\)2,400 | \(\sim\)36 dBi | \(\sim\)0.50 m\(^2\) | Diminishing returns, \(2 \times\) cost/power |
| \(\sim\)4,800 | \(\sim\)39 dBi | \(\sim\)1.0 m\(^2\) | Impractical size/cost for consumer |

The ~1,200 element count is where the link budget is satisfied with margin, the dish remains consumer-installable, power stays reasonable for residential circuits, and cost (at scale) reaches a viable price point. Going smaller risks the link; going larger buys little and costs much.

Comparison: How Different Systems Solve the Gain Problem

| System | Antenna Type | Gain | Aperture | Why It Works |
| --- | --- | --- | --- | --- |
| Starlink | Phased array | ~33-34 dBi | 0.25 m\(^2\) | Electronic steering, no moving parts |
| DirecTV/Dish | Parabolic | 33-37 dBi | 0.25-0.45 m\(^2\) | Fixed pointing at GEO, never moves |
| Sirius XM | Small patch | ~2-4 dBi | ~0.01 m\(^2\) | Satellites blast 20 kW; low data rate (128 kbps) |
| GPS | Patch | ~3-5 dBi | ~0.005 m\(^2\) | Spread spectrum; only needs 50 bps navigation data |
| Iridium phone | Stub antenna | ~1-2 dBi | ~0.001 m\(^2\) | Voice only; satellites have high-gain spot beams |
| Maritime VSAT | Parabolic, motorized | 38-42 dBi | 0.5-1.2 m\(^2\) | Mechanical tracking OK when ship moves slowly |
High data rates demand high gain. GPS and satellite radio work with tiny antennas because they transfer kilobits, not gigabits. Satellite TV dishes have comparable gain to Starlink, but they point at stationary GEO satellites, so no tracking is needed.

Starlink’s achievement is satellite-TV-class gain in a form factor that can steer electronically. That combination didn’t exist at consumer prices before.

The Gain-Beamwidth Tradeoff

High gain comes with a constraint: narrower beams. The relationship is fundamental:

$$ \theta_{beamwidth} \approx \frac{70°}{\sqrt{G_{linear}}} $$

At 33 dBi (\(G \approx 2000\) in linear terms, since \(G_{linear} = 10^{dBi/10}\)):

$$ \theta_{beamwidth} \approx \frac{70°}{\sqrt{2000}} \approx 1.6° $$

A 1.6° beam pointed at a satellite 550 km away illuminates a spot only ~15 km across at orbital altitude. This is desirable: concentrated energy means efficient power use and minimal interference with adjacent satellites. But it’s demanding: the beam must track the satellite within a fraction of a degree as it crosses the sky.

This is why electronic steering matters. A 1.6° beam tracking a satellite moving at 0.5-1°/s needs continuous adjustment. Add handoffs requiring 30-60° slews in milliseconds, and mechanical systems simply can’t keep up. The narrow beam that high gain requires is only practical because phased arrays can steer fast enough to use it.
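Both the ~1.6° beamwidth and the ~15 km spot follow directly from the gain. A quick check using the approximation above (550 km is the overhead case; at slant the spot grows):

```python
import math

def beamwidth_deg(gain_dbi):
    """Approximate -3 dB beamwidth: ~70 deg / sqrt(linear gain)."""
    return 70 / math.sqrt(10 ** (gain_dbi / 10))

bw = beamwidth_deg(33)
# Diameter of the spot the beam paints at the satellite's altitude:
spot_km = 2 * 550 * math.tan(math.radians(bw / 2))
print(f"beamwidth {bw:.1f} deg, spot ~{spot_km:.0f} km")
```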

The Grating Lobe Problem

Element spacing must satisfy:

$$ d < \frac{\lambda}{2} $$

At the uplink frequency of 14.5 GHz, the wavelength \(\lambda \approx 20.7\) mm, so elements must be spaced less than 10.3 mm apart.

Why? If spacing exceeds \(\lambda/2\), the array produces grating lobes, secondary beams that radiate energy in unintended directions. For a communications system, this means interference with other satellites. For a radar, it means false targets. For Starlink specifically, the ITU (International Telecommunication Union) imposes strict limits on off-axis emissions to protect geostationary satellites sharing the same frequency bands.
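The grating lobe condition can be checked numerically. For a uniform array steered to angle \(\theta_0\), the first grating lobe appears where \(\sin\theta_g = \sin\theta_0 - \lambda/d\), and it only radiates if that value stays within \(\pm 1\). This is the standard array relation, not anything Starlink-specific:

```python
import math

def grating_lobe_deg(scan_deg, spacing_over_lambda):
    """Angle of the first grating lobe, or None if it falls outside
    visible space: sin(theta_g) = sin(theta_scan) - lambda/d."""
    s = math.sin(math.radians(scan_deg)) - 1 / spacing_over_lambda
    return math.degrees(math.asin(s)) if abs(s) <= 1 else None

# At half-wavelength spacing, no grating lobe even at a 60 deg scan:
print(grating_lobe_deg(60, 0.5))   # None
# Widen spacing to 0.7*lambda and a 45 deg scan sprouts one:
print(grating_lobe_deg(45, 0.7))   # about -46 deg
```

That spurious beam at -46° is exactly the kind of off-axis emission the ITU masks prohibit, which is why the spacing constraint is non-negotiable.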

The engineering challenge: in a \(10 \text{mm} \times 10 \text{mm}\) footprint, you must fit:

  • The antenna element itself
  • A power amplifier (transmit)
  • A low-noise amplifier (receive)
  • Phase shifters
  • Control logic
  • Thermal management

This is why phased arrays were historically expensive. The integration density required custom everything.


The Antenna: Stacked Patches and Aperture Coupling

SpaceX uses stacked patch antennas, a robust topology optimized for bandwidth and manufacturing yield.

Why Bandwidth Matters

A basic microstrip patch antenna (a flat copper square on a PCB) has narrow bandwidth, typically 3-5%. But Starlink’s downlink spans 10.7-12.7 GHz, a 2 GHz range, or about 17% fractional bandwidth. A single patch can’t cover this.

The solution is stacking: two patches separated by a dielectric spacer. The lower (driven) patch resonates at one frequency; the upper (parasitic) patch resonates slightly higher. Their impedance responses combine, creating a wider “double-tuned” bandwidth that covers the full range.
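A quick sanity check on the bandwidth requirement, using the band edges from the frequency plan:

```python
# Fractional bandwidth = band span / band center.
lo, hi = 10.7e9, 12.7e9  # Ku-band downlink edges, Hz
fractional_bw = (hi - lo) / ((hi + lo) / 2)
print(f"{fractional_bw:.1%}")  # 17.1%
```

That 17% is roughly 4-5x what a single patch delivers, hence the stacked double-tuned design.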

Aperture Coupling

In a dense array, how you feed the antenna matters. The obvious approach, a via (vertical wire) from the PCB to the patch, creates problems at scale:

  • Parasitic inductance: The via adds unwanted inductance that shifts the resonant frequency
  • Manufacturing yield: 1,200+ solder joints per board multiplies failure probability
  • Noise coupling: A direct DC path lets digital noise leak into the RF chain

Starlink uses aperture coupling instead. There’s no physical connection between the feed line and the antenna:

The layer stack, from the radome down:

  • Radome: weatherproof plastic
  • Parasitic Patch: resonates ~12.2 GHz
  • Foam/Air Spacer: ~1-2 mm low-loss dielectric
  • Driven Patch: resonates ~11.0 GHz
  • Ground Plane: H-slot aperture
  • RF Substrate (Rogers): microstrip feed
  • Inner Layers (FR-4): power, control, GND

Energy couples magnetically through the ground plane slot. No physical connection between feed and antenna.

The RF signal travels along a microstrip line beneath the ground plane. A slot (typically H-shaped or “dogbone”) cut into the ground plane allows the electromagnetic field to couple through, exciting the patch above. No solder joint, no via, no DC path for noise.

Circular Polarization

Satellite links use circular polarization (CP) rather than linear. With linear polarization, any rotation of the antenna (or Faraday rotation in the ionosphere) causes signal loss. Circular polarization is immune to rotation; the field vector corkscrews as it propagates.

Starlink uses Right-Hand Circular Polarization (RHCP) for receive and Left-Hand Circular Polarization (LHCP) for transmit (or vice versa). Using orthogonal polarizations for the two directions provides isolation and reduces self-interference at the antenna.

Generating CP from a patch requires exciting two orthogonal modes with a 90° phase shift. High-end arrays do this with quadrature hybrid couplers (small circuits that split a signal into two outputs 90° apart), but these consume PCB space and add insertion loss (signal attenuation from passing through the component).

For the wideband operation Starlink requires (~17% fractional bandwidth), dual-probe feeding is the likely approach: two feed points excite orthogonal modes of the patch with a 90° phase offset built into the feed network. Teardown photos show evidence of dual aperture slots per element, consistent with this approach. This achieves good axial ratio across the full band without external hybrids, saving board space and ~0.5 dB of insertion loss.

With the antenna physics established, the next question is how SpaceX evolved the electronics controlling these elements.


The Architecture Evolution: Gen 1 to Gen 3 to Mini

The major changes between Starlink generations aren’t in the antenna. They’re in the beamforming architecture. This evolution represents an aggressive cost-reduction program without obvious precedent in consumer RF hardware.

Gen 1: Distributed Element-Level Control (2020)

The original “round Dishy” used a highly distributed control architecture:

| Aspect | Gen 1 Specification |
| --- | --- |
| Form Factor | Circular, 23" diameter |
| Beamformer ASICs | ~80 chips |
| Architecture | Distributed analog with fine-grained control |
| Elements per chip | ~15-20 |
| Actuators | Two motors for mechanical tilt |
| Manufacturing Cost | ~$3,000 (estimated) |
| Sale Price | $499 |

Each beamformer chip controlled a small cluster of ~15-20 elements. The phase shifting itself was analog (vector modulators applying phase/amplitude adjustments to the RF signal), but with ~80 chips, the system had fine-grained control over small groups of elements. This enabled precise beam shaping, null steering, and per-element calibration.

Gen 1 also included motors for mechanical tilting. Phased arrays lose gain at extreme scan angles (roughly 3 dB at \(\pm 60°\)), so the motors provided coarse pointing to orient the array face toward the active region of sky, while electronic steering handled fine tracking within that cone. Later generations eliminated the motors as the constellation grew denser (satellites always available near zenith) and users could install the dish at a fixed angle.

The problem was cost at every level:

  • Silicon: 80 complex ASICs per unit
  • PCB routing: Each chip needed multiple high-speed digital lines to the baseband processor
  • Testing: Every chip required individual validation
  • Yield: More chips = more failure points

SpaceX was reportedly losing $2,000+ on every Gen 1 unit sold. The architecture worked technically but couldn’t scale economically.

Gen 2 and Gen 3: Consolidation (2021+)

SpaceX iterated through several rectangular designs:

  • Gen 2 (2021): Rectangular form factor, still had motors for mechanical tilt, reduced chip count
  • Gen 3 (2022+): Rectangular with fixed kickstand mount, no motors, further chip consolidation

The “Gen 3” terminal (internally “Rev 4”) represents the mature architecture:

| Aspect | Gen 1 | Gen 3 | Change |
| --- | --- | --- | --- |
| Form Factor | Circular 23" | Rectangular 12"\(\times\)19" | Easier PCB manufacturing |
| Beamformer ASICs | \(\sim\)80 | \(\sim\)6 | \(13 \times\) reduction |
| FEM chips | Integrated | ~300 separate | Moved to edge |
| Architecture | Distributed fine-grained | Hierarchical coarse-grained | Simplified routing |
| Actuators | Two motors | None (kickstand) | Mechanical elimination |
| Manufacturing Cost | \(\sim\)$2,400-3,000 | \(\sim\)$300-400 | \(\sim 8 \times\) reduction |
| PCB Layers | ~20+ | ~12-14 | Simpler stackup |

The approach: consolidate control into fewer chips and move the analog front-end to the array edge.

Gen 1 distributed control across 80 chips, requiring massive interconnect. Gen 3 concentrates control in 6 “Shiraz” ASICs, then uses simple, daisy-chainable “Pulsar” FEMs for the final analog phase shifting and amplification.

How do 6 chips control ~1,200 elements? The architecture splits the work hierarchically:

The Beamformer ASIC (referred to as “Shiraz” in teardown communities, an unofficial name likely derived from PCB silkscreen markings) acts as a sector controller. Each chip handles ~200 elements. It performs:

  • Up/down frequency conversion
  • Coarse beam steering via digital delay
  • Distribution of phase/amplitude commands to subordinate chips

The Front-End Module (called “Pulsar” in teardowns) is a small chip placed millimeters from the antenna elements. Each handles 2-4 elements and contains:

  • LNA (Low-Noise Amplifier) for receive
  • PA (Power Amplifier) for transmit
  • T/R switch to alternate between modes
  • Analog phase shifter for fine beam steering

The FEMs use daisy-chain control, similar to LED string lights where one wire carries data to all the bulbs rather than running a separate wire to each. Beam commands flow down the chain: Chip 1 → Chip 2 → … → Chip N. A global latch signal (a simultaneous trigger pulse) then applies all updates at once.

This topology slashes PCB complexity. Instead of hundreds of parallel control traces, you need one serial bus per chain. The routing engineers can focus on optimizing RF trace impedance rather than playing Tetris with control signals.
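Here's a toy model of the shift-and-latch pattern. This is my illustrative sketch of how such a daisy-chained bus typically works (shadow registers clocked in series, then a global latch); SpaceX's actual FEM protocol is unpublished:

```python
class FEM:
    """One front-end module on the chain: a shadow (shift) register
    feeding an active register that drives the phase shifter."""
    def __init__(self):
        self.shadow = 0
        self.active = 0

    def shift_in(self, word):
        # Take the new word, pass the previously held word downstream.
        out, self.shadow = self.shadow, word
        return out

    def latch(self):
        self.active = self.shadow

def update_chain(chain, words):
    """Clock one phase word per chip through the chain, then latch."""
    for word in reversed(words):   # the last chip's word enters first
        for fem in chain:
            word = fem.shift_in(word)
    for fem in chain:
        fem.latch()                # global latch: the beam moves atomically

chain = [FEM() for _ in range(4)]
update_chain(chain, [10, 20, 30, 40])
print([f.active for f in chain])  # [10, 20, 30, 40]
```

The key property: no matter how long the shift takes, the beam never passes through a half-updated state, because nothing changes until the latch pulse.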

What the Hierarchy Costs

The shift from Gen 1’s distributed architecture to Gen 3’s hierarchical approach wasn’t free. SpaceX traded RF performance for manufacturability.

In Gen 1, each of the ~80 beamformer chips had fine-grained control over its ~15-20 elements. Phase and amplitude could be set with high resolution (likely 8+ bits, or 256+ discrete levels per element). This enabled precise beam shaping: deep nulls could be placed in specific directions to reject interference, sidelobes could be minimized through careful amplitude tapering, and per-element calibration could correct for manufacturing variations.

Gen 3’s hierarchy introduces quantization at multiple levels. The Pulsar FEMs use analog phase shifters with coarser resolution, likely 5-6 bits (32-64 discrete phase states). This is adequate for main beam steering but limits fine control. When you can only set phase in ~6° increments rather than ~1.4° increments, the achievable sidelobe suppression and null depth both degrade.

The architecture also introduces sector boundaries. Each Shiraz chip controls ~200 elements as a semi-independent unit. Phase continuity across sector boundaries depends on calibration accuracy between chips. Any mismatch creates discontinuities in the aperture illumination, which manifest as elevated sidelobes or pointing errors.

Null steering becomes harder. In Gen 1, placing a null at a specific angle (to reject interference from a neighboring satellite or ground source) required adjusting perhaps 50-100 elements with precise phase/amplitude control. The fine-grained architecture made this straightforward. In Gen 3, the coarser control and sector structure make deep nulls more difficult to achieve and maintain as the beam scans.

Calibration is also more constrained. Gen 1 could measure and correct each element individually, compensating for PCB trace length variations, component tolerances, and temperature drift. Gen 3’s calibration must work within the coarser analog control, relying more heavily on factory trimming and accepting wider tolerances.

What the Hierarchy Buys

These tradeoffs are real, but acceptable for Starlink’s use case. The hierarchy provides three benefits.

Cost dropped dramatically. The ~80 to ~6 chip reduction cascades through the entire bill of materials. Fewer chips means less silicon, fewer packages, fewer solder joints, fewer test points, simpler PCB routing, and lower assembly time. This is the difference between losing $2,000 per unit and making money.

Power consumption dropped. Gen 1’s distributed architecture required 80 beamformer chips, each with its own power-hungry control logic and high-speed digital interfaces to the baseband processor. Gen 3 consolidates this into 6 chips, with the Pulsar FEMs requiring only bias current for the LNA/PA and minimal logic for the serial control interface.

Reliability improved through simplicity. Fewer chips means fewer failure points. The daisy-chain topology means a single broken trace doesn’t orphan half the array (as could happen with parallel routing). The simpler PCB stackup has fewer layer-to-layer vias that could fail.

SpaceX could accept the RF performance tradeoffs because the Starlink link budget has margin. The dish doesn’t need to place nulls with surgical precision; it just needs to point at the satellite and not violate ITU sidelobe masks. The satellites aren’t so close together that interference nulling is critical. The system was designed with enough SNR headroom that a few dB of degraded sidelobe performance doesn’t matter.

Gen 1 was over-engineered for the application. The architecture provided capabilities (deep nulls, precise sidelobes, per-element calibration) that the system didn’t need, at costs it couldn’t afford.

Gen 4 Mini: DC-Native and Smaller Aperture

The “Mini” (Gen 4, 2024) shrinks the aperture for portability:

| Specification | Gen 3 Standard | Gen 4 Mini |
|---|---|---|
| Element Count | ~1,200 | ~600-800 |
| Aperture Area | ~0.25 m\(^2\) | ~0.10 m\(^2\) |
| Antenna Gain | ~33-34 dBi | ~30-31 dBi |
| Power Input | AC (100-240V) | DC (12-48V, USB-C PD) |
| Router | External | Integrated |
| Scan Range | \(\pm 60°\) | \(\pm 60°\) (reduced at edges) |

The smaller aperture has consequences. Antenna gain scales roughly with area. Reduce area by 60%, lose 3-4 dB of gain. The Mini compensates partially through:

  • Higher satellite density (more satellites = shorter distances = less path loss)
  • Software-managed power reduction at extreme scan angles
  • Acceptance of lower peak throughput
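The gain-versus-area relationship is simple to check. A quick sketch using the standard aperture gain formula, with the table's approximate areas and an assumed aperture efficiency (the efficiency value is my assumption, not a published figure):

```python
import math

# Aperture gain: G = 10*log10(4*pi*A*eta / lambda^2)
f = 12e9                         # Ku-band downlink carrier, Hz (approximate)
lam = 3e8 / f                    # wavelength, m
eta = 0.6                        # assumed aperture efficiency

def gain_dbi(area_m2):
    return 10 * math.log10(4 * math.pi * area_m2 * eta / lam**2)

g_std  = gain_dbi(0.25)          # Gen 3 standard aperture
g_mini = gain_dbi(0.10)          # Gen 4 Mini aperture
print(f"Gen 3: {g_std:.1f} dBi, Mini: {g_mini:.1f} dBi, "
      f"delta: {g_std - g_mini:.1f} dB")
```

The delta depends only on the area ratio: 10·log10(0.25/0.10) ≈ 4 dB, matching the 3-4 dB loss quoted above regardless of the efficiency assumption.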

The Mini also faces a regulatory challenge: with a smaller aperture, its beam is wider. A wider beam means more energy directed toward the geostationary arc (the “Clarke Belt” at ~36,000 km where traditional satellites live). To avoid interference, the Mini likely reduces transmit power when its beam points near the geostationary arc, which explains the lower upload speeds users observe in certain pointing directions.


The Silicon: FD-SOI and Custom ASICs

SpaceX partnered with STMicroelectronics to develop custom silicon. The key technology choice: FD-SOI (Fully Depleted Silicon On Insulator).

Why FD-SOI?

Semiconductor processes involve tradeoffs:

| Technology | RF Performance | Cost | Power | Typical Use |
|---|---|---|---|---|
| GaAs (Gallium Arsenide) | Excellent | Very High | Good | Military radar |
| GaN (Gallium Nitride) | Excellent | High | Excellent | High-power RF |
| SiGe (Silicon Germanium) | Very Good | Moderate | Good | 5G infrastructure |
| Bulk CMOS | Poor | Low | Variable | Digital logic |
| FD-SOI | Good | Moderate | Excellent | Mixed signal |
ℹ️ NOTE

Semiconductor acronyms:

  • III-V (GaAs, GaN): Compound semiconductors using elements from periodic table groups III and V. Superior electron mobility for RF, but require specialized fabs and can’t use standard silicon equipment.
  • SiGe: Silicon with germanium added to improve electron mobility. Faster than pure silicon, but still compatible with standard silicon fabs.
  • CMOS: Complementary Metal-Oxide-Semiconductor, the standard silicon process used in nearly all digital chips.
  • FD-SOI: Fully Depleted Silicon On Insulator, a modified CMOS process with an insulating layer (explained below).
ℹ️ NOTE “Parasitic” in electronics: A parasitic element is an unintended circuit component that emerges from physical construction. Parasitic capacitance forms between any two conductors separated by an insulator, whether you wanted a capacitor there or not. At low frequencies these are negligible; at 14 GHz they become significant loss and coupling paths.

FD-SOI is a “Goldilocks” process for Starlink:

  • Better RF than bulk CMOS: The insulator layer reduces parasitic capacitance and substrate coupling
  • Cheaper than III-V semiconductors: Standard silicon fab equipment
  • Excellent power efficiency: Critical for a 50W power budget
  • Good analog/digital integration: Can put the phase shifter and the control logic on the same die

The insulator layer (buried oxide, or BOX) differentiates FD-SOI from bulk CMOS:

```mermaid
block
  columns 2
  H1["Standard Bulk CMOS"] H2["FD-SOI"]
  T1["Transistor"] T2["Transistor"]
  S1["Silicon Substrate (parasitic capacitor)"] BOX["Buried Oxide ~25nm"]
  space S2["Silicon Substrate (isolated)"]
  style H1 fill:#f5f5f5,stroke:#333
  style H2 fill:#f5f5f5,stroke:#333
  style T1 fill:#bbdefb,stroke:#1976d2
  style T2 fill:#bbdefb,stroke:#1976d2
  style S1 fill:#ffcdd2,stroke:#c62828
  style BOX fill:#fff9c4,stroke:#f9a825,stroke-width:3px
  style S2 fill:#c8e6c9,stroke:#388e3c
```

In regular CMOS, the transistor channel couples capacitively to the substrate beneath it (red = lossy). At 14 GHz, this parasitic capacitance becomes a significant loss mechanism. Signal energy leaks into the substrate instead of going where you want it.

FD-SOI interposes a thin (~25nm) insulating oxide layer. The transistor floats on this oxide, electrically isolated from the substrate below. The result is lower parasitics, better high-frequency gain, and the ability to use body biasing to tune transistor performance on the fly.

ℹ️ NOTE “Fully Depleted” refers to the silicon layer above the oxide being thin enough that it’s completely depleted of free carriers during operation. This gives better electrostatic control of the channel and lower leakage current compared to partially-depleted SOI.

The Chip Architecture

Based on teardown analysis, the Gen 3 silicon stack follows a hierarchical pattern:

```mermaid
block
  columns 4
  space BB["Baseband SoC"] space:2
  space:4
  S1["Shiraz #1"] S2["Shiraz #2"] S3["..."] S4["Shiraz #6"]
  space:4
  P1["Pulsar"] P2["Pulsar"] P3["Pulsar"] P4["..."]
  space:4
  A1(("Ant")) A2(("Ant")) A3(("Ant")) A4(("..."))
  BB --> S1
  BB --> S2
  BB --> S3
  BB --> S4
  S1 --> P1
  S1 --> P2
  S1 --> P3
  S1 --> P4
  P1 --> A1
  P2 --> A2
  P3 --> A3
  P4 --> A4
  P1 -- "serial" --> P2 --> P3 --> P4
  style BB fill:#e3f2fd,stroke:#1976d2
  style S1 fill:#fff9c4,stroke:#f9a825
  style S2 fill:#fff9c4,stroke:#f9a825
  style S3 fill:#fff9c4,stroke:#f9a825
  style S4 fill:#fff9c4,stroke:#f9a825
  style P1 fill:#c8e6c9,stroke:#388e3c
  style P2 fill:#c8e6c9,stroke:#388e3c
  style P3 fill:#c8e6c9,stroke:#388e3c
  style P4 fill:#c8e6c9,stroke:#388e3c
```

Vertical arrows show RF signal flow down through the hierarchy. Horizontal arrows show the serial control daisy chain between Pulsar FEMs (peers at the same level). Each Shiraz drives multiple chains; each Pulsar feeds 2-4 antenna elements.

Each “Shiraz” beamformer handles a sector of the array. It receives digital I/Q samples (in-phase and quadrature components that together capture both amplitude and phase of the signal) from the baseband processor, performs up/down conversion to Ku-band RF, and distributes signals to its chain of “Pulsar” FEMs. The Pulsar chips handle the final amplification and fine phase adjustment.
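The split between sector-level and element-level phase control can be sketched numerically. A minimal two-level model, where the sector chip applies one coarse phase and each FEM applies a quantized fine trim (the counts and bit depth are illustrative assumptions, not known Shiraz/Pulsar parameters):

```python
import numpy as np

# Two-level phase control: coarse per-sector phase plus fine per-element trim.
elements_per_sector = 8
sectors = 4
d = 0.5                                   # spacing in wavelengths
steer = np.deg2rad(30)

n = np.arange(sectors * elements_per_sector)
ideal = -2 * np.pi * d * n * np.sin(steer)   # exact per-element phase

# Coarse level: one phase per sector (taken at the sector's first element)
sector_idx = n // elements_per_sector
coarse = ideal[sector_idx * elements_per_sector]

# Fine level: per-element trim, quantized to an assumed 6-bit resolution
lsb = 2 * np.pi / 2**6
fine = np.round((ideal - coarse) / lsb) * lsb
applied = coarse + fine

err_deg = np.rad2deg(np.abs(applied - ideal)).max()
print(f"max residual phase error: {err_deg:.2f} deg")
```

The residual error is bounded by half the fine-trim step; the hierarchy works as long as that bound stays small compared to the phase accuracy the link budget requires.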


The PCB: Hybrid Stackup and Routing Challenges

A 12" \(\times\) 19" PCB carrying 14 GHz signals is not a simple board.

The Material Problem

RF signals are picky about their substrate. Standard FR-4 (the green fiberglass-epoxy used in most electronics) has:

  • High dielectric loss (\(\tan \delta \approx 0.02\))
  • Variable dielectric constant (\(\varepsilon_r \approx 4.2-4.5\), but inconsistent)
  • Rough copper that increases conductor loss at high frequencies

For RF, you want specialized laminates. Rogers Corporation makes the industry-standard high-frequency materials. Their RO4003C is a ceramic-filled hydrocarbon laminate offering:

  • Low loss: \(\tan \delta \approx 0.0027\) (the “loss tangent”, how much signal energy converts to heat). FR-4 is \(\sim\)0.02, meaning \(10 \times\) more loss per unit length.
  • Stable dielectric constant: \(\varepsilon_r = 3.55 \pm 0.05\). FR-4 varies from 4.2-4.8 depending on weave direction and glass content, a nightmare for impedance control since trace impedance and signal velocity both depend on \(\varepsilon_r\).
  • Smooth copper: Electrodeposited copper with low surface roughness. At GHz frequencies, current flows only in the outer few microns of the conductor (the “skin effect”), so a rough surface forces current to travel a longer, lossier path.
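The skin effect mentioned above is easy to quantify with the standard skin-depth formula:

```python
import math

# Skin depth in copper: delta = sqrt(rho / (pi * f * mu0))
rho = 1.68e-8              # copper resistivity, ohm*m
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
f = 14e9                   # Ku-band uplink, Hz

delta = math.sqrt(rho / (math.pi * f * mu0))
print(f"skin depth at 14 GHz: {delta * 1e6:.2f} um")
```

The result is roughly half a micron, so copper surface roughness of a micron or two is on the same scale as the current-carrying layer itself, which is why smooth-foil laminates matter at these frequencies.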
ℹ️ NOTE Why does loss matter? In the receive path, every 1 dB of loss before the LNA adds directly to the system noise figure. A 2 dB trace loss means 2 dB worse sensitivity, the difference between working and not working in marginal conditions.

But Rogers material costs \(10\text{-}20 \times\) more than FR-4. A 16-layer board made entirely of RF laminate would cost hundreds of dollars in materials alone.

The Hybrid Solution

Starlink uses a hybrid stackup, with expensive RF laminate only where it matters:

```mermaid
block
  columns 1
  H1["RF Layers (Low-Loss Laminate)"]
  L1["L1: Antenna Patches + RF Traces"]
  L2["L2: Ground Plane + Feed Slots"]
  H2["Transition"]
  L3["L3: RF/Digital Interface"]
  H3["Digital/Power Layers (FR-4)"]
  L4["L4-6: Control Bus Routing"]
  L5["L7-10: Power Distribution"]
  L6["L11-12: Ground + Thermal"]
  style H1 fill:#f5f5f5,stroke:#333
  style H2 fill:#f5f5f5,stroke:#333
  style H3 fill:#f5f5f5,stroke:#333
  style L1 fill:#e3f2fd,stroke:#1976d2
  style L2 fill:#e3f2fd,stroke:#1976d2
  style L3 fill:#fff3e0,stroke:#f57c00
  style L4 fill:#e8f5e9,stroke:#388e3c
  style L5 fill:#e8f5e9,stroke:#388e3c
  style L6 fill:#e8f5e9,stroke:#388e3c
```

Rogers RO4003C (or equivalent ceramic-filled hydrocarbon) for layers 1-2; standard FR-4 epoxy-glass for the rest. The RF substrate costs ~$15/sq ft; FR-4 costs ~$0.50/sq ft.

The RF signals live only on the top layers where losses matter. The cheap FR-4 handles power distribution and low-speed digital, signals that don’t care about dielectric loss.

The challenge is thermal expansion mismatch. Rogers and FR-4 expand at different rates when heated. During reflow soldering (260°C), a large board with mismatched materials wants to warp like a potato chip. The fact that Starlink boards come out flat reflects careful copper balancing (symmetric copper on both sides) and lamination process control.

The LO Distribution Tree

One signal demands perfect routing: the Local Oscillator (LO).

In a phased array, phase equals direction. If the LO clock arrives at element A later than element B due to trace length mismatch, the beam points in the wrong direction.

The solution is an H-tree distribution network. The LO signal starts at a central synthesizer and splits symmetrically. Every path from source to destination has identical electrical length. Where routing obstacles force asymmetry (say, the left branch routes around a mounting hole), the right branch adds a serpentine delay line, a squiggly trace that artificially lengthens the path to match.

High-resolution teardown photos show these serpentines clearly on Starlink boards. They’re not aesthetic. They’re precision timing structures.

How sensitive is this? At 14 GHz, one RF cycle takes about 71 picoseconds. A 10 ps timing mismatch between elements corresponds to roughly 50° of phase error, enough to steer the beam several degrees off target.
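The arithmetic behind those numbers:

```python
# Phase error produced by LO timing skew at the carrier frequency.
f = 14e9                        # Ku-band uplink carrier, Hz
period_ps = 1e12 / f            # one RF cycle in picoseconds (~71 ps)
dt_ps = 10.0                    # timing mismatch between two elements
phase_err_deg = 360.0 * dt_ps / period_ps
print(f"{period_ps:.1f} ps per cycle; {dt_ps:.0f} ps of skew = "
      f"{phase_err_deg:.0f} deg of phase error")
```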


Signal Processing: Zero-IF and Software Calibration

Understanding the signal chain reveals another area where Starlink trades analog complexity for digital sophistication.

The Modulation: OFDM

Starlink uses OFDM (Orthogonal Frequency Division Multiplexing), the same modulation scheme as WiFi and LTE. Instead of one carrier, the signal consists of thousands of closely-spaced subcarriers, each carrying a small piece of the data.

Why OFDM for satellite links?

  • Multipath resilience: Each symbol includes a cyclic prefix (a repeated copy of the end of the symbol prepended to the beginning), which absorbs timing variations from reflections
  • Flexible allocation: Subcarriers can be assigned per-user
  • Spectral efficiency: Tight subcarrier packing maximizes bits/Hz
  • Beamforming friendly: Phase shifts apply uniformly across all subcarriers

The downside: OFDM has a high peak-to-average power ratio (PAPR), meaning the power amplifier must handle peaks 8-10 dB above average. This hurts efficiency since you can’t run the PA near saturation.
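Both the cyclic-prefix construction and the PAPR problem fall out of a minimal OFDM symbol generator. This sketch uses illustrative sizes (QPSK on 1,024 subcarriers), not Starlink's actual numerology, which isn't public:

```python
import numpy as np

# Minimal OFDM transmit symbol: map bits to QPSK, IFFT to the time domain,
# prepend a cyclic prefix, then measure peak-to-average power ratio.
rng = np.random.default_rng(0)
n_sc, cp_len = 1024, 128

bits = rng.integers(0, 2, (n_sc, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

time_sym = np.fft.ifft(qpsk)                              # subcarriers -> waveform
tx_sym = np.concatenate([time_sym[-cp_len:], time_sym])   # cyclic prefix prepended

power = np.abs(tx_sym) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of this symbol: {papr_db:.1f} dB")
```

Because the prefix is literally the tail of the symbol copied to the front, any multipath delay shorter than the prefix just rotates the circular convolution instead of smearing into the next symbol. And with a thousand independent subcarriers summing, peaks around 10 dB above average are routine, which is exactly the PA headroom problem described above.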

The Architecture: Zero-IF

Traditional satellite radios use superheterodyne architecture: RF → IF → Baseband, with filters at each stage. This requires bulky SAW (Surface Acoustic Wave) or ceramic filters.

Evidence suggests Starlink uses Zero-IF (Direct Conversion) instead. The received 11 GHz signal is mixed directly with an 11 GHz LO, producing baseband I/Q signals centered at DC. No intermediate frequency stage.

Advantages:

  • Fewer components (no IF filters)
  • Smaller form factor
  • Lower power consumption

Disadvantages:

  • DC offset: Mixer imbalances create DC components that corrupt the signal
  • I/Q imbalance: Amplitude/phase mismatches between I and Q paths cause image problems
  • 1/f noise: Active devices have higher noise at low frequencies

These problems would be fatal in a traditional radio. But Starlink handles them differently.

ℹ️ NOTE Why not just use SAW filters? A SAW filter converts an electrical signal into a mechanical acoustic wave on a piezoelectric crystal. The wave propagates across the surface, and the crystal’s geometry determines which frequencies pass. Sharp filtering in a passive component, but the acoustic wave travels at ~3,000 m/s (roughly 100,000× slower than EM waves), so the acoustic wavelength at 1 GHz is only ~3 µm. The electrodes that launch and sense the wave must be patterned at a fraction of that wavelength, so above a few GHz the features become impractically small and lossy to fabricate; designs shift to BAW (Bulk Acoustic Wave) resonators or avoid IF filtering entirely.

Software-Defined Calibration

Starlink accepts the analog imperfections and corrects them digitally.

The baseband processor runs continuous calibration loops:

  1. Inject known test tones
  2. Measure DC offset, I/Q imbalance, phase errors
  3. Compute correction coefficients
  4. Apply corrections to received/transmitted data

A perfectly matched analog mixer costs money. A software routine that corrects for mismatch costs transistors, and transistors are nearly free at scale. Starlink applies this tradeoff throughout the design.
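Those corrections can be sketched in a few lines. This is the textbook zero-IF cleanup (DC removal plus a blind second-order-statistics I/Q balance), not SpaceX's actual algorithm, and the impairment values are made up for illustration:

```python
import numpy as np

# Simulate a zero-IF receive stream with DC offset and I/Q imbalance,
# then correct both digitally.
rng = np.random.default_rng(1)
n = 100_000
clean = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Impairments: DC offset, plus Q-branch gain error g and phase error phi
dc, g, phi = 0.05 + 0.02j, 1.1, np.deg2rad(5)
i_rx = clean.real
q_rx = g * (clean.imag * np.cos(phi) + clean.real * np.sin(phi))
rx = (i_rx + 1j * q_rx) + dc

# Step 1: DC offset is just the long-term mean of the samples
rx = rx - rx.mean()

# Step 2: blind I/Q imbalance correction from second-order statistics
i, q = rx.real, rx.imag
theta = np.mean(i * q) / np.mean(i * i)           # I-into-Q leakage estimate
q = q - theta * i                                 # restore orthogonality
q = q * np.sqrt(np.mean(i * i) / np.mean(q * q))  # equalize branch power
corrected = i + 1j * q

# |E[z^2]| is ~0 for a balanced (circular) signal; report it as a dB ratio
balance_db = lambda z: 10 * np.log10(np.mean(np.abs(z) ** 2) / np.abs(np.mean(z ** 2)))
before_db, after_db = balance_db(rx), balance_db(corrected)
print(f"imbalance metric: {before_db:.1f} dB before, {after_db:.1f} dB after")
```

The correction improves the balance metric by well over 10 dB using nothing but sample statistics, which is the whole point: the analog front end stays cheap and sloppy, and the math cleans up after it.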


Thermal Management: The “RF Heater”

Starlink dishes melt snow. There are no heating coils.

How It Works

During normal operation, heat comes from two main sources. The baseband processor (SoC) runs continuously, handling protocol processing, beamforming calculations, and network management. This alone dissipates 10-20W. The power amplifiers in each Pulsar FEM add more, but because Starlink uses TDD/TDMA scheduling, the PAs transmit in bursts rather than continuously. At typical duty cycles, PA waste heat alone wouldn’t melt snow.

In “snow melt” mode, the system deliberately increases heat generation. It can raise the transmit duty cycle (sending dummy data during otherwise idle slots), bias the PAs into a high-current state even without RF output (Class A operation), or both. The ~300 Pulsar chips become distributed heating elements across the array face.

Heat flows through a carefully designed thermal path:

```mermaid
flowchart LR
  subgraph Silicon["Heat Sources"]
    SOC["Baseband SoC<br>(continuous)"]
    PA["~300 Pulsar PAs<br>(burst + snow mode)"]
  end
  subgraph Spreading["Heat Spreading"]
    TP["Thermal Pads"]
    GND["PCB Ground Plane<br>(copper heat spreader)"]
  end
  subgraph Dissipation["Dissipation"]
    AL["Aluminum Backplate"]
    RAD["Radome Surface"]
    SNO["Snow/Air"]
  end
  SOC --> TP
  PA --> TP
  TP --> GND --> AL --> RAD --> SNO
  style Silicon fill:#ffcdd2,stroke:#c62828
  style Spreading fill:#fff9c4,stroke:#f9a825
  style Dissipation fill:#bbdefb,stroke:#1976d2
```

The entire RF assembly becomes a ~75-100W distributed heater. This is why Gen 3 moved the power supply to a separate unit, isolating the uncontrolled heat of AC/DC conversion from the controlled heat of the RF array.

The radome (the plastic cover) is designed for thermal conductivity here, not just weather protection. It must efficiently transfer heat from the aluminum backplate to the snow on its surface.
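A back-of-envelope check on how much snow that heater can actually melt. All numbers are illustrative, and real melt rates are lower since much of the heat warms sub-freezing snow or is lost to the air:

```python
# Upper bound on snow melt rate in "snow melt" mode.
heater_w = 100.0            # upper end of the ~75-100 W dissipation above
latent_heat = 334e3         # J/kg to melt ice already at 0 °C
melt_kg_per_hr = heater_w * 3600 / latent_heat
print(f"~{melt_kg_per_hr:.1f} kg of snow per hour if every joule melts snow")
```

About a kilogram per hour at best, which is why the dish keeps up with accumulation on its face but doesn't clear a snowbank.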


Manufacturing: How They Hit the Price Point

The original round Dishy reportedly cost SpaceX ~$3,000 to manufacture and sold for $499. The company was losing money on every unit shipped.

Gen 3 likely costs $300-400 to manufacture. How?

Integration

The move from ~80 beamformer chips to ~6 is the single biggest cost reduction. Each chip requires:

  • Silicon area (cost per mm\(^2\))
  • Package (the plastic/ceramic housing)
  • Assembly (placing and soldering)
  • Testing
  • PCB routing (board area, layer count)

Reducing chip count by \(13 \times\) cascades through the entire bill of materials.

Volume

STMicroelectronics runs FD-SOI fabs at scale. SpaceX has shipped millions of terminals. At these volumes:

  • Silicon costs drop (amortized mask costs, optimized yield)
  • PCB fabs optimize their process for this specific stackup
  • Assembly lines reach steady-state efficiency

Design for Manufacturing

Aperture coupling eliminates 1,200+ solder joints per board. Daisy-chain control eliminates hundreds of PCB traces. Hybrid stackup uses expensive materials only where necessary. Every design choice trades elegance for manufacturability.


The Satellite Side: What’s Up There

The dish doesn’t work alone. Understanding what’s happening 550 km overhead explains many of the design constraints discussed above.

Satellite Architecture

Each Starlink satellite (v1.5/v2 generation) carries:

  • User link phased arrays: Ku-band, pointing at Earth, capable of forming multiple simultaneous beams to serve different geographic cells
  • Gateway antennas: Ka-band (higher frequency = more bandwidth), connecting to ground stations for internet backhaul
  • Inter-satellite laser links: 1550nm optical terminals enabling satellite-to-satellite communication at ~100 Gbps (newer v1.5+ satellites)

The satellite’s Ku-band arrays face similar physics to the dish (\(\lambda/2\) spacing, scan angle limits, thermal management) but in a harsher environment. In vacuum, there’s no convection; heat can only leave via radiation. The satellites use large radiator panels and careful thermal design to dump waste heat to space.
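Radiation-only cooling sets the radiator sizing. A back-of-envelope Stefan-Boltzmann calculation, with every value assumed for illustration rather than taken from actual Starlink satellite specs:

```python
import math

# In vacuum, heat leaves only by radiation: P = eps * sigma * A * (T^4 - Tspace^4).
sigma = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4
eps = 0.85             # assumed radiator emissivity
t_panel = 300.0        # assumed radiator temperature, K
t_space = 3.0          # deep-space background, K (negligible)

w_per_m2 = eps * sigma * (t_panel**4 - t_space**4)
area_for_1kw = 1000.0 / w_per_m2
print(f"{w_per_m2:.0f} W/m^2 -> {area_for_1kw:.1f} m^2 of radiator to reject 1 kW")
```

Around 400 W/m² at room-temperature panel levels (less in practice, since sunlit panels also absorb solar flux), so rejecting kilowatt-class waste heat demands square meters of radiator, which is why the panels dominate the satellite's silhouette.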

Orbital Mechanics and Coverage

At 550 km altitude, satellites orbit Earth in approximately 95 minutes. From any ground location, a satellite is visible for only 4-6 minutes before dropping below the horizon, requiring continuous handoffs.
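Those numbers follow directly from circular-orbit mechanics:

```python
import math

# Circular-orbit period: T = 2*pi*sqrt(a^3 / mu), with a from Earth's center.
mu = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6371e3           # mean Earth radius, m
alt = 550e3                # Starlink shell altitude, m

a = r_earth + alt
period_min = 2 * math.pi * math.sqrt(a**3 / mu) / 60
v_orbit = math.sqrt(mu / a)
print(f"period: {period_min:.1f} min, orbital speed: {v_orbit / 1000:.2f} km/s")
```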

```mermaid
flowchart LR
  subgraph Ground["Ground"]
    DISH["Dish"]
  end
  subgraph Sky["550 km Overhead"]
    SAT1["Sat A<br>(setting)"]
    SAT2["Sat B<br>(overhead)"]
    SAT3["Sat C<br>(rising)"]
  end
  SAT1 -.->|"fading"| DISH
  SAT2 ==>|"active link"| DISH
  SAT3 -.->|"acquiring"| DISH
  style DISH fill:#e3f2fd,stroke:#1976d2
  style SAT2 fill:#c8e6c9,stroke:#2e7d32
```

At any moment, 3-5 satellites may be visible. The dish must:

  1. Track the current satellite as it moves (ground track velocity ~7 km/s)
  2. Continuously update beam pointing to compensate
  3. Monitor signal quality from the next satellite
  4. Execute handoff before the current satellite’s signal degrades

The Handoff Protocol

Handoffs happen every 15-90 seconds depending on geometry. The process:

  1. Ephemeris broadcast: Satellites transmit orbital predictions (similar to TLE, Two-Line Element format used for tracking satellites) so the dish knows where to look
  2. Pre-acquisition: The dish begins tracking the next satellite while still connected to the current one
  3. Soft handoff: Brief period where the dish communicates with both satellites
  4. Switchover: Traffic shifts to the new satellite; old link terminates

This is harder than cell phone handoffs, which happen between stationary towers. Starlink executes handoffs between targets moving at 7 km/s relative to ground, while the dish simultaneously steers to track them.

The baseband processor manages this autonomously. From the user’s perspective, it’s invisible. TCP connections persist across handoffs. The latency spike during handoff is typically <50ms, masked by buffering.

The Ground Segment

User traffic doesn’t go directly to the internet:

```mermaid
flowchart LR
  DISH["User Terminal"] -->|"Ku-band"| SAT["Satellite"]
  SAT -->|"Ka-band"| GW["Ground Station"]
  GW -->|"Fiber"| NET["Internet"]
  style DISH fill:#e3f2fd,stroke:#1976d2
  style SAT fill:#fff9c4,stroke:#f9a825
  style GW fill:#c8e6c9,stroke:#388e3c
  style NET fill:#f5f5f5,stroke:#757575
```

Ground stations (called “gateways”) use large tracking dishes operating in Ka-band (27-30 GHz uplink, 17-20 GHz downlink). Each gateway serves as the internet on-ramp for dozens of satellites in its view. SpaceX operates hundreds of gateways worldwide.

With laser inter-satellite links, traffic can hop between satellites to reach a gateway on the other side of the planet. This is critical for oceanic and polar coverage where no gateway is nearby.


Challenges and Limitations

The system isn’t without compromises.

Interference Management

The ITU requires Starlink to protect geostationary satellites. When the dish’s beam approaches the geostationary arc, transmit power must drop. This creates “exclusion zones” where uplink speeds decrease, visible as the slower upload speeds users experience in certain geographic locations.

Obstruction Sensitivity

Unlike geostationary dishes, which point at one fixed spot in the sky, Starlink must see a wide swath of sky. Trees, buildings, and terrain that partially block the view cause intermittent dropouts as the dish loses satellites during their transit.

Weather Sensitivity

Ku-band signals experience rain fade. Heavy precipitation attenuates the signal, reducing throughput or causing outages. This is fundamental physics (water absorbs microwaves) and no antenna design can fully overcome it.

Power Consumption

50W average (100W peak) is significant for a consumer device. The dish consumes more power than many routers and modems combined. For off-grid installations, this matters.


Conclusion

The Starlink dish took technology that cost millions and required military budgets, and shipped it for the price of a router.

The technical path was integration. Fewer, more capable chips. Analog beamforming pushed to the edge. Hybrid materials used strategically. Software calibration replacing analog precision. Every architectural decision traded complexity in one domain for simplicity in another.

The dish is a 1,200-element phased array achieving 33 dBi of gain, focusing RF energy \(2000 \times\) more effectively than an omnidirectional antenna. It operates at 14 GHz, tracks objects moving at 7 km/s, executes autonomous handoffs every few minutes, and melts its own snow, all for a manufacturing cost under $500.

Whether this architecture becomes a template for other consumer RF systems (automotive radar, 5G fixed wireless, future satellite constellations) remains to be seen. But the existence proof is now sitting on rooftops worldwide.


References and Further Reading

Teardowns and Hardware Analysis

  • Ken Keiter’s Gen 1 teardown (YouTube): First public disassembly of the round dish
  • Oleg Kutkov’s blog: RF measurements and antenna pattern analysis
  • r/Starlink community investigations: Ongoing crowdsourced reverse engineering

Official Sources

  • SpaceX FCC filings (search “Space Exploration Holdings” on FCC ECFS): Frequency allocations, EIRP limits, antenna patterns
  • FCC Part 25 blanket license applications: Technical specifications for user terminals
  • ITU Radio Regulations Article 22: NGSO/GSO spectrum sharing framework (EPFD limits)
  • STMicroelectronics press releases (2021): Confirmed FD-SOI partnership for Starlink chips

Technical Background

  • Balanis, Antenna Theory: Analysis and Design: Patch antenna fundamentals
  • Pozar, Microwave Engineering: Transmission line and RF system theory
  • Rogers Corporation datasheets: RF laminate specifications

Notes on Sources

This post synthesizes information from multiple sources with varying levels of certainty:

Confirmed facts

  • Frequency bands (FCC filings)
  • STMicroelectronics partnership and FD-SOI process (press releases)
  • Orbital parameters (public TLE data)
  • General phased array physics (textbook material)

Teardown-derived (high confidence)

  • Approximate element counts
  • PCB layer structure and materials
  • Chip placement and general architecture

Community analysis (moderate confidence)

  • “Shiraz” and “Pulsar” chip names (PCB silkscreen observations)
  • Specific chip counts per generation
  • Manufacturing cost estimates

SpaceX does not publish detailed technical specifications for the user terminal. The engineering details in this post represent my best understanding based on available evidence.