The GEOINT Mind Map

Five patterns connect everything in geospatial intelligence. Every technique in this course, every tool, every analysis method traces back to one of them. Internalize these and you’ll understand WHY you’re doing things, not just HOW.


Pattern 1: Resolution Is a Four-Dimensional Tradeoff

There is no perfect satellite. Every sensor trades off four axes of resolution, and understanding this tradeoff is the first real intelligence skill in GEOINT.

Spatial resolution — how small can you see? Measured as Ground Sample Distance (GSD): the distance on the ground spanned by one pixel. Sentinel-2 gives you 10m pixels — good for fields, forests, urban sprawl. To count vehicles, you need <1m commercial imagery. To read license plates, you need airborne sensors at <10cm.
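The geometry behind GSD can be sketched with the pinhole-camera approximation GSD ≈ altitude × pixel pitch / focal length. The numbers below are illustrative assumptions, not any real sensor's specifications:

```python
# Approximate GSD with the pinhole-camera relation GSD = H * p / f,
# where H is orbital altitude, p is detector pixel pitch, f is focal length.
# All numbers here are illustrative assumptions, not real sensor specs.

def ground_sample_distance(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Ground distance spanned by one detector pixel at nadir."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Example: 700 km altitude, 6.5 micron pixels, 0.6 m focal length
gsd = ground_sample_distance(700e3, 6.5e-6, 0.6)
print(f"GSD ≈ {gsd:.1f} m")  # ≈ 7.6 m with these assumed numbers
```

Note what the relation implies: halving GSD at a fixed altitude means doubling the focal length or halving the pixel pitch — both of which shrink the light collected per pixel.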

Spectral resolution — how many wavelengths do you measure? A normal camera captures 3 bands (RGB). Sentinel-2 captures 13 bands from visible through shortwave infrared. A hyperspectral sensor captures 200+ narrow bands. More bands = more material discrimination (you can distinguish types of vegetation, minerals, even camouflage vs real foliage), but the data volume explodes and spatial resolution usually drops.

Temporal resolution — how often can you revisit? Sentinel-2 revisits every 5 days. Planet revisits daily. A GEO weather satellite images the same spot every 15 minutes but at kilometer resolution. If your target is a military convoy that moves in 6 hours, 5-day revisit is useless.

Radiometric resolution — how precisely do you measure brightness? 8-bit gives 256 levels. 12-bit gives 4096. Higher radiometric resolution lets you distinguish subtle differences in shadow, water depth, soil moisture — but generates larger files and needs more careful calibration.

The tradeoff is physical. To collect more light (higher radiometric resolution), you need a bigger pixel or longer exposure. To see more bands (higher spectral resolution), you split the light into more bins, reducing signal per bin. To revisit faster (higher temporal resolution), you need more satellites or wider swath — which means worse spatial resolution. Physics enforces this. No satellite breaks the tradeoff; they just choose where on the curve to sit.
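The scaling above can be put in rough numbers. This sketch treats collected signal as proportional to pixel ground area, spectral bandwidth, and dwell time — an enormous simplification, but it shows why a narrow hyperspectral bin costs spatial resolution:

```python
import math

# Back-of-envelope signal scaling: photons per detector sample are roughly
# proportional to pixel ground area (GSD^2), spectral bandwidth, and dwell
# time. The constants are arbitrary; only the ratios are meaningful.
def relative_signal(gsd_m, bandwidth_nm, dwell_ms):
    return gsd_m**2 * bandwidth_nm * dwell_ms

broadband = relative_signal(gsd_m=1.0, bandwidth_nm=400, dwell_ms=1.0)  # wide pan-style band
narrow = relative_signal(gsd_m=1.0, bandwidth_nm=10, dwell_ms=1.0)      # hyperspectral-style bin

# To recover the broadband signal level in the 10 nm bin at the same dwell
# time, the pixel must grow by the square root of the signal deficit:
needed_gsd = math.sqrt(broadband / narrow)
print(f"Equal-signal pixel for the narrow band: {needed_gsd:.1f}x larger GSD")  # ~6.3x
```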

Intelligence example: You’re tasked with monitoring a military airfield. Sentinel-2 (10m, 5-day) tells you if new hardened shelters are being built (construction takes weeks). But it can’t tell you how many aircraft are on the ramp — you need 50cm commercial imagery for that. And 50cm commercial has 2-3 day revisit for most of the world. If the question is “did aircraft deploy last night,” you need SAR (works at night) or tasked commercial imaging — which costs money and time.

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np
 
# Compare satellite systems across resolution dimensions
satellites = {
    "Sentinel-2":  {"spatial": 10,   "spectral": 13,  "temporal": 5,   "radiometric": 12},
    "Landsat-9":   {"spatial": 30,   "spectral": 11,  "temporal": 16,  "radiometric": 14},
    "Planet Dove": {"spatial": 3,    "spectral": 4,   "temporal": 1,   "radiometric": 12},
    "MAXAR WV-3":  {"spatial": 0.31, "spectral": 29,  "temporal": 4.5, "radiometric": 11},
    "Sentinel-1":  {"spatial": 5,    "spectral": 1,   "temporal": 6,   "radiometric": 16},
    "MODIS":       {"spatial": 250,  "spectral": 36,  "temporal": 0.5, "radiometric": 12},
}
 
fig, axes = plt.subplots(2, 2, figsize=(14, 10))
dims = ["spatial", "spectral", "temporal", "radiometric"]
labels = ["Spatial (m, lower=better)", "Spectral (bands, higher=better)",
          "Temporal (days, lower=better)", "Radiometric (bits, higher=better)"]
 
for ax, dim, label in zip(axes.flat, dims, labels):
    names = list(satellites.keys())
    values = [satellites[s][dim] for s in names]
    colors = plt.cm.Set2(np.linspace(0, 1, len(names)))
    ax.barh(names, values, color=colors)
    if dim in ("spatial", "temporal"):
        ax.set_xscale("log")  # values span orders of magnitude (0.31m to 250m)
    ax.set_xlabel(label)
    ax.set_title(dim.capitalize() + " Resolution")
 
plt.tight_layout()
plt.savefig("resolution_tradeoff.png", dpi=150, bbox_inches="tight")
plt.show()

Self-test: You need to detect illegal logging in the Amazon over the past year. Which satellite system would you choose and why? What tradeoffs are you accepting?


Pattern 2: The Ground Truth Is Never in the Image Alone

A satellite image is an input to analysis, not an answer. Every detection needs context: time of day, season, weather, baseline comparison, corroborating sources. Single-image conclusions are guesses.

When you see a cluster of white rectangles at a military base, that could be: deployed vehicles under camouflage nets, portable shelters for a field exercise, new construction materials staged for building, or containers for supply delivery. The pixels alone cannot tell you which. You need temporal context (were they there last month?), OSINT corroboration (any announcements of exercises?), seasonal knowledge (is this a routine deployment cycle?), and analytical judgment.

This is where GEOINT separates from remote sensing. A remote sensing scientist maps land cover. An intelligence analyst assesses activity, intent, and capability. The difference is context.

The baseline problem. You cannot detect change without knowing what “normal” looks like. An airfield with 12 aircraft might be alarming — unless the baseline is 15 and three are in maintenance. This is why persistent monitoring matters more than one-time collection. A single snapshot is almost useless for intelligence. A time series tells a story.
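A minimal version of a baseline check, using made-up weekly aircraft counts and a simple z-score (a real baseline would also model seasonality, deployment cycles, and collection gaps):

```python
import numpy as np

# Illustrative baseline check: weekly aircraft counts at a hypothetical
# airfield. All numbers are invented for the example.
counts = np.array([14, 15, 13, 16, 15, 14, 15, 16, 14, 15, 15, 14])  # 12-week baseline
today = 12

mean, std = counts.mean(), counts.std(ddof=1)
z = (today - mean) / std
print(f"Baseline {mean:.1f} ± {std:.1f}; today's count {today} → z = {z:.1f}")
if abs(z) > 2:
    print("Outside normal variation — worth a closer look.")
else:
    print("Within normal variation — likely routine (e.g. maintenance rotation).")
```

With this invented baseline the count of 12 sits about three standard deviations low — a genuine anomaly. Against a noisier baseline, the identical observation would be unremarkable, which is exactly the point: the number alone carries no meaning.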

The confirmation problem. Satellite imagery has an authority bias. A crisp overhead image feels like proof. But that “chemical weapons facility” might be a fertilizer plant. Those “mass graves” might be irrigation trenches. Without ground truth or corroborating intelligence, imagery alone can mislead. The analyst’s job is to describe what is OBSERVED, list what it COULD mean, and assess which interpretation is most likely given ALL available evidence — not just the image.

Intelligence example: In 2003, Colin Powell told the UN Security Council that Iraq operated "mobile biological weapons labs," supported by artist's renderings of trucks and rail cars built from source reporting. The interpretation — biological weapons production — was wrong. Trailers recovered after the invasion were later assessed to be for producing hydrogen for weather balloons. The collection was real; the analysis lacked ground truth.

Self-test: You observe 20 new tents at a border crossing in satellite imagery. List 5 possible explanations, and for each, describe what additional information would confirm or deny it.


Pattern 3: Change Is the Signal

Static features are background. Intelligence value comes from what CHANGED: new construction, vehicle movements, ship activity, vegetation removal. Change detection is the core analytical technique.

A military base exists — that’s geography. A military base where 200 vehicles appeared overnight — that’s intelligence. A port exists — infrastructure. A port where three submarines surfaced this week when the baseline is zero — that’s a collection priority.

Change types carry different intelligence weight:

Change Type          Timescale       What It Means
New construction     Weeks-months    Capability investment, permanent posture change
Vehicle deployment   Hours-days      Operational activity, readiness change
Earthworks/berms     Days-weeks      Defensive preparation, concealment effort
Vegetation removal   Days-weeks      Site preparation, field of fire clearing
Ship movements       Hours           Operational deployment, logistics

The absence of change is also a signal. A factory that normally has trucks loading every day but has been empty for a week — is it shut down? Sanctioned? Relocated? Absence is harder to detect than presence, but often more important.
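Detecting absence can be as simple as comparing the current quiet streak against the facility's own history. The daily activity flags below are invented for illustration:

```python
# Illustrative "absence is a signal" check: daily activity flags at a
# hypothetical facility (1 = loading trucks observed in imagery).
activity = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0]

# Current quiet streak: consecutive zeros at the end of the series
current_gap = 0
for flag in reversed(activity):
    if flag:
        break
    current_gap += 1

# Longest quiet streak seen before the current one
history = activity[:len(activity) - current_gap]
longest_prior, run = 0, 0
for flag in history:
    run = 0 if flag else run + 1
    longest_prior = max(longest_prior, run)

print(f"Current gap: {current_gap} days; longest prior gap: {longest_prior} day(s)")
if current_gap > 2 * longest_prior:
    print("Anomalous inactivity — shutdown, relocation, and deception are all possible.")
```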

import numpy as np
import matplotlib.pyplot as plt
 
# Simulate change detection: NDVI differencing
np.random.seed(42)
size = 200
 
# Before: vegetated area with some buildings
before_ndvi = np.random.normal(0.6, 0.1, (size, size))  # healthy vegetation
before_ndvi[80:120, 60:100] = np.random.normal(0.1, 0.05, (40, 40))  # existing building
 
# After: new construction cleared vegetation
after_ndvi = before_ndvi.copy()
after_ndvi[40:70, 130:170] = np.random.normal(0.1, 0.05, (30, 40))  # new cleared area
after_ndvi[150:170, 30:60] = np.random.normal(0.15, 0.05, (20, 30))  # another change
 
# Change detection
diff = after_ndvi - before_ndvi
threshold = -0.3  # significant vegetation loss
change_mask = diff < threshold
 
fig, axes = plt.subplots(1, 4, figsize=(20, 5))
axes[0].imshow(before_ndvi, cmap="RdYlGn", vmin=-0.1, vmax=0.9)
axes[0].set_title("Before NDVI")
axes[1].imshow(after_ndvi, cmap="RdYlGn", vmin=-0.1, vmax=0.9)
axes[1].set_title("After NDVI")
axes[2].imshow(diff, cmap="RdBu", vmin=-0.7, vmax=0.7)
axes[2].set_title("NDVI Difference")
axes[3].imshow(change_mask, cmap="Reds")
axes[3].set_title(f"Detected Changes (threshold={threshold})")
 
for ax in axes:
    ax.axis("off")
plt.tight_layout()
plt.savefig("change_detection_demo.png", dpi=150, bbox_inches="tight")
plt.show()

Self-test: You’re monitoring a facility monthly. In month 3, NDVI drops sharply in an area that was previously forest. In month 5, the NDVI is still low but a new high-reflectance area appears. What two-phase activity does this suggest? What would you look for in SAR to confirm?


Pattern 4: Physics Constrains Collection

Every sensor operates within physical laws that determine what it can and cannot see. Understanding these constraints prevents false conclusions and guides collection planning.

Optical sensors need daylight and clear sky. This is obvious but its implications are profound. In Estonia, winter provides only 6-7 hours of usable daylight. Cloud cover in Northern Europe averages 60-70%. A 5-day revisit satellite might only give you one cloud-free image per month in winter. This is why SAR matters — it works through clouds and at night.
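The arithmetic here is worth doing explicitly. Assuming, for illustration, a 70% chance that any given pass is clouded out and roughly six passes per month:

```python
# Rough collection-planning arithmetic under stated assumptions:
# 70% chance any given pass is clouded out, ~6 passes per month (5-day revisit).
p_clear = 0.30
passes_per_month = 6

expected_clear = passes_per_month * p_clear           # mean cloud-free images
p_zero_clear = (1 - p_clear) ** passes_per_month      # every pass clouded out
print(f"Expected cloud-free images per month: {expected_clear:.1f}")
print(f"Chance of a completely blank month:  {p_zero_clear:.0%}")  # ~12%
```

Under these assumptions you average fewer than two usable optical images a month, and roughly one month in eight yields nothing at all — which is the quantitative case for SAR.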

SAR has its own physics. SAR is side-looking — it images at an angle, not straight down. This creates geometric distortions: tall objects lean toward the sensor (layover), slopes facing the sensor are compressed (foreshortening), and areas behind tall objects are invisible (shadow). These aren’t bugs — they’re physics. A building in SAR looks nothing like a building in optical imagery. An analyst who doesn’t understand SAR geometry will make wrong measurements and false identifications.
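Under a flat-scene assumption with a single incidence angle, the layover and shadow effects for a vertical structure reduce to simple trigonometry (real SAR processing is far messier, but the sketch shows the scale of the distortion):

```python
import math

# Sketch of SAR geometric distortion for a vertical structure, assuming a
# flat scene and a single incidence angle measured from vertical.
def sar_displacements(height_m: float, incidence_deg: float):
    theta = math.radians(incidence_deg)
    layover = height_m / math.tan(theta)  # top displaced toward sensor (ground range)
    shadow = height_m * math.tan(theta)   # unilluminated ground behind the structure
    return layover, shadow

# A 30 m building at 35 deg incidence (within Sentinel-1 IW's typical range)
layover, shadow = sar_displacements(30, 35)
print(f"Layover: {layover:.0f} m toward sensor; shadow: {shadow:.0f} m behind")
```

A 30 m building smeared 40-odd meters toward the sensor, trailing a 20 m blind zone, is why measuring building footprints naively in SAR imagery produces wrong answers.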

Atmospheric effects change what you see. Blue light scatters more than red (Rayleigh scattering), which is why the sky is blue and why distant mountains look hazy. Water vapor absorbs specific wavelengths, creating “atmospheric windows” where sensors can operate. Aerosols (smoke, dust, pollution) reduce contrast and shift apparent reflectance. Two images of the same field on different days will have different pixel values purely due to atmospheric conditions — this is why radiometric normalization matters for change detection.

Orbit determines everything else. A sun-synchronous orbit at 700km altitude with a 290km swath width gives you a specific revisit time. You can compute it. You can predict exactly when a satellite will pass over a target. This is useful for collection planning — and for denial and deception by adversaries who know the same orbital mechanics.

import numpy as np
import matplotlib.pyplot as plt
 
# Compute ground track of a sun-synchronous satellite
# Simplified: circular orbit, Earth rotation
R_earth = 6371  # km
altitude = 700  # km (round number; Sentinel-2 actually orbits at ~786 km)
mu = 398600.4418  # km^3/s^2, gravitational parameter
 
r = R_earth + altitude
orbital_period = 2 * np.pi * np.sqrt(r**3 / mu)  # seconds
print(f"Orbital period: {orbital_period/60:.1f} minutes")
 
# Earth rotates 360 deg in 86164 seconds (sidereal day)
earth_rotation_rate = 360 / 86164  # deg/s
 
t = np.linspace(0, orbital_period * 3, 5000)  # 3 orbits
inclination = np.radians(98.6)  # sun-synchronous inclination
 
# Satellite position (simplified)
orbit_angle = 2 * np.pi * t / orbital_period
lat = np.degrees(np.arcsin(np.sin(inclination) * np.sin(orbit_angle)))
lon_inertial = np.degrees(np.arctan2(
    np.cos(inclination) * np.sin(orbit_angle),
    np.cos(orbit_angle)
))
lon = (lon_inertial - earth_rotation_rate * t + 180) % 360 - 180  # wrap to [-180, 180)
 
plt.figure(figsize=(14, 7))
plt.scatter(lon, lat, c=t/60, cmap="viridis", s=0.5)
plt.colorbar(label="Time (minutes)")
plt.xlabel("Longitude (deg)")
plt.ylabel("Latitude (deg)")
plt.title(f"Ground Track — Sun-Synchronous Orbit at {altitude}km")
plt.grid(True, alpha=0.3)
plt.xlim(-180, 180)
plt.ylim(-90, 90)
plt.tight_layout()
plt.savefig("ground_track.png", dpi=150, bbox_inches="tight")
plt.show()
 
print("Swath width: ~290 km (Sentinel-2)")
print(f"Successive ground tracks shift westward at the equator by "
      f"{earth_rotation_rate * orbital_period:.1f} degrees "
      f"≈ {earth_rotation_rate * orbital_period * 111:.0f} km")

Self-test: An adversary knows Sentinel-2 passes over their facility at approximately 10:30 local time every 5 days. How might they exploit this for denial and deception? What collection strategy counters this?


Pattern 5: Scale Determines Method

A 10m Sentinel-2 pixel covers 100m^2 of ground. At this scale, a football stadium is maybe 20 pixels across. A car is invisible. A forest is a green blob. The spatial scale of your question determines which data you need, which methods work, and which don’t.

Scale governs what you can detect, not just resolve. At 10m resolution you can detect a new building (it changes several pixels), but you can’t identify it as a hospital vs a warehouse. At 1m you can see the building footprint and roof structure. At 30cm you can see vehicles in the parking lot and antennas on the roof. Each scale answers different questions.
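One classic way to quantify "pixels on target" is the Johnson criteria, usually quoted in line pairs across a target's critical dimension, with roughly two pixels per line pair. The cycle counts below are the commonly quoted 50%-probability values, and the truck width is an assumed figure:

```python
# Pixels-on-target sketch using the classic Johnson criteria (line pairs
# across a target's critical dimension; ~2 pixels per line pair).
JOHNSON_CYCLES = {"detection": 1.0, "recognition": 4.0, "identification": 6.4}

def required_gsd(critical_dim_m: float, task: str) -> float:
    """Coarsest GSD (meters) that supports the task on this target."""
    return critical_dim_m / (2 * JOHNSON_CYCLES[task])

truck_width = 2.5  # m, assumed critical dimension
for task in JOHNSON_CYCLES:
    print(f"{task:>14}: GSD ≤ {required_gsd(truck_width, task):.2f} m")
```

The numbers land close to the rules of thumb in the table that follows: detecting a truck-sized object needs roughly meter-class imagery, while saying what kind of truck it is pushes well below half a meter.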

Methods must match scale:

Question                              Minimum Resolution   Method
Is the forest being cut?              10-30m               NDVI time series
Where is the flood?                   10-30m               NDWI threshold, SAR backscatter
Has a new building been constructed?  10m                  Change detection (pixel-based)
How many aircraft on the ramp?        <1m                  Manual interpretation or object detection
What type of vehicle?                 <0.5m                Object classification
Is the runway operational?            <1m                  Surface condition assessment
Ship detection (open ocean)           5-20m                SAR CFAR threshold
Ship classification                   <1m                  Visual features, length measurement

The scale ladder. Intelligence workflows often cascade through scales: use wide-area Sentinel-2 to detect a change, task commercial high-res to characterize it, then use very-high-res or aerial to identify specific objects. This “tip and cue” approach is how most national imagery agencies work. You don’t waste expensive commercial tasking on the whole world — you use free wide-area data to find where to look.
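The tip-and-cue ladder can be sketched as a trivial ranking step: cheap wide-area alerts in, prioritized high-resolution tasking out. The alert records and the ranking rule here are illustrative, not any agency's actual format:

```python
# Minimal tip-and-cue sketch: wide-area change alerts feed a ranked tasking
# list for expensive high-resolution collection. Fields are illustrative.
alerts = [
    {"site": "airfield_A", "change_area_m2": 1200, "priority": 3},
    {"site": "port_B",     "change_area_m2": 5400, "priority": 1},
    {"site": "depot_C",    "change_area_m2": 300,  "priority": 2},
]

# Rank by mission priority first (1 = highest), then by size of the change
tasking = sorted(alerts, key=lambda a: (a["priority"], -a["change_area_m2"]))
for rank, a in enumerate(tasking, 1):
    print(f"{rank}. task high-res over {a['site']} ({a['change_area_m2']} m2 changed)")
```

Real tasking queues weigh many more factors (weather forecasts, sensor availability, cost, target perishability), but the shape is the same: the free tier finds where to look, the expensive tier looks.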

import numpy as np
import matplotlib.pyplot as plt
 
# Demonstrate what you can see at different resolutions
# Simulate a military base at different GSD values
np.random.seed(42)
 
# Create a "truth" image at 0.5m resolution (high detail)
base_size = 400  # 200m x 200m area at 0.5m
truth = np.random.normal(0.3, 0.02, (base_size, base_size))  # background
 
# Buildings (10-20m footprint)
truth[50:90, 50:110] = 0.7   # large building
truth[150:180, 200:250] = 0.65  # another building
truth[250:270, 100:150] = 0.6   # smaller building
 
# Vehicles (2-5m)
for vx, vy in [(120, 60), (125, 60), (130, 60), (120, 66), (125, 66)]:
    truth[vy:vy+3, vx:vx+5] = 0.5  # row of vehicles
 
# Road (5m wide)
truth[195:200, 30:350] = 0.45  # E-W road
 
resolutions = [0.5, 2, 5, 10, 30]
fig, axes = plt.subplots(1, 5, figsize=(20, 4))
 
for ax, res in zip(axes, resolutions):
    factor = int(res / 0.5)
    if factor > 1:
        h, w = truth.shape
        new_h, new_w = h // factor, w // factor
        downsampled = truth[:new_h*factor, :new_w*factor].reshape(
            new_h, factor, new_w, factor
        ).mean(axis=(1, 3))
    else:
        downsampled = truth
    ax.imshow(downsampled, cmap="gray", vmin=0.2, vmax=0.8)
    ax.set_title(f"GSD = {res}m\n({downsampled.shape[0]}x{downsampled.shape[1]} px)")
    ax.axis("off")
 
plt.suptitle("Same 200m x 200m Area at Different Resolutions", fontsize=14)
plt.tight_layout()
plt.savefig("scale_comparison.png", dpi=150, bbox_inches="tight")
plt.show()

Self-test: A naval analyst asks you to count the number of submarines in a port and determine their class. What resolution do you need? What if they only need to know if “more than usual” submarines are present?


How These Patterns Connect

Every analytical decision you make in this course activates one or more of these patterns:

  • Choosing a data source → Pattern 1 (resolution tradeoff) + Pattern 4 (physics constraints)
  • Interpreting an image → Pattern 2 (context needed) + Pattern 5 (scale determines what’s visible)
  • Detecting activity → Pattern 3 (change is signal) + Pattern 2 (need baseline)
  • Planning collection → Pattern 4 (orbit, weather, lighting) + Pattern 1 (which resolution axis matters most)
  • Writing an assessment → Pattern 2 (don’t over-interpret) + Pattern 3 (what changed and what does it mean)

When you’re stuck on an analysis problem, come back to this page and ask: which pattern am I violating?


Further Reading


Next: Satellite Fundamentals