Technical Architecture

Under the Hood

How twenty custom GPU shaders, a deterministic pipeline, and a refusal to hide anything combine to make a RAW editor that respects the photographer.

25 pipeline stages
20 custom Metal shaders
60 fps proxy rendering
0 cloud dependencies
The pipeline

Order is ideology

Every RAW developer has a processing pipeline — a fixed sequence of stages that the image flows through. Most don't talk about it. Photo Developer's pipeline has 25 stages, and their order isn't arbitrary. It's a statement about how editing should work: broad before specific, corrective before creative.

Demosaic and white balance come first — you need correct color before you adjust it. Exposure and tone come before color grading — you need the tonal foundation before you paint on it. Sharpening and grain come last — they're output effects that should see the final image. Every stage has a reason to be where it is, and you can trace the logic from start to finish.

"Same input, same output, every time. No hidden AI fighting your adjustments."

This is what makes the pipeline deterministic. Move Highlights to −40 and you get a predictable tone curve compression with film-like quadratic rolloff. Not "approximately −40, adjusted for scene content." Not "smart highlight recovery that varies per image." The number means what it says. The slider never lies about its value.

RAW Decode → White Balance → Geometry → Exposure & Tone → Tone Equalizer → Color Grading → HSL → Curves → Local Adjustments → Effects → Detail → Grain → Display P3

Highlighted stages run on custom Metal shaders. Remaining stages use Apple's CIRAWFilter and Core Image.
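The fixed ordering can be sketched as data: stages live in an ordered list, and every render walks the same list. This is a simplified single-value illustration (the real pipeline has 25 stages and operates on GPU textures), with stage names taken from the diagram above and placeholder identity transforms.

```cpp
#include <string>
#include <vector>

// Simplified sketch: a pipeline is an ordered list of stages, and
// rendering always walks the list in the same order. Determinism
// falls out of the structure -- there is no reordering, no branching
// on scene content. The `identity` transforms are placeholders.
struct Stage {
    std::string name;
    float (*apply)(float);
};

float identity(float v) { return v; }

std::vector<Stage> buildPipeline() {
    return {
        {"RAW Decode", identity},        {"White Balance", identity},
        {"Geometry", identity},          {"Exposure & Tone", identity},
        {"Tone Equalizer", identity},    {"Color Grading", identity},
        {"HSL", identity},               {"Curves", identity},
        {"Local Adjustments", identity}, {"Effects", identity},
        {"Detail", identity},            {"Grain", identity},
        {"Display P3", identity},
    };
}

float renderPixel(float v, const std::vector<Stage>& pipeline) {
    for (const auto& s : pipeline) v = s.apply(v);  // fixed order, always
    return v;
}
```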

Performance

Instant feedback

CPUs can't render a 45-megapixel RAW file through 25 processing stages in real time. The delay between moving a slider and seeing the result is measured in seconds — and that delay breaks creative flow. You stop exploring. You start guessing.

Photo Developer solves this with proxy rendering. During editing, images are processed at a configurable lower resolution — large enough for the display, small enough to keep the GPU ahead of your input. Every filter runs as a Metal compute kernel: one dispatch, one result, no CPU round-trips. At 100% zoom or export, the pipeline switches seamlessly to full resolution.
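The resolution switch can be sketched as a simple size computation. The cap value and rounding here are assumptions, not the app's published numbers; the point is the shape of the logic: cap the longest edge during editing, go full-resolution at 100% zoom or export.

```cpp
#include <algorithm>

struct RenderSize { int width, height; };

// Sketch of proxy-size selection (the specific cap is an assumption).
// During editing the longest edge is capped; at 100% zoom or export
// the full sensor resolution is used.
RenderSize proxySize(int fullW, int fullH, int maxEdge,
                     bool fullResolution /* 100% zoom or export */) {
    if (fullResolution) return {fullW, fullH};
    int longest = std::max(fullW, fullH);
    if (longest <= maxEdge) return {fullW, fullH};   // already small enough
    double scale = static_cast<double>(maxEdge) / longest;
    return {static_cast<int>(fullW * scale + 0.5),   // round to nearest
            static_cast<int>(fullH * scale + 0.5)};
}
```

A 45-megapixel frame (8256 × 5504) capped at a 2560-pixel edge renders 2560 × 1707 proxies — roughly a tenth of the pixels, which is what keeps every slider movement under the frame budget.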

The result: slider movements feel instant. The image responds to your input without perceptible delay. This isn't a nice-to-have — it changes how you work. You try things you wouldn't try if each experiment cost you two seconds of staring at a progress bar.

Breakthrough

Shadow Detail Recovery

Most shadow recovery tools lift exposure globally — brightening noise along with detail, flattening the image. Photo Developer's ShadowDetailKernel uses frequency separation with shadow-weighted masking to recover texture in dark regions without raising brightness.

1. Extract high-frequency: original − blurred = texture map
2. Shadow mask: targets only dark regions
3. Add back texture: preserves the tonal map

Shadow regions regain microcontrast and perceived detail without the flat, lifted look or amplified noise. Dark clothing reveals texture. Underexposed portraits come back to life. The tonal map stays exactly where you put it.

// ShadowDetailKernel.ci.metal — frequency separation, shadow-weighted
float luminance = dot(original.rgb, float3(0.2126, 0.7152, 0.0722));
float shadowMask = smoothstep(0.4, 0.0, luminance);
float3 highFreq = original.rgb - blurred.rgb;
float3 result = original.rgb + highFreq * shadowMask * amount;
Optical simulation

Film halation that feels photographic

CineStill 800T is famous for its red glow around bright lights — caused by the missing anti-halation layer in the original cinema film stock. Every photo editor with a "bloom" slider tries to approximate this. Most produce a generic digital glow that looks nothing like the real thing.

Photo Developer's BloomKernel simulates the actual optical phenomenon: veiling flare. Real lens coatings scatter light and reduce local contrast around bright areas. The kernel extracts bright pixels with a soft threshold, applies a separable Gaussian blur on the GPU, then composites the result using a veiling flare model combined with a soft-knee highlight rolloff. The result is glow that wraps into the surrounding image the way light actually behaves through glass — not a Photoshop outer glow.

"The key difference: we simulate veiling flare — the way real lens coatings scatter light and reduce local contrast."
Breakthrough

Spot removal without the artifacts

Traditional clone tools use a multi-step pipeline: sample the source, feather the edges, composite onto the target. Each step introduces potential artifacts — edge halos, color shifts, brightness mismatches across gradients. Tuning parameters to minimize these issues is a losing game.

The SpotCloneKernel runs the entire operation in a single Metal compute pass. Smoothstep feathering at patch edges, luminance matching that compensates for brightness gradients automatically. One dispatch, zero intermediate buffers, no accumulated error.

1. Edge-aware sampling: source extraction in one pass
2. Smoothstep feather: soft circular blend at edges
3. Luminance match: auto brightness compensation

Clean cloning across brightness gradients — sky, skin, anything with tonal variation. Spots are saved to XMP sidecars, so the workflow stays non-destructive. Undo is instant.

Optical simulation

Grain that knows where it belongs

Most grain implementations add uniform noise across the image and call it done. Real film grain is luminance-dependent — it's more visible in shadows, finer in highlights, and it has a physical warmth that comes from the chemical process. Photo Developer's GrainKernel models this with five controls instead of one.

Amount sets intensity. Roughness controls the texture. Size scales the grain particles. Shadow Bias concentrates grain in darker tones, mimicking how silver halide crystals actually respond to light. Warmth adds a subtle tint that moves grain from clinical to organic. The grain is applied in luminance only — no color noise, no chromatic artifacts.

The difference is subtle in screenshots. It's unmistakable in prints.

Transparent AI

Intelligence that shows its work

Modern photo editors increasingly rely on machine learning to make decisions for you. The results can be good — but they're unpredictable. "Auto" produces a different interpretation every time, and you can't see what it changed or why. The algorithm becomes a collaborator you can't talk to.

Smart Develop takes a different approach. It analyzes the scene across 14 dimensions — portrait, landscape, golden hour, street, macro, backlit, and more — using Apple Vision's scene classification combined with EXIF data. Then it sets visible sliders to concrete values. Exposure to +0.3. Highlights to −25. Vibrance to +12. You can see exactly what it did, agree or disagree, and adjust from there.

"The machine assists; you decide."

Auto Enhance works the same way. It detects faces, backlighting, and composition, then adjusts exposure, contrast, and tone — always through the same parametric sliders you'd use yourself. The values are deterministic. Same image, same analysis, same starting point.
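The contract — classification in, concrete slider values out — can be sketched as a pure lookup. The Exposure +0.3 / Highlights −25 / Vibrance +12 triple comes from the text above; which scene it belongs to, and the other rows, are illustrative assumptions.

```cpp
#include <map>
#include <string>

struct Adjustments { float exposure = 0, highlights = 0, vibrance = 0; };

// Sketch of the deterministic mapping: a scene classification yields
// fixed slider values, which land on ordinary, editable sliders.
// The per-scene numbers are assumptions, not the shipped presets.
Adjustments smartDevelop(const std::string& scene) {
    static const std::map<std::string, Adjustments> presets = {
        {"backlit",    { +0.3f, -25.0f, +12.0f }},
        {"goldenHour", { +0.1f, -15.0f,  +8.0f }},
        {"landscape",  {  0.0f, -20.0f, +10.0f }},
    };
    auto it = presets.find(scene);
    return it != presets.end() ? it->second : Adjustments{};
}
```

A pure function of the analysis means the same image always lands on the same starting point — and because the output is slider values rather than pixels, disagreeing with it costs one drag.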

Workflow

Of course you can drag the histogram

Every photo editor shows you a histogram. Photo Developer lets you reach in and grab it. Drag the left edge (0–10%) to adjust Blacks. Drag the right edge (90–100%) to adjust Whites. The slider in the inspector moves in sync. It's direct manipulation — the most natural way to set your tonal boundaries.

Target Adjustment works the same way for everything else. Click on the image and drag up or down. A dark pill near the cursor shows which parameter you're affecting — "Tone EQ · Lights" or "HSL · Orange Saturation." The relevant slider highlights in the inspector. You're adjusting the photograph, not hunting through panels.

"Point at what you want to change. The app figures out which slider that is."
No catalog

Your files are the database

Every adjustment, every rating, every keyword lives in an XMP sidecar file — a small XML text file sitting next to your image. Delete the app and your creative decisions still exist as readable text. Standard adjustments use industry-standard XMP fields any tool can read. The non-standard ones (bloom, tone equalizer, spot removals) live in Photo Developer's own pd: namespace — documented, open, never encrypted.

Namespace | What it stores | Compatibility
crs: | Camera Raw Settings — exposure, tone, color, detail | Adobe compatible
pd: | Bloom, Tone EQ, Local Adjustments, Spot Removals, Snapshots | Photo Developer
xmp: / dc: / lr: | Ratings, keywords, pick status, collections | Preserved on save
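A sidecar excerpt might look like the fragment below. The crs: attribute names follow Adobe's published Camera Raw schema; the pd: attribute name and namespace URI are illustrative placeholders, not the app's documented schema.

```xml
<!-- Hypothetical sidecar excerpt. crs: fields follow Adobe's Camera Raw
     schema; the pd: names and namespace URI are illustrative. -->
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
        xmlns:pd="http://photodeveloper.example/pd/1.0/"
        xmlns:xmp="http://ns.adobe.com/xap/1.0/"
        crs:Exposure2012="+0.30"
        crs:Highlights2012="-25"
        crs:Vibrance="+12"
        pd:BloomIntensity="0.35"
        xmp:Rating="4"/>
  </rdf:RDF>
</x:xmpmeta>
```

Everything is attribute-value text: any XMP-aware tool reads the crs: and xmp: fields, and the pd: fields degrade gracefully to inert, preserved metadata.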

The app runs in the macOS sandbox. It only sees files you explicitly open — no full disk access, no background indexing, no "phone home" license checks. Zero telemetry. Works entirely offline, now and in ten years.

Architecture

Pure Apple stack

No Electron. No cross-platform abstractions. No web views pretending to be native. The entire application is SwiftUI and AppKit talking directly to Metal, Vision, and Core Image. The rendering context is created once and reused. Intermediate results are cached. State management uses Swift's native @Observable macro — no third-party reactive framework.

User Interface: SwiftUI + AppKit
State Management: @Observable + DocumentManager
Image Processing: CIRAWFilter + Custom CIKernels
GPU Compute: Metal Shading Language
AI / Vision: Vision.framework
Persistence: XMP Sidecar (open standard)
"The kill list is longer than the feature list. The reason it hangs together is the thirty-something things it doesn't do."