When you open an unprocessed RAW file: The colors are flat. The shadows are muddy. The highlights are blown. The camera captured the data — but it seemingly didn't capture the feeling.
A RAW developer is the tool that bridges that gap. It takes the raw sensor data and lets you shape it into the image you remember — or the image you imagine. Exposure, contrast, color, detail. Every adjustment brings the file closer to the photograph.
I've been building one for two or three months. Here's what I've learned about what happens between the sensor and the screen.
Why build your own?
The honest answer: because I had opinions.
After twenty years with Lightroom, I knew exactly what I wanted from a develop module. I knew which sliders I reached for on every image and which ones I'd never touched. I knew that the Highlights slider should feel like film shoulder compression, not a digital clamp. I knew that sharpening should only affect luminance, never color. I knew that grain should behave differently in shadows than highlights (because that's how real film works, and to me that looks more like what I expect).
These aren't features you can request from Adobe. They're convictions about how light should be treated — and they're subjective enough that no commercial product will ever perfectly match them.
So I did what only a «crazy» person would do: I spent a few months writing Metal shaders (with help from ChatGPT, Gemini, and Claude).
The pipeline
Every RAW developer has a pipeline — a fixed sequence of processing stages that the image flows through. The order matters: each stage builds on the result of the one before it. (The stage order is fixed, but you can adjust any control at any time — it still helps to know how the pipe works.)
Photo Developer's pipeline has 25 stages:
RAW decode comes first, because exposure and white balance operate on the actual sensor data — before any tone curve has been applied. This is why exposure recovery in RAW is so much cleaner than adjusting a JPEG: you're reaching into information that was captured but not yet rendered.
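The difference is easy to see in code. A sketch, not the app's actual decode path: on linear sensor data, exposure is a pure multiply, while on gamma-encoded data the same operation distorts tones unless you linearize first.

```cpp
#include <algorithm>
#include <cmath>

// Exposure compensation on linear sensor data is a pure multiply:
// +1 EV doubles every value, and channel ratios are preserved, so
// white balance and color stay intact.
float applyExposureLinear(float linearValue, float ev) {
    return linearValue * std::exp2(ev);
}

// The same adjustment on gamma-encoded data must round-trip through
// linear space, and anything pushed past 1.0 clips hard, which is why
// exposure recovery on a rendered JPEG is so much cruder.
// (A pure 2.2 gamma stands in for the real sRGB transfer curve here.)
float applyExposureSRGB(float srgbValue, float ev) {
    float linear = std::pow(srgbValue, 2.2f);
    linear *= std::exp2(ev);
    return std::min(1.0f, std::pow(linear, 1.0f / 2.2f));
}
```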
Noise reduction comes before tonal adjustments, because lifting shadows or adding contrast amplifies noise. Clean the signal first, then shape it.
Tone — the big block. Contrast, highlights, shadows, whites, blacks, tone curves, and a tone equalizer that can independently adjust five brightness zones. This is where most of the creative work happens. The principle is progressive specificity: broad adjustments first (contrast), then shaped response (curves), then surgical targeting (tone equalizer).
Color comes after tone is settled. Global vibrance and saturation, then an eight-channel mixer that lets you shift individual color families independently. Want to deepen a blue sky without affecting skin tones? That's what selective color is for.
Local adjustments apply after the global look is established. Gradients, brushes, AI-powered masks for people and skies, luminosity and color range selections. Each carries its own set of adjustments — a mini develop module applied only where you paint.
Presence — texture, clarity, dehaze. These are mid-frequency contrast tools that add dimension and depth. They come late because they should operate on the finished tonal balance, not fight with it.
Detail — sharpening and shadow detail recovery. Sharpening comes near the end because every upstream operation can blur or shift detail. Sharpen the finished image, not an intermediate.
Effects — bloom, diffuse glow, vignette, grain. Creative finishes that should be the last thing applied.
The order isn't arbitrary. It follows a principle: broad before specific, corrective before creative. And it's fixed — no exceptions, no reordering. This makes the pipeline predictable. Same settings, same image, same result. Every time.
Why Metal matters
Every custom filter in Photo Developer runs on the GPU via Apple's Metal framework. This isn't a technical flex — it's a practical necessity.
When you drag a slider, the entire pipeline re-renders. All 25 stages. For that to feel responsive — to feel like you're directly manipulating the image rather than requesting a change and waiting — the render needs to complete in about 16 milliseconds. That's one frame at 60fps.
CPUs can't do this. A modern laptop CPU might process a 40-megapixel RAW file through 25 filter stages in a few seconds. That's fine for batch export, but it's miserable for interactive editing. The delay between moving a slider and seeing the result breaks the creative flow. You stop exploring. You start making careful, deliberate adjustments — and careful deliberation is the enemy of the intuitive, experimental process that good editing requires.
GPUs can do this because they're massively parallel. Where a CPU processes pixels sequentially (or in small batches), a GPU processes thousands simultaneously. A tone curve that takes a CPU a hundred milliseconds to apply across a 40-megapixel image takes the GPU a fraction of a millisecond.
The trick is proxy rendering. During editing, the image is scaled down to a manageable preview size — 1280, 1920, or 2560 pixels on the long edge. The GPU processes this proxy through the full pipeline in about 16ms. When you zoom to 100% for detail inspection, or when you export, the full-resolution image is rendered. This is slower — but you only do it once, when you're done editing.
The result: even on a laptop, slider movements feel instant. The image responds to your input without perceptible delay. This changes how you work.
19 custom shaders
Alongside Apple's built-in image processing filters, Photo Developer uses 19 custom Metal compute kernels — small GPU programs written for specific image processing tasks.
Some examples of what these do and why they need to exist:
Luminance-only sharpening. Standard sharpening enhances edges in all channels — red, green, and blue. This creates colored halos at high-contrast boundaries. The custom sharpening kernel converts to a luminance representation, sharpens only the brightness information, and leaves color untouched. Cleaner results, especially at aggressive settings.
Guided filter noise reduction. Consumer noise reduction tends to smear detail along with noise. The guided filter is an edge-preserving technique: it smooths noise in flat areas while respecting edges and fine detail. The implementation operates in YCbCr color space — reducing color noise aggressively (where it's most visible) while being gentle with luminance detail (where sharpness lives).
Film-accurate grain. Real film grain isn't uniform. It's more visible in midtones, less in deep shadows and bright highlights. Faster film stocks show more grain in shadows. The grain kernel models all of this: luminance-masked intensity, a shadow bias control, fine/coarse spatial frequency blending, and even a warmth parameter that tints the grain to simulate the color of silver halide crystals. Five controls for something most apps reduce to a single slider.
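A sketch of what a luminance mask for grain can look like. The quadratic falloff and the bias mapping here are illustrative, not the app's exact model:

```cpp
// Luminance-masked grain weight: a hump that peaks in the midtones and
// falls off toward deep shadows and bright highlights, like
// silver-halide grain. `shadowBias` in [-1, 1] shifts the peak toward
// shadows (+) to mimic faster film stocks.
float grainWeight(float luma, float shadowBias) {
    float peak = 0.5f - 0.25f * shadowBias;  // where grain is strongest
    float x = (luma - peak) / 0.5f;          // normalized distance from peak
    float w = 1.0f - x * x;                  // quadratic falloff
    return w < 0.0f ? 0.0f : w;
}

// A grain pass would then do something like:
//   out = in + grainWeight(luma, bias) * amount * noiseSample
// with noiseSample blended between fine and coarse frequencies.
```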
Tone equalizer. Five luminance zones — shadows through highlights — each independently adjustable in stops of light, with smooth cosine blending at the boundaries so there are no visible seams. This is the tool that lets you darken a bright sky without equally darkening everything else at similar brightness. Each zone adjustment is multiplicative in exposure value, which preserves color ratios — no hue shifts when pushing zones hard.
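Here is one way such zone blending can be sketched. The zone centers and raised-cosine weights below are assumptions, not Photo Developer's exact layout, but they show why the result is seamless and hue-stable:

```cpp
#include <cmath>

constexpr int kZones = 5;
constexpr float kPi = 3.14159265358979f;

// Raised-cosine weight for one zone. Zones are centered at
// 0, 0.25, 0.5, 0.75, 1.0 in luminance; adjacent zones cross-fade so
// the weights sum to 1 everywhere, leaving no visible seams.
float zoneWeight(float luma, int zone) {
    float center = zone / float(kZones - 1);
    float d = std::fabs(luma - center) * (kZones - 1);
    if (d >= 1.0f) return 0.0f;               // outside this zone's reach
    return 0.5f * (1.0f + std::cos(kPi * d)); // smooth cosine falloff
}

// Each zone holds an adjustment in stops (EV). The final gain is
// multiplicative (2^EV), scaling R, G and B equally, which preserves
// color ratios: no hue shifts even when zones are pushed hard.
float toneEqGain(float luma, const float evPerZone[kZones]) {
    float ev = 0.0f;
    for (int z = 0; z < kZones; ++z)
        ev += zoneWeight(luma, z) * evPerZone[z];
    return std::exp2(ev);
}
```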
These effects could be approximated with Apple's built-in filters, but the results would be compromised. Built-in filters are general purpose. Custom kernels can make assumptions about exactly what they're trying to achieve — and those assumptions make the output better.
Honest controls
There's a philosophy embedded in the slider design: every control does exactly one thing, and what it says it does.
No "AI enhance" that changes behavior depending on the image. No hidden processing that fights your adjustments. When you move Highlights to -40, you get a predictable tone curve compression with a film-like quadratic rolloff. Same input, same output, every time.
This might sound obvious. It isn't. Modern photo editors increasingly rely on machine learning to make decisions for you. "Scene detection" that applies different processing to skies versus skin. "Adaptive" sliders that change their behavior based on content analysis. These can produce good results — but they're unpredictable. You lose the ability to build intuition about what a control does, because what it does changes.
Photo Developer takes the opposite approach. The pipeline is deterministic. The sliders are honest. You develop an intuition for how +30 clarity feels, and that intuition holds across every image you edit.
There is a "Smart Develop" feature that analyzes the image and sets a starting point. But it sets visible sliders to concrete values. You can see exactly what it did, agree or disagree with each choice, and adjust from there. The machine assists; you decide.
Everything is a number
Every adjustment in Photo Developer is parametric — it's a number, not a pixel operation. Move a slider, and the app records "Highlights: -40." It doesn't modify the image data. It stores the instruction.
This means everything is reversible. Everything is copyable. Everything is portable.
The instructions are stored in XMP sidecar files — plain text XML files, the same format Adobe invented for Lightroom. Open one in a text editor and you can read your settings. Move the file to another computer and the settings follow. Stop using Photo Developer and the settings remain, readable by any tool that understands XMP.
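A hypothetical sidecar fragment shows the idea. The property names follow Adobe's Camera Raw XMP namespace; the values are made up:

```xml
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:crs="http://ns.adobe.com/camera-raw-settings/1.0/"
    crs:Exposure2012="+0.35"
    crs:Highlights2012="-40"
    crs:Shadows2012="+25"/>
 </rdf:RDF>
</x:xmpmeta>
```

Every edit is just an attribute like these: readable, diffable, and portable.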
This is the same principle behind leaving Lightroom's catalog model: the files are the source of truth. Your creative decisions travel with your photos, not locked inside an application database.
What's deliberately not here
A RAW developer that tries to do everything ends up doing nothing particularly well. Photo Developer makes deliberate omissions:
No library or catalog. A RAW developer and a photo library have fundamentally different UI needs. Combining them means compromising both. Photo Developer opens files, you edit them, you export. For organization — rating, keywords, collections, browsing — use a dedicated tool.
No layers. Layers are a compositing paradigm. A RAW developer's job is parametric adjustment — numbers, not pixels. Local adjustments with masks provide targeted control without layer management complexity.
No AI auto-everything. No sky replacement. No object removal. No neural style transfer. These are legitimate tools, but they belong in a different kind of application. Photo Developer is about your relationship with the light that was actually captured.
The discipline of omission is harder than the discipline of addition. Every feature you don't build is a question you don't ask the user. And every question you don't ask is cognitive space preserved for the ones that matter.
The XMP contract
Photo Developer doesn't exist in isolation. It's part of a suite of apps that share metadata through XMP sidecars.
The RAW developer writes Camera Raw-compatible develop settings. The photo archive reads them to show develop badges and detect staleness. A double-exposure blending tool writes its own namespace. A kaleidoscope transform tool writes its own namespace. Each app reads what it needs, writes what it owns, and preserves everything else byte-for-byte.
There's no sync service. No central database. No server coordinating the apps. Just XML files on disk, with a clear contract about who writes what.
This only works because the contract is strict. Each app owns its namespace and never touches another's. The develop settings namespace belongs to the RAW developer. The rating and keywords namespace belongs to the archive. When the archive writes a rating, it preserves all develop settings. When the developer saves a crop, it preserves all keywords. The sidecars grow with use — accumulating the creative history of the image across every tool that touches it.
It's not the most efficient architecture. Parsing XML on every read is slower than querying a database. But it's resilient. Any app can be replaced without affecting the others. Any file can be moved without breaking the chain. The metadata goes where the photo goes.
The 80% philosophy
Photo Developer doesn't try to replace Lightroom feature-for-feature. It targets the 80% of develop work that photographers actually do, and tries to do it better than anything else.
That means the tone controls are deeply considered. The color tools are precise. The sharpening is technically superior. The grain simulation is obsessively accurate. The local adjustments cover the real use cases — dodge, burn, selective color, sky darkening, skin smoothing.
What it doesn't have: tethered shooting, panorama stitching, HDR merge, face recognition, cloud sync, mobile editing. These are the 20% — valuable features, but features that dilute focus if you try to ship them all.
The bet is that photographers who care deeply about the develop experience — the feel of the sliders, the quality of the rendering, the honesty of the controls — will accept a focused tool over a comprehensive one.
Weniger, aber besser. Less, but better.
Photo Developer is part of the Photo Suite — a collection of focused tools for the complete photography workflow on macOS.