After years of using Linux on my desktop, I decided to install Windows on my computer, to get access to a few commercial photo editing applications. I'll go into my grievances with Linux later, but for now:
I tried to install Windows, you won't believe what happened next
Like I have done many times with Linux, I download a Windows image from my university, write it to a USB drive, then reboot into the USB drive. The USB drive can't be booted. A quick internet search leads me to a Microsoft Support page on how to install Windows from a USB flash drive, which says that simply writing the image to the stick does not work.
Instead, one has to format the stick as FAT32, mark it active, then copy the files from the image onto it. So I follow the instructions and open the Disk Management program. It offers neither a FAT32 option nor a way to make the partition active. I settle on (inactive) exFAT instead. It doesn't boot.
I switch over to Linux, where I can indeed make a FAT32 partition, and I can mark it as bootable, which I take as the equivalent of active. But Linux cannot open the Windows image to copy the files onto the USB stick. So back to Windows, for copying the files. Except they can't be copied, because some of them are larger than 4 GB, which exceeds the maximum file size on a FAT32 partition. What now?
While researching how to download a different version of Windows 10, I stumble upon the Media Creation Tool, which automatically downloads Windows 10 and writes it to the USB stick correctly. Why was this not pointed out in the article above? Who knows. At any rate, it works. I can finally install Windows.
The installation process requires the usual dozen-or-so refusals of tracking, ads, privacy intrusions, and voice assistants. I wish I could simply reject them all at once. And then the install hangs, while "polishing up a few things". Pressing the helpful back button, and then immediately the forward button unhangs it, and the installation completes.
Next up are drivers. It feels anachronistic to have to install drivers manually in this day and age, but oh well. The new GPU driver to make screen tearing go away, a driver for my trackball to recognize the third mouse button, a Wacom driver, ten or so Intel drivers of unknown utility. The trackball driver is not signed. I install it anyway. The GPU driver does not recognize my GPU and can't be installed. A quick internet search reveals that my particular AMD/Intel GPU/CPU was dropped from support by both AMD and Intel, and does not have a current driver. But fora suggest that AMD drivers up to version 20.2.1 work fine. They don't; the driver crashes when I open images in my photo editor. An even older version published by Intel does work correctly. So now I am running an AMD GPU with an Intel driver from 2018.
Installing and setting up Firefox and my photo editors works without issue, thank goodness. Emacs has a Windows installer now, which is greatly appreciated. OpenCL and network shares just work. This is why I'm installing Windows next to my Linux.
But Windows is still not activated. I copy my university's product key into the appropriate text box, but hesitate: that's for Windows Enterprise, and I'd be just fine with Home. So I cancel without activating. A helpful link in the activation system sends me to the Microsoft Store to get my very own copy of Windows Home for €145, which normally retails for around €95, so that's a no-go. Whatever, I'll go with my university's Enterprise edition. Except the activation box now says my product key is invalid. And the Store now literally says "We don't know how you got here but you shouldn't be here" instead of selling me Windows. After a restart, Windows Enterprise installs and activates, even though I never actually completed the activation.
I install Git, but in order to access my Github I need to copy over my SSH key from the Linux install. Which I can't boot at the moment, because installing Windows overwrites the boot loader. This is normal. So I download Ubuntu, write it to the USB stick, boot into it, recover the bootloader, boot into my old install, reformat the stick, copy the files to the stick, boot back into Windows, and the files aren't on the stick. Tough. Boot back into Linux, copy the files onto the stick, eject the stick, boot back into Windows, copy the files to the computer. Great user experience.
Now that I have my SSH key, I open a Git Bash to download a project. It says my credentials are incorrect. I execute the same commands in a regular CMD instead of Git Bash, and now my credentials are correct. Obviously.
There are several programs that claim to be able to read Linux file systems from Windows. They do not work. But Microsoft has just announced that you will be able to mount Linux file systems from WSL in a few weeks or months. So maybe that will work!
I set my lock screen to a slideshow of my pictures. Except my pictures do not show up, and I get to see Windows' default pictures instead. An internet search reveals that this is a widespread problem. Many "solutions" are offered in the support fora. What works for me is to first set the lock screen to "Windows Spotlight", then to "Slideshow". Only in that order will my pictures be shown.
I will stop here. I could probably go on ad infinitum if I wanted to. This was my experience of using Windows for one day. I consider these problems relatively benign, in that all of them had solutions, if non-obvious ones.
Why install Windows in the first place?
Part of the reason for installing Windows was my growing frustration with Linux. I have been a happy user of KDE in various flavors for about seven years now. But ever since I got into photo editing, things became problematic:
My photo editor requires OpenCL, but the graphics driver situation on Linux is problematic, to say the least. I usually managed to get ROCm running, but kernel updates frequently broke it, or required down- or upgrading ROCm. It was a constant struggle.
I wanted to work with some of my data on a network share, but KDE's implementation of network shares does not simply mount them for applications to use; instead, each application has to be able to open network locations on its own. Needless to say, this almost never worked, requiring many unnecessary file copies. Perhaps Gnome handles network shares better, but let's not open that can of worms.
Printing photos simply never worked right for me. The colors were off, photo papers were not supported, the networked printer was rarely recognized. This was true for a Samsung printer, an Epson, and a Canon. One time, a commercial printer driver for Linux printed with so much ink that it dripped off the paper afterwards. Neither Darktable nor Gimp nor Digikam has a robust printing mode. I generally resorted to Windows for printing.
I ran that Windows in a virtual machine. With Virtualbox, the virtual machine would be extremely slow, to the point where it had a delay of several seconds between typing and seeing letters on the screen. VMWare did better, but would suddenly freeze and hang for minutes at a time. Disabling hugepages helped sometimes, for a short while. The virtual machine network was extremely unreliable. Some of these issues were probably related to my using a 4K screen.
Speaking of screens, I have two screens, one HighDPI 4K and one normal 1440p. Using X, the system can either be in HighDPI mode or in normal mode, but it can't drive the two displays in different modes. Thus the second monitor was almost useless, and I generally worked only on the 4K screen. With Wayland I would have been able to use both screens in different modes, but not to color-calibrate them or record screencasts, which is completely unacceptable. So I stuck with one screen and X. In Windows, I can use both screens and calibrate them.
Additionally, Linux hardware support is still a bit spotty. My SD card reader couldn't read some SD cards because of driver issues. It would sometimes corrupt the SD card's file systems. USB-connected cameras were generally not accessible. The web cam did not work reliably. The CPU fan ran too hot most of the time.
So there had been numerous grievances in Linux that had no solutions. Still I stuck with it, because so many of the smaller issues were actually fixable if I put in the work. In fact, I had accumulated quite a number of small hacks and scripts for various issues. I feared that Windows would leave me without recourse in these situations, and indeed it does. But at least the bigger features generally work as advertised.
Where do we go from here?
Just for completeness' sake, I should really find an Apple computer and run it through its paces. From my experience of occasionally using a Macbook for teaching over the last few years, I am confident that it fares no better than Linux or Windows.
Were things always this broken? How are normal people expected to deal with these things? No wonder every sane person now prefers a smartphone or tablet to their computers. Limited as they may be, at least they generally work.
There is no joy in technology any more.
Assembling scientific documents is a complex task. My documents are a combination of graphs, data, and text, written in LaTeX. This post is about combining these elements and keeping them up to date, while not losing your mind. My techniques work on any Unix system: Linux, macOS, or the WSL.
For engineering or science work, my deliverables are PDFs, typically rendered from LaTeX. But LaTeX is not the most pleasant of writing environments. So I tried my hand at org-mode and Markdown, compiled them to LaTeX, and then to PDF. In general, this worked well, but there always came a point where the abstraction broke, and the LaTeX leaked up the stack into my document. At that point, I'd essentially be writing LaTeX anyway, just with a different syntax. After a few years of this, I decided to cut out the middleman, bite the bullet, and just write LaTeX.
That said, modern LaTeX is not so bad any more: XeLaTeX supports normal OpenType fonts, mixed languages, proper Unicode, and natively renders to PDF. It also renders pretty quickly: my entire dissertation renders in less than three seconds, which is plenty fast for me.
To render, I run a simple makefile in an infinite loop that recompiles my PDF whenever the TeX source changes, giving live feedback while writing:
diss.pdf: diss.tex makefile $(graph_pdfs)
	xelatex -interaction nonstopmode diss.tex
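The infinite loop can live in the makefile as well. Here is a minimal sketch; the watch target name is my own, and a polling sleep is the crudest possible trigger (tools like inotifywait, or latexmk -pvc, would react to file changes directly). Make's own timestamp checks keep the repeated rebuilds cheap:

```make
# Hypothetical convenience target: rebuild the PDF in an endless loop.
# make skips the xelatex run whenever diss.tex is unchanged.
watch:
	while true; do $(MAKE) diss.pdf; sleep 2; done
```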
We'll get back to $(graph_pdfs) in a second.
A major challenge in writing a technical document is keeping all the source data in sync with the document. To make sure that all graphs are up to date, I plug them into the same makefile as above, but with a twist: all my graphs are created from Python scripts of the same name in the graphs directory. But you don't want to simply execute all the scripts in graphs, as some of them might be shared dependencies that do not produce PDFs. So instead, I only execute scripts that start with a chapter number, which conveniently sorts them by chapter in the file manager, as well.
Thus all graphs render into the main PDF and update automatically, just like the main document:
graph_sources = $(shell find graphs -regex "graphs/[0-9]-.*\.py")
graph_pdfs = $(patsubst %.py,%.pdf,$(graph_sources))

graphs/%.pdf: graphs/%.py
	cd graphs; .venv/bin/python $(notdir $<)
The first two lines build a list of all graph scripts in the graphs directory, and their matching PDFs. The last two lines are a makefile recipe that compiles any graph script into a PDF, using the virtualenv in graphs/.venv/. How elegant these makefiles are, with recipe definitions independent of targets.
This system is surprisingly flexible, and absolutely trivial to debug. For example, I sometimes use those graph scripts as glorified shell scripts, for converting an SVG to PDF with Inkscape or some similar task. Or I compile some intermediate data before actually building the graph, and cache it for later use. Just make sure to set an appropriate exit code in the graph script, to signal to the makefile whether the graph was successfully created. An additional makefile target graphs: $(graph_pdfs) can also come in handy if you want to ignore the TeX side of things for a bit.
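For illustration, a graph script in that style might look like the following sketch. The file name (graphs/2-spectrum.py) and the plotting calls are hypothetical; the important parts are that the script writes its PDF next to itself, matching the makefile's pattern rule, and reports failure through its exit code so make can react:

```python
import sys
from pathlib import Path

def output_path(script: str) -> Path:
    # 2-spectrum.py -> 2-spectrum.pdf, matching the makefile's pattern rule
    return Path(script).with_suffix(".pdf")

def main() -> int:
    out = output_path(__file__)
    try:
        # The plotting library is an assumption; any PDF-producing code works.
        import matplotlib.pyplot as plt
        fig, ax = plt.subplots()
        ax.plot([1, 2, 3], [1, 4, 9])
        fig.savefig(out)
    except Exception as err:
        print(f"error: {err}", file=sys.stderr)
        return 1  # a non-zero exit code makes the makefile rule fail
    return 0 if out.exists() else 1

if __name__ == "__main__":
    sys.exit(main())
```

If the script raises anywhere, make sees a non-zero exit code, keeps the old PDF target dirty, and reports the failed rule.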
All of the graph scripts and TeX are of course backed by a Git repository. But my dissertation also contains a number of databases that are far too big for Git. Instead, I rely on git-annex to synchronize data across machines from a simple WebDAV host.
To set up a new writing environment from scratch, all I need is the following series of commands:
git clone git://mygitserver/dissertation.git dissertation
cd dissertation
git annex init
env WEBDAV_USERNAME=xxx WEBDAV_PASSWORD=yyy git annex enableremote mywebdavserver
git annex copy --from mywebdavserver
(cd graphs; pipenv install)
make all
This will download my graphs and text from mygitserver, download my databases from mywebdavserver, build my Python environment with pipenv, recreate all the graph PDFs, and compile the TeX. A process that can take a few hours, but is completely automated and reliable.
And that is truly the key part: the last thing you want while writing is to be distracted by technical issues such as "where did I put that database again?", "didn't that graph show something different the other day?", or "I forgot my database file at work and now I'm stuck at home during the pandemic and can't progress". Not that any of those would have ever happened to me, of course.
This semester is weird. Instead of holding my "Applied Programming" lecture as I normally would, live-coding in front of the students and narrating my foibles, this time it all had to be done online, thanks to the ongoing pandemic. Which meant I had to record videos. I had no idea how to record videos. This is a writeup of what I did, in case I have to do more of it. You can see the results of my efforts in my Qt for Python video tutorials and my file parsing with Python video tutorials. Through some strange coincidences, wired.com wrote an article about my use of OBS.
Working on Linux, I used the Open Broadcaster Software, or OBS for short, as my recording program. OBS can do much more than record screencasts, but I only use it for two things: Recording a portion of my screen, and switching between different portions.
To this end, I divide my screen into four quadrants. The top left is OBS, for monitoring my recording and mic levels. The bottom left is a text editor with my speaker notes. The top right and bottom right are my two recording scenes, usually a terminal or browser in the top right, and a text editor in the bottom right. The screenshot shows the Editor scene, which has a filter applied to its source to record only the bottom right quadrant. On a 4K screen, each quadrant is exactly full HD.
In OBS's settings, I set hotkeys to switch scenes: I use F1 and F2 to select the Browser and Editor scenes, and F6 for starting and stopping recordings. For more compatible video files, I enable "Automatically remux to mp4" in OBS' advanced settings.
The second ingredient to my recording setup is KDE, where I assign F3 and F4 to activate the browser or editor window (right click any window → More Actions → Assign Window Shortcut). And to make my recordings look clean, I disable window shadows for the duration of the recording.
With these shortcuts, I hit F1 and F3 to switch focus and scene to the browser, or F2 and F4 for the text editor. To make this work smoothly, I disabled these shortcuts within my terminal, browser, and text editor. But always be wary of accidentally getting those out of sync. I don't know how often I accidentally recorded the wrong part of the screen and had to redo a recording.
Anyway, with this setup, I can record screen casts with very minimal effort. The last ingredient however is editing; and I loathe video editing. I'd much rather record a few more takes than spend the same time in a video editor. Instead, I record short snippets of a few minutes each, and simply concatenate them with FFmpeg:
Create a file concatenate.txt that lists all the files to be concatenated:

file part-one.mp4
file part-two.mp4
file part-three.mp4

Then run ffmpeg -f concat -i concatenate.txt -c copy output.mp4 to concatenate them into a new file.
The great thing about this method is that it uses the copy codec, which does not re-encode the file. Concatenation therefore only takes a fraction of a second, and does not degrade quality.
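Writing concatenate.txt by hand gets old after a few dozen takes. A small script can generate it instead; this is a sketch under the assumption that the takes are named so that they sort in recording order (numbered names like part-01.mp4 do, spelled-out names like part-one.mp4 do not):

```python
from pathlib import Path

def concat_list(parts) -> str:
    # One "file <name>" line per take: the input format of ffmpeg's concat demuxer.
    return "".join(f"file {p}\n" for p in parts)

if __name__ == "__main__":
    takes = sorted(Path(".").glob("part-*.mp4"))
    Path("concatenate.txt").write_text(concat_list(takes))
```

After running it, the same ffmpeg command as above joins the takes losslessly.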
In summary, this setup works very well for me. It is simple and efficient, and does not require any video editing. The ability to switch scenes is cool and powerful. Still, recording videos is a lot of work. All in all, the 18 videos in the file parsing tutorials took 250 takes, according to my trash directory.
This video series was produced in the spring of 2020, during the COVID19-pandemic, when all lectures had to be held electronically, without physical attendance. It is a tutorial, in German, for parsing text files, and basic unit testing.
If the videos are too slow, feel free to speed them up by right-clicking, and adjusting play speed (Firefox only, as far as I know).
You may also download the videos and share them with your friends. Please do not upload them to social media or YouTube, but link to this website instead. If you want to modify them or create derivative works, please contact me.
The Qt for Python Video Tutorial by Bastian Bechtold is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Prerequisites: A basic understanding of Python, and a working installation of Python ≥3.6.
An overview over the topics discussed in the rest of the videos, and installation of pytest.
5 INI: Bugfixes and Integration Tests
6 INI: Test the Tests
Tests can be wrong, too.
7 CSV: First Prototype
8 CSV: Quotes
9 CSV: A Few More Features
10 JSON: Keyword Parser
11 JSON: Strings
13 JSON: Data Structures
14 Regular Expressions 1
How to parse parts of INI files with regular expressions.
15 Regular Expressions 2
How to parse parts of JSON files with regular expressions.
A summary of the topics discussed.
It bit me again: I got software envy. What if I could develop my pictures faster with a different RAW developer? What if they looked better than they do now? Questions like these keep me up at night.
Choice. There are so many RAW developers out there. And they all have rabid fan bases, and apparently unique rendering. How to choose?
Here are my house rules:
- Must run on Windows or Linux
- Must run acceptably on my Surface tablet
- Must run acceptably with files on a network share
- Must support my past and present cameras (Fuji X-E3, Ricoh GR, Pentax Q7, Nikon D7000)
In contrast to most other comparisons on the 'net1, I won't concern myself too much with sharpness and noise reduction and demosaicing. I have yet to see a photograph that was ruined by them, and most RAW developers seem to do a sufficient job at them.
I would prefer a file-based workflow with edits stored alongside the RAW files2, and I would prefer a perpetual license instead of a rental contract, but I'm willing to compromise on both if it's worth it3.
Obviously, I am a lot more proficient in my current tool, Darktable, than in any of the others. But for this test, I'm explicitly not doing anything particularly artistic; merely some highlight recovery, shadow recovery, and local white balance adjustments. By limiting myself to these edits, I hope to get an unbiased idea of the various RAW developers' implementations, without needing to ask the endless "what if" of what else I could have done. That said, I will leave all other adjustments at their default settings, to still get an impression of the general look of the programs.
- Darktable 3.0.2 (Free)
Free, works on Linux, very familiar to me. Allegedly, not particularly fast, with a confusing user interface.
- Adobe Lightroom Classic CC 7.5 ($10/month)
Probably the most widespread tool. Somehow unappealing to me. Currently only available at a subscription price, but there is a “free” version available through my university.
- Capture One 20.0.4 ($29/month or $350)
Enormously expensive, even with the educational discount. Allegedly the best default color rendition, particularly for Fuji cameras.
- RawTherapee 5.8 (Free)
Free, works on Linux. Allegedly extremely high image quality, but no local adjustments whatsoever.
- Luminar 4.2.0 ($90)
A rather new developer, with fancy AI features such as automatic sky replacements. Not exactly what I'm looking for, but we'll see about its “normal” RAW development chops.
- ON1 Photo RAW 2020.1 ($100)
Another highly regarded developer with rather traditional tools. This one works straight on files in the file system, though, which is highly attractive to me.
- ACDSee Photo Studio Ultimate 2020 13.0 ($9/month or $150)
Wasn't this a fancy image viewer a few years ago? Apparently it's now a RAW developer.
- Exposure X5 ($120)
Allegedly super fast, with great sharpening and noise removal.
- Photo Ninja 1.3.7a ($130)
Another new developer, born out of a dedicated noise reduction tool, and with an emphasis on "intelligent" tools.
- Silkypix Developer Studio Pro 10.0.3.0 ($200)
A Japanese RAW developer. I wasn't aware of it until a comment brought it up, thank you for that! And I mention its country of origin, as the English translation is a bit rough sometimes. Quite unusual in its feature set.
- Aftershot ($80)
Works on Linux, but doesn't support my Fuji camera because it hasn't been updated in ages.
- DXO Photolab ($150)
Does not support my Fuji or Q7. Not even if I convert the files to DNG, probably out of spite.
- Iridient Developer, Raw Power, Aperture
- Affinity Photo ($55)
Cheap, awesome, but not non-destructive. I'll probably buy this regardless in a sale, just because it's so affordable.
In terms of price, it is hard to argue with free. But considering the price of my other photographic equipment, most prices in this list are pretty adequate. Except for Capture One (and maybe Silkypix). $350 ($200) is a hard price to swallow.
I prepared 18 RAW images for this test, and made a plan of what exactly I would do with every one of them. Then I developed them all in each of the RAW editors. I will only show excerpts here, both to keep private the pictures I don't want to share, and to keep this already long post from exploding.
Also, I mostly show difficult files here that have some obvious challenge. This is because I am usually quite satisfied with my cameras' JPEGs in easy cases, and don't bother with RAW development.
📂 DSCF3861.RAF (23.0 MB)
A shot of the sunset in Greece, with both the sun and its reflection in the water blowing out. I want to lower the highlights, and boost the shadows a bit. The transition from sky to sun should be smooth without lightness reversals or rings. The transition from water to reflection should have no color cast. The hills in the background should not show any halos.
Capture One, Lightroom, and Silkypix show the smallest sun without artifacts. RawTherapee, Darktable, and ACDSee produce a smooth transition, but a bigger sun. In Luminar, Exposure, Photo Ninja, and ON1 the sun is smaller, but has a distinct ring around it that looks wrong. In RawTherapee the sun is big and slightly ringed. Actually, Capture One and Silkypix also have a ring, but so faint that it wouldn't matter to me.
The reflections in the water are artifact-free in Darktable, Lightroom, Exposure, and RawTherapee. The other developers show magenta artifacts to varying degrees. In terms of detail, Capture One, Lightroom, and Exposure recover a bit more wave details in the blown-out reflections.
The hills in the background show distracting halos in Capture One, Lightroom, and Exposure.
In the following list, the RAW developer name links to the sidecar file, if there is one:
- Camera: Dynamic Range 400
- ACDSee: Highlights 100, Fill Light 25
- Darktable: My Defaults, Filmic RGB to shift dynamic range to include highlights, Highlight Reconstruction LCh and lower until magenta halo disappears
- Capture One: Highlight and White -100, Shadow +20
- Exposure: Highlights -100, Whites -50, Shadows +50 (less Whites desaturates)
- Lightroom: Highlight -100, Shadow +50
- Luminar: Highlights -100, Whites -50, Shadows +25
- ON1: Highlights -50, Shadows +25 (More Highlights produce lightness reversals)
- Photo Ninja: Illumination 27, Exposure offset -1.62, Highlights -0.50 (all chosen automatically)
- RawTherapee: Highlight Compression 250, Highlights 100, Shadows 25
- Silkypix: Highlight Dynamic Range +3, Hue 100
While a bit of a pathological image, there are clear differences in how these RAW developers handle it. Really, only Darktable and Lightroom produce a truly pleasing image for me, with second place to Capture One, ON1, and Silkypix. Surprisingly, the camera's own JPEG is amongst the best renditions as well.
Silkypix deserves a special mention, though, as its highlight control tool has a fantastic Hue slider, which trades off higher saturation against more accurate hue. Which is exactly the tradeoff that underlies all the rings and magenta artifacts in all the other programs.
On a side note, I have never quite understood why nobody seems to complain about the obvious haloing in Lightroom. I see it in almost every high dynamic range landscape shot on the internet, and I do not enjoy the look. But apparently I'm alone with this.
Dynamic Range Reduction
📂 DSCF6535.RAF (21.6 MB)
A shot of a very contrasty forest scene at Mt. Washington, with highlights slightly blowing out, and shadows close to drowning. I want to lower highlights and raise shadows, without it looking crushed or unrealistic.
The most important thing in this picture is to maintain a realistic progression of tones, even though the dynamic range is crushed beyond reason. To my eyes, Lightroom really stands out here, with a three-dimensional look that no other developer can match. ACDSee, Darktable, ON1, Photo Ninja, and RawTherapee come second, with a believable progression. Exposure, Luminar, and Capture One seemingly applied some kind of local contrast compression that destroys the balance between highlights and shadows and flattens the image.
All developers show magenta artifacts on the bright forest floor to some degree. They are particularly unpleasant in Capture One, Darktable, ACDSee, and Exposure.
- ACDSee: Highlights 100, Fill Light 25
- Capture One: Highlights -50, Shadows +25, Black +50
- Darktable: My Defaults, Filmic RGB to expand dynamic range
- Exposure: Highlights -100, Shadows +50, Blacks +25 (Blacks and Shadows interact weirdly)
- Lightroom: Highlights -75, Shadows +50, Blacks +50
- Luminar: Highlights -100, Whites -50, Shadows +50, Blacks +50
- ON1: Highlights -75, Shadows +50 (disable Recover Highlight Hue to prevent color fringes)
- Photo Ninja: Illumination 25, Exposure offset -1.47, Highlights -0.50 (all chosen automatically)
- RawTherapee: Highlights 50, Shadows 25, Dynamic Range Compression 50
- Silkypix: HDR 50, Exposure -2/3
In terms of tools, I like the explicit dynamic range slider in Darktable, RawTherapee, and Silkypix better than the shadows and highlights sliders in the other tools. But if calibrated well, both methods can result in a pleasing image.
To my eyes, Lightroom, RawTherapee, and Photo Ninja take the crown in this shot. But I expect that the tone progression could be improved in the other tools as well if I strayed beyond the default tools.
Local White Balance
📂 DSCF8214.RAF (22.1 MB)
A shot of myself, underexposed, in front of Space Shuttle Enterprise. I want to brighten myself and adjust the white balance on my body so it matches the rest of the room. (I have better examples than this, but they showed people other than me, which I don't share.)
Luminar, Photo Ninja, and RawTherapee fail this test, as they lack local adjustment tools. Exposure for some reason shows terrible color bleeding, where my arm's color leaks out onto the Space Shuttle in the background. Truly noteworthy is ACDSee with its intelligent brush, much like the intelligent selection tools in pixel editors. Darktable also stands out for being able to combine a drawn mask with a luminosity and hue mask.
Capture One strangely did something terrible to my skin, with weird gradients where there should be none. The Shuttle in the background lost details in the highlights in ACDSee and Exposure. Silkypix by default insisted on crazy noise reduction that turned the picture into a watercolor. Thankfully that is easy to turn down.
- ACDSee: Fill Light 50, Develop Brush with WB -50 (no picker)
- Capture One: Shadows +50, Black +75, Drawn Layer with White Balance picker on Backpack
- Darktable: My Defaults, Filmic RGB to shift dynamic range to include shadows, Luminosity and Painted mask with Color Balance picker
- Exposure: Shadows +100, Blacks +25, Layer with Color Temperature lowered (no picker)
- Lightroom: Shadows +100, Local Adjustment with WB -14 (no picker)
- Luminar: Shadows +100, No local adjustments available
- ON1: Shadows +50, Local Adjustment with WB -18 and tint +4 (no picker)
- Photo Ninja: Illumination 25, Exposure offset -1.61, Highlights -0.50 (all chosen automatically), Shadows +0.50, No local adjustments available
- RawTherapee: Shadows 50, No local adjustments
- Silkypix: Dodge HDR 50, Noise Reduction Smoothness 25, Partial Correction with Hue 130, Saturation 0.37
Local color adjustments are my main use for localized edits. Having a color picker for that is very useful, but it is only available in Capture One and Darktable. In the other tools, I had to either eyeball it, or manually adjust tones until the RGB values read grey.
Thus, it is Lightroom, ON1, and Darktable that pass this test.
Out of Gamut Colors
📂 DSCF0034.RAF (15.7 MB)
A shot of the Congress building in Leipzig, with a bright purple light that blows out the red color channel, which is wildly out of gamut of any reasonable color space. I want to see how the RAW developers deal with out-of-gamut colors. I raise Exposure by 1 EV, then push shadows until the clouds become faintly visible.
ACDSee, ON1, Photo Ninja, and RawTherapee fail this task, with obvious magenta or blue artifacts on the illuminated water jet. The other developers use various methods of inpainting, which look particularly convincing in Capture One, Lightroom, Silkypix, and Luminar. Exposure and Darktable look less realistic, but acceptable in a pinch. Again, Silkypix' hue slider is very handy.
- ACDSee: Fill Light 50, Exposure +1
- Capture One: Black +75, Exposure +1
- Darktable: My Defaults, Filmic RGB
- Exposure: Blacks +50, Exposure +1
- Lightroom: Shadows +100, Exposure +1
- Luminar: Shadows +25, Exposure +1
- ON1: Shadows +50, Exposure +1
- Photo Ninja: Illumination 9, Highlights -0.50 (all chosen automatically), Exposure offset 0.0
- RawTherapee: Shadows 50, Exposure +1 (Highlight Reconstruction: Blend)
- Silkypix: Dodge HDR 100, Noise reduction Smoothness 25, Highlight Hue 100
I know the Darktable devs are actively working on improving this. In truth, Darktable would have failed this task just a few months ago. Issues like these also often happen with deep-blue flowers, which turn purple in the failing developers but maintain hue in the better ones.
Color Rendition and Detail
📂 DSCF9670.RAF (25.7 MB)
A shot of a field and forest. I want to see how the RAW developers render these details and colors. Zero out noise reduction, use default sharpening, JPEG 100%.
In terms of detail, Lightroom, Capture One, Exposure, Silkypix, and Darktable seem to retain the most fine details, particularly in the little trees and the forest floor. ACDSee, Luminar, RawTherapee, Photo Ninja, and ON1 look comparatively soft or lose detail in the shadows. Silkypix, however, has a strange, painterly look to the grass details that I wasn't able to get rid of.
In terms of overall color, Exposure, Photo Ninja, RawTherapee, and Capture One clearly tend towards the most saturated look, with a clear distinction between a green and a yellow part in the field. I suspect that these try to approximate the punchy look of Fuji's colors. These color transitions are much more subtle in ACDSee, Darktable, Exposure, Lightroom, Luminar, Silkypix, and ON1. The sky is distinctly blue in Darktable, Photo Ninja, and Luminar, more cyan in ACDSee, Lightroom, and Silkypix, and weirdly purple in Capture One and RawTherapee.
- ACDSee: Amount 25
- Capture One: Amount 140
- Darktable: My Defaults, Sharpen 2
- Exposure: Amount 50
- Lightroom: Amount 40
- Luminar: Details Enhancer, Sharpen 50
- ON1: Sharpening 50
- Photo Ninja: Sharpening strength 50
- RawTherapee: Sharpening 20
- Silkypix: Zero Noise reduction, Outline emphasis 30, Ringing artifact control 15 (defaults)
I would not put too much emphasis on the colors, saturation, and contrast here, as these are easily and typically adjusted manually. I am a bit surprised about the differences in detail retention, however.
I went into this expecting Lightroom and Capture One to be vastly faster in use than Darktable, particularly on my Surface tablet. I also expected better out-of-the-box image beauty, large differences in user interfaces, and for most tools to have very few graphical artifacts. Surprisingly, however, almost every tool showed obvious artifacts of one kind or another, and few tools were actually faster than Darktable. As for the tools themselves, most look very similar, yet function vastly differently.
Simple saturation and contrast adjustments, a bit of local contrast, and rarely some dodging and burning or local color adjustments are apparently all I do most of the time, and this generally works well and similarly in all of these tools. However, that is not to say that the individual sliders do remotely the same thing in different tools. So confused was I by this that I measured the response curves of several tools, and they indeed did entirely different things. In one tool, Highlights pushes the upper half of the tone curve. In another, even the darkest shadows are affected a little bit. In yet another, Highlights burns out to the upper quarter of the tone curve if pushed all the way. Sometimes the white point stays white, sometimes it moves. Sometimes it only moves if the slider is pushed past half-way. And that's not even taking into account their different blending behavior and value scales; these sliders may look the same, but there hides complexity beyond measure.
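The measured differences can be pictured with a few toy tone curves. These functions are invented for illustration and are not the actual measured responses of any product; each one maps an input tone x in [0, 1] to an output tone at "Highlights +100":

```python
# Three hypothetical "Highlights" slider behaviors (invented, for
# illustration only).

def highlights_a(x):
    """Only the upper half of the tone curve is lifted; white point fixed."""
    return x if x < 0.5 else x + 0.3 * (x - 0.5) * (1.0 - x) * 4

def highlights_b(x):
    """The whole curve is lifted a little, even deep shadows."""
    return x + 0.3 * x * (1.0 - x)

def highlights_c(x):
    """The upper quarter burns out: everything above 0.75 becomes white."""
    return min(x / 0.75, 1.0)

for x in (0.1, 0.5, 0.8, 1.0):
    print(x, round(highlights_a(x), 3), round(highlights_b(x), 3),
          round(highlights_c(x), 3))
```

Even in this tiny sketch, a deep shadow at 0.1 is untouched by the first curve but visibly lifted by the other two, and only the third moves anything to pure white before the input reaches 1.0.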
And I did find a surprising number of graphical artifacts in these programs, particularly the color bleeding in Exposure and the highlight recovery problems in Luminar and ON1, as well as a number of smaller issues. The one program that truly stands out here is Lightroom, which is more robust to artifacts than any other tool in this list, seemingly due to some significant image-adaptive intelligence under the hood.
I have strong mixed feelings about Capture One. On the one hand, it has one of the most attractive user interfaces of all these tools. On the other, its color renditions are very opinionated, and not my favorite. I love how it reads and applies Fuji color profiles as shot, but then it doesn't apply the Fuji shadow/highlight adjustments and crushes the shadows unnecessarily. And while its color tools sure look nice, their functionality is not that much different from the other developers', and they are spread out needlessly across several tabs. And that price.
Playing around with Luminar was deeply impressive. There are a ton of magic and automatic features in there. But as cool as AI sky replacement is, it simply has no place in my toolbelt, and the lack of local adjustments and general speed of the UI are a big minus.
I like ON1. It's relatively affordable, works with simple JSON sidecar files instead of a library, has reasonable tools, and impressive effects. It can even mimic the look of the embedded JPEG and supports Fuji film simulations. Not quite on the same graphical level as Capture One or Lightroom, but very close. And it even runs acceptably fast on my Surface tablet.
Exposure is another program I could like a lot, but the color bleeding and graphical artifacts are just not up to snuff. In one example, it entirely failed to guess colors from an underexposed bar scene (not shown). In another, it bled colors out onto adjacent objects for no reason. And white balance sometimes changed lightness as well as colors. I read that this might be a graphics driver issue, but regardless, it shook my confidence in Exposure.
ACDSee was a real surprise to me. I seem to remember it as a fast image viewer, but apparently it is (now?) an impressive RAW editor as well. There is a lot to like about this tool. The magic brush for local adjustments is a particularly noteworthy touch, as well as very robust healing tools. Alas, I found the UI rather slow, and it failed on exporting a few files. I'll try again in a year or so.
I was only made aware of Silkypix through a comment after the post had already been published (thank you!). And what I read on the website made me quite excited! Its tools stray somewhat from the Lightroom-inspired norm, which is a very good thing in some cases, such as the hue-priority highlight recovery. It also works on plain files, and seems to have outstanding Fuji film simulations. Alas, it was very slow to use, and not suitable for my Surface tablet.
I had tried RawTherapee a few times in the past, and was always frustrated by its lack of local adjustments, and the need to view things at 100% to see some adjustments. On the other hand, it can match the embedded JPEG tones, and has quite a number of impressive algorithms. Still, it does not appeal to me. But it's still an amazing achievement and a pretty inspiring community as well.
Photo Ninja is a curious program. Certainly not because of its ease of use, or speed of operation, or image quality. But because it did most things almost correctly automatically. That's not what I am looking for, but it is truly impressive.
And Lightroom. As I said, I somehow do not like Lightroom. Maybe because I like to be “different”, or because I associate Adobe too much with bloated software. But I have to say, Lightroom surprised me. While its tools are sometimes in weird locations, it is highly streamlined for a very fast workflow, and it deserves my highest praise for being outstandingly robust against artifacts. But I still don't like it.
Which leaves Darktable. This is a tool I am deeply familiar with, and have used for several years. Yet until this day, I never realized just how strange its tools are compared to the other programs. How weird Filmic RGB must feel if you are used to shadows and highlights sliders, and how alien the graph-based color zones and tone equalizer and contrast equalizer must seem.
Yet, in direct comparison, I find Darktable's tools equally efficient at solving problems, even if the solutions are sometimes a bit different from its Lightroom-inspired peers. One tool in particular I want to emphasize: Color Zones. At first glance, it looks like your standard HSL tool that allows brightness, saturation, and hue changes by color (albeit as a graph instead of sliders). But then you discover the “select-by” switch, and realize that you can modify colors by lightness and saturation, as well as hue. I use this frequently to saturate shadows, which is a great effect I haven't seen in any other program.
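The select-by-lightness idea can be sketched in a few lines. This is my own rough approximation of the concept, not Darktable's implementation: instead of selecting pixels by hue, the adjustment is weighted by how dark each pixel is, so only shadows gain saturation.

```python
# Rough sketch of "saturate shadows" via select-by-lightness
# (invented code, not Darktable's color zones implementation).
import colorsys

def saturate_shadows(rgb, strength=0.5):
    """Boost saturation in proportion to darkness (HLS lightness)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    weight = 1.0 - l                       # 1 in deep shadows, 0 in white
    s = min(s + strength * weight * (1.0 - s), 1.0)
    return colorsys.hls_to_rgb(h, l, s)

dark = saturate_shadows((0.2, 0.15, 0.1))    # shadow pixel: more saturated
light = saturate_shadows((0.9, 0.85, 0.8))   # highlight: barely changed
```

Hue and lightness stay put; only saturation moves, and only where the image is dark.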
No doubt other programs have cool features, too, but Darktable (and RawTherapee) seem uniquely open about their inner workings. And this brings joy to me, on a level the closed, artistic programs can't match. I like graphs, and maths. I'm weird like that.
But what really prompted this whole comparison blog post was my frustration with Darktable's speed. Particularly on my 4K screen, it is not the fastest program out there. And the AMD/OpenCL situation on Linux is still a travesty, which doesn't help. But I learned a thing during this experiment: You must work bottom-up through the rendering chain if you want Darktable to be fast[4]. Which, in my case, usually means working through Lens Correction → Crop and Rotate → Exposure → Tone Equalizer → Contrast Equalizer → Color Balance → Filmic RGB → Color Zones. As long as I (mostly) edit things in this order, Darktable is fast enough, even on my Surface tablet.
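The speed-up from bottom-up editing can be pictured with a toy cache model. This is an illustration of the general idea, not Darktable's actual pixelpipe code: a render pipeline caches each module's output, so changing module i only forces modules i and later to recompute.

```python
# Toy model of a caching render pipeline (not Darktable's actual code):
# editing a late module is cheap, editing an early one recomputes everything.

class Pipeline:
    def __init__(self, modules):
        self.modules = modules      # ordered list of module names
        self.cache = {}             # module name -> cached output
        self.recomputed = []        # bookkeeping for the demo

    def edit(self, module):
        """Changing a module invalidates it and everything after it."""
        idx = self.modules.index(module)
        for m in self.modules[idx:]:
            self.cache.pop(m, None)
        self.render()

    def render(self):
        self.recomputed = []
        for m in self.modules:
            if m not in self.cache:
                self.cache[m] = f"output-of-{m}"   # stand-in for real work
                self.recomputed.append(m)

chain = Pipeline(["lens correction", "exposure", "filmic rgb", "color zones"])
chain.render()                # first render computes all four modules
chain.edit("color zones")     # late edit: only one module recomputes
chain.edit("lens correction") # early edit: the whole chain recomputes
print(chain.recomputed)
```

Editing in pipeline order means each change invalidates as little downstream work as possible, which is consistent with what I observed.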
Lastly, I have to say a few words about file management. Most programs here work on some kind of local library that stores all edits. The downside of this is that these libraries are hard to sync between computers, hard to back up, and must be updated manually when file locations change. Notable exceptions here are Lightroom, Darktable, Exposure[5], Silkypix[5], RawTherapee, and ON1, which keep their edits alongside the RAW files in little text “sidecar” files[6]. Thus even if their library goes out of sync or is lost, at least the edits are still there. To be honest, this is quite an important factor for me.
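The sidecar idea itself is almost trivially simple, which is part of its appeal. Here is a minimal sketch (the file naming and edit keys are invented for illustration; each program uses its own format, e.g. XMP for Darktable, JSON for ON1):

```python
# Minimal sketch of edit sidecars: settings live in a small text file next
# to the RAW, so they survive even if the program's database is lost.
import json
from pathlib import Path

def save_edits(raw_path, edits):
    """Write edits to e.g. DSCF9670.RAF.json alongside the RAW file."""
    sidecar = Path(str(raw_path) + ".json")
    sidecar.write_text(json.dumps(edits, indent=2))
    return sidecar

def load_edits(raw_path):
    sidecar = Path(str(raw_path) + ".json")
    if sidecar.exists():
        return json.loads(sidecar.read_text())
    return {}                        # no sidecar: fall back to defaults

save_edits("DSCF9670.RAF", {"exposure": 1.0, "shadows": 50})
print(load_edits("DSCF9670.RAF"))
```

Because the sidecar is plain text, it can be diffed, versioned, backed up, and synced with the same tools as any other file.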
As for library management, my requirements are small: I want to filter by date, rating, and maybe camera or lens. These needs are met by all programs except possibly Luminar and Photo Ninja. I do most of my file management in external programs on camera import, and with the exported JPEGs, so this area of the RAW developers is not very important to me.
However, this comparison also highlighted just how useful sidecar files are to photo management. I might choose different RAW developers over the years, and my photo management solutions might change over time as well. But as long as all edits are stored in simple text files next to the RAW file, I can rest safe in the knowledge that my edits will never be lost. This is a serious downside to ACDSee, Capture One, Luminar, and Photo Ninja, which keep edits secretly[7] in their opaque databases.
I also timed my work with every one of these programs. Quite surprisingly, I couldn't find any significant differences between programs. Darktable's workflows, for example, are sometimes entirely different from other tools'; but if you know what you're doing, the path from identifying a problem to fixing it is still similarly straightforward and fast. And there is no less experimentation until I arrive at a look I like.
Thus, I am left with Darktable, Lightroom, and ON1. And theoretically Capture One, but that price is just too high for me. If Capture One were $100 instead of $350, I would probably switch to it. Even educational pricing is only available for rentals. I'll have to decline that. And despite all my praise for Lightroom, I still don't like it.
I'll probably buy ON1 at its current, discounted price ($50), and see how I like it in actual daily use[8]. But at the same time, I'll also stick with Darktable on my Linux machine, at least for more complicated edits. I now know that Darktable can dance with the best of them, which is mighty impressive for a piece of free software.
With all that said and done, I have learned a lot about RAW development during this experiment. Regardless of which tool I end up sticking with, this has been a fascinating comparison. We'll see how long I can resist the urge to compare this time.
Addendum: Customer Support
So far, I have been in contact with the Customer Support people of ON1 and Capture One. And I must say, ON1 was incredibly pleasant and quick and helpful, while Capture One seemed almost reluctant to help. This is actually a big plus for ON1 and kind of a big turnoff for Capture One.
There are surprisingly few non-superficial comparisons; most are just feature matrix comparisons. The best ones I could find are a fantastic, in-depth comparison on nomadlens, a Fuji-centric discussion of detail extraction on Fuji vs. Fuji, one by Andy Bell that might be sponsored/biased by Luminar, a pretty good one on PetaPixel, an older one on DPReview, and a Nikon-centered one on WY Pictures.