A while ago, I bought an RNI film pack for Capture One. That's a set of presets that makes your digital photos look similar to analog film scans. However, my other image editor, Darktable, has since released a new version, so I'm now back to using Darktable instead of Capture One, and thus without access to those presets.
Here's how to export Capture One presets to LUTs, to make them accessible to other programs.
The fun thing is, LUTs are just PNG files that contain a table of colors. You know, a "Look Up Table", of sorts. So, in order to convert a preset to a LUT, all we need to do is apply the preset to a pristine "identity" LUT, and export it as a new PNG.
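If you are curious what such an identity LUT actually contains, or want to generate one yourself instead of downloading it, here is a minimal Python sketch (my own illustration, not part of the workflow below). It assumes numpy and Pillow, and writes a level-8 Hald CLUT PNG, which as far as I know is the format used by Stuart Sowerby's LUTs and Darktable's lut 3D module; the file name is arbitrary.

```python
# Generate an identity Hald CLUT: every possible (r, g, b) triple, laid out so
# that red varies fastest across pixels, then green, then blue (the common convention).
import numpy as np
from PIL import Image

level = 8                 # level 8 -> 512x512 image, 64 steps per color channel
steps = level ** 2
side = level ** 3

b, g, r = np.meshgrid(np.arange(steps), np.arange(steps), np.arange(steps),
                      indexing="ij")
table = np.stack([r, g, b], axis=-1).reshape(side, side, 3)
table = (table * 255.0 / (steps - 1)).round().astype(np.uint8)

Image.fromarray(table).save("identity.png")   # hypothetical file name
```

Editing this PNG with a preset and re-exporting it bakes the preset's color transformation into the table.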
- Get yourself an identity LUT.
For example, the one included in Stuart Sowerby's Fuji Film Simulation Profiles. Choose the sRGB PNG LUTs intended for RawTherapee and Affinity Photo.
- Open the LUT PNG in Capture One.
- Apply the preset you want.
Optionally set saturation to -15 (see below).
- Export as PNG.
Make sure the color space is sRGB, just like the original file.
As easy as that.
A few more adjustments: many Capture One presets expect to be working on raw data, which is less saturated than Darktable's default. So I export with -15 saturation. Also, many presets include spatial adjustments such as Highlights or Shadows that are bound to not play well with the LUT PNG. To disable them, delete the offending lines from the *.costyle files¹, or compensate the values with opposite slider movements.
When applying the LUTs in Darktable's lut 3D module, there are a few more things you can do to fit them into your workflow. For example, you can lower the opacity of the lut 3D module to vary their effect. Or you can choose chromaticity as blend mode to only apply their color transformation, but keep Darktable's tonal rendering. In normal blend mode, some LUTs prefer a flat rendering as their input, so lower the contrast in filmic rgb to zero and use the auto-pickers to set the image black and white point.
¹ don't do this for RNI LUTs, it's forbidden by the User License Agreement that is hidden quite well in dark-grey-on-black at the very bottom of their website
In recent years, I have almost exclusively read books in English. Science fiction, in particular, seems to happen exclusively in English (and perhaps Chinese). So much so that I had come to associate German only with bad translations and personal communications, whereas English was the language of science, engineering, and fiction. In 2021, however, to my surprise, I stumbled upon Tom Hillenbrand's Hologrammatica, a science fiction thriller in German. It was a peculiar experience, reading the familiar tropes of the genre in a different language. Somehow it made the story feel more immediate and approachable to me. Strange, what effect language can have.
In the book, humanity has decided to hide reality behind holograms. Clothes and hairstyles can be altered on the fly, street lighting is replaced with projected advertisements. The twist is that in this semi-dystopian world, everyone knows that the holograms hide the truth, which is a collapsed society and dilapidated infrastructure. It is a smart, very current backdrop for a detective story with mind uploads, space elevators, and all the modern trappings of science fiction. A thoroughly enjoyable read!
Sci-Fi honorable mentions: Andy Weir's pop sci-fi Project Hail Mary and Martha Wells's first novel-length Murderbot entry, Fugitive Telemetry.
Not Much of an Engineer
I have always been fascinated with aviation and the technology of warfare. But most of the non-fiction I have read about these topics focuses on the stories of pilots and companies and soldiers, not engineers. In these stories, some of the most important plot points came from advances in technology, yet they didn't describe how those changes came about. In 2021, I finally found a good history of aviation technology: Sir Stanley Hooker's Not Much of an Engineer describes the author's work as the principal engineer at Bristol and Rolls-Royce, from the Merlin-era Second World War engines to modern turbofans. I guess you need to be a bit of an aviation/engineering geek to enjoy this, but to me this book was an important missing link that I had always looked for.
Non-fiction honorable mentions: David Goodsell's The Machinery of Life wonderfully illustrates the inner workings of cells.
How do computers work? This question is surprisingly hard to answer. For me, the answer came in three books: Charles Petzold's Code: The Hidden Language of Computer Hardware and Software explained to me how processors work. The Arpaci-Dusseaus' Operating Systems: Three Easy Pieces describes the infrastructure of operating systems that makes processors, storage, and network available to programs. And now, finally, Robert Nystrom's Crafting Interpreters filled in the last step: how a programming language is built.
The book describes two implementations of a simple programming language. The first one is a high-level introductory scripting language implemented in Java, the second a high-performance reimplementation in C. Fascinatingly, the entire source code for these implementations is included in the book's text, such that you can entirely follow along (or program along) and have a working programming language at all times. A truly eye-opening glimpse into the innards of the tools we are using every day. The book requires familiarity with Java and C to enjoy.
When I recently installed Windows 11 on my desktop, my photo editor Darktable suddenly got much slower than it used to be. When I looked into its preferences, I noticed OpenCL was no longer available.
As it turns out, some versions of the AMD graphics driver apparently no longer ship with OpenCL support on Windows. However, they do ship with the necessary libraries, it's just that these libraries are not registered any longer.
To register them, open the Registry Editor (aka regedit.exe) and navigate to the key HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenCL\Vendors (if the key does not exist, create it). There, create a new DWORD with a value of 0, and rename it to the full path of your amdocl64.dll. Mine is C:\Windows\System32\DriverStore\FileRepository\u0344717.inf_amd64_d38cec78c83eee99\B343886\amdocl64.dll; search C:\Windows\System32\ for amdocl64.dll to find the correct path on your computer.
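If you would rather script the change than click through regedit, the same registry entry can be created with Python's built-in winreg module. This is just a sketch of the manual steps above; it needs to run from an administrator prompt, and the DLL path is the one from my machine, so substitute your own.

```python
# Register amdocl64.dll as an OpenCL vendor (run with administrator rights).
import winreg

# Replace this with the amdocl64.dll path found on your computer.
dll_path = (r"C:\Windows\System32\DriverStore\FileRepository"
            r"\u0344717.inf_amd64_d38cec78c83eee99\B343886\amdocl64.dll")

# Create (or open) HKLM\SOFTWARE\Khronos\OpenCL\Vendors ...
key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Khronos\OpenCL\Vendors",
                         0, winreg.KEY_SET_VALUE)
# ... and add a DWORD value named after the DLL path, with data 0.
winreg.SetValueEx(key, dll_path, 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)
```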
With that, OpenCL was once again recognized by Darktable and the rest of my photo editing programs.
When I bought into the Fuji system, I selected the XF 18‑135 f/3.5‑5.6 R LM OIS WR as my main zoom lens. This is a lens with a very wide focal range, commonly called a “travel zoom” because you could travel the world with just this one lens. And indeed I happily did. In 2019, Fujifilm released a second travel zoom lens, the XF 16‑80 f/4 R OIS WR. Ever since, I have wondered how this new 16‑80 compares to my 18‑135. But because these lenses are so similar, few people on the internet have ever compared them side by side. This blog post will change that.
The 18‑135 has served me very well indeed. Of course it is not the world's brightest lens, nor sharpest, nor smallest. But so long as I can get the shot, these limitations don't bother me. Except for two things: The transition between in-focus and out-of-focus can be a bit rough, and my lens extends on its own when carried on a sling. My hope is that the 16‑80 has a nicer rendering, especially for people photos with out-of-focus backgrounds, and a stiffer zoom ring that doesn't creep.
Physically, the 16‑80 is slightly smaller (89 mm vs 98 mm) and lighter (440 g vs 490 g) than the 18‑135, but the difference is negligible on camera. If anything, the 16‑80 actually feels a bit bigger to me due to its larger front element. Some people claim that the 16‑80 feels better built than the 18‑135. To that end, the rings on my 16‑80 turn more smoothly than my 18‑135's, but then that is comparing a new-ish lens to a well-used one. Autofocus speeds are also reportedly different, but they feel similar to me.
The 16‑80 has a numbered aperture ring that allows for adjustments while the camera is turned off. On the other hand, the 18‑135 can switch to and from auto-aperture without losing its preset aperture, which is useful as well. The ideal lens for me would have both the auto-aperture switch and the numbered aperture ring. Oh well.
The 16‑80 has a 72 mm filter thread, while the 18‑135 uses 67 mm filters. This is somewhat annoying for me, as my filters are all 67 mm, which also fit on my 70‑300. I'd imagine users of the 72 mm 10‑24 see this differently. Anyway, using a thin polarizer with a 72‑to‑67 step-down ring works without issue on the 16‑80. My inch-thick macro filter, however, does vignette heavily below 23 mm.
In terms of rendering, I find the 16‑80 to have a gentler transition from in-focus to out-of-focus, and indeed render out-of-focus backgrounds more smoothly than the 18‑135. At the long end, the 16‑80 is actually a useful portrait lens, which is an unexpected but welcome feature for my photography.
At the wide end, the 16‑80 exhibits some significant distortion. While the camera or post processing programs can easily correct this, it leaves the image corners stretched, and renders the 16‑80's nice round bokeh balls as ugly ovals. Thus large-aperture shots at the wide end can be somewhat problematic.
Next, let's compare the resolution of the two lenses. To do that, I printed a test chart and took some test images. All images were taken under identical illumination, on a tripod, with the chart covering half the sensor height/width. The following graphic shows a resolution scale near the center of the frame and near the left edge of the frame. And just for fun, I've thrown in a similar analysis from two prime lenses as well. The resolution scales are in units of 200 line widths per picture height (LW/PH). On my 6000×4000-pixel sensor, the theoretical maximum would therefore be a resolution number of ⁴⁰⁰⁰∕₂₀₀ = 20. All test images are straight crops from the original images, reproduced at their original resolution.
Center images were focused in the center, and side images were focused on the side. A two-second timer was used to eliminate camera shake. Due to the geometry of my room and the size of my printer, all pictures were taken near the close-focusing distance of the lenses. A red line indicates the limit of resolution of these lines, as judged by my eyes. The corresponding resolution numbers are generally consistent with the ones published by opticallimits and lenstip¹, albeit I judged them a bit more conservatively.
The results of this test are surprising to me: the 16‑80 is actually my sharpest lens. It not only bests the 18‑135 at all settings, but also the 35 f/1.4. Only the 60 f/2.4 can reach similar resolution at the same f‑numbers. Generally, the 16‑80 is perfectly sharp right from f/4, while the 18‑135 has to be stopped down to f/8 to reach a similar resolution, especially on the image edges. Only at the long end does the 16‑80 benefit from stopping down.
That said, take these resolution measurements with a grain of salt. For instance, a “great” result of 15 (3000 LW/PH) translates to a blur radius of 1.3 pixels, while a “mediocre” result of 10 (2000 LW/PH) is instead 2 pixels. This is scientifically significant, but not at all relevant to (my) photography.
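For those who want to check the arithmetic, here is a small Python sketch of the conversions involved; the function names are my own, and the numbers are just the examples from the paragraphs above.

```python
# Convert between chart readings, line widths per picture height, and pixel blur.
SENSOR_HEIGHT_PX = 4000            # picture height of my 6000x4000 sensor

def chart_number_to_lwph(n):
    """Resolution-scale reading (in units of 200 LW/PH) to line widths per picture height."""
    return n * 200

def lwph_to_blur_px(lwph):
    """Width of a single line in pixels, i.e. the blur implied by an LW/PH figure."""
    return SENSOR_HEIGHT_PX / lwph

print(chart_number_to_lwph(20))    # 4000 LW/PH, the theoretical maximum
print(lwph_to_blur_px(3000))       # ~1.3 px for a "great" reading of 15
print(lwph_to_blur_px(2000))       # 2.0 px for a "mediocre" reading of 10
```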
Additionally, the fact that the 60 f/2.4 macro lens scores so highly but the 35 f/1.4 does not is an indication that these measurements might be biased by being taken near the close focusing distance of the lenses. Thus the next set of images compares these lenses at more natural distances.
At this farther distance, and with a more natural subject, the differences are no longer as easily visible. What differences there are this time favor the 18‑135 instead of the 16‑80. Interestingly, I didn't see any significant differences between these pictures when looking at them “merely” side by side in Capture One. Only when I actually assembled these graphics and looked at them at 200% did the differences become apparent.
Nevertheless, it remains curious that there would be such a difference between the two lenses. Then someone mentioned that the 16‑80 might suffer from shutter shock, where the camera's mechanical shutter jolts the camera enough to upset the image stabilization system and induce a slight bit of motion blur. An issue like this might explain the 16‑80's slightly reduced resolution in my test shots. So I created another series of images, this time both with the mechanical shutter and with the electronic shutter. In electronic shutter mode, nothing moves in the camera, so there can be no shutter shock.
From this comparison, I can see no evidence of shutter shock. It might have been an issue on earlier firmware versions of the 16‑80, but my camera (an X-T2) and lens (at firmware 1.05) do not exhibit shutter shock. Furthermore, this series of pictures shows the 16‑80 and 18‑135 essentially matched in image resolution.
All of that said, I must add that all of these comparisons used extremely tight crops of high-contrast geometrical features. Most of the differences here are all but invisible in actual photographs. From these resolution experiments, I see no reason to prefer one lens over the other. Both of them are perfectly sharp in everyday use.
So, how to choose between the Fujifilm XF 16‑80 f/4 R OIS WR and the XF 18‑135 f/3.5‑5.6 R LM OIS WR? My 16‑80 has tighter aperture and zoom rings, does not creep, and has a smoother rendering of out-of-focus backgrounds. On the other hand, I do find the increased telephoto reach of the 18‑135 very useful, and it doesn't suffer from wide-angle distortion as much as the 16‑80.
In terms of resolution, I did not find fault with either lens. Both are very sharp across their entire focal range and the entire frame. That said, the 18‑135 does benefit from stopping down for optimum resolution, while the 16‑80 is sharp right from f/4, and the 16‑80 might be sharper for closer subjects.
My tentative conclusion from these experiments is therefore that I would slightly prefer the 16‑80 for people pictures, where the close-focus sharpness and nicer background rendering are advantageous, and the larger aperture at 80 mm might make a difference. And I would prefer the 18‑135 for landscapes, where stopping down is usually easy and the longer focal length comes in handy.
That said, the differences in rendering and resolution are really very minor, and the choice most importantly comes down to the focal range. Which is as it should be with modern lenses. And both lenses are of course very well-built, weather sealed, and have fantastic image stabilization. But you probably knew that already.
¹ multiply lenstip numbers by 2×16.7 mm to convert from lpmm (line pairs per millimeter) to LW/PH (line widths per picture height)
With version 3.0, my favorite photo editing software Darktable started the journey towards a scene-referred editing pipeline. This means that most edits are no longer bounded between a fixed black at zero and white at one, but can range from zero to infinity, like light itself. This is the norm in video editing and video games, but unique among photo editors at the moment. The scene-referred pipeline has brought changes to pretty much all parts of the editing workflow. I have been frustrated with it, too, even opting to use Capture One for a while. But with version 3.6, Darktable seems to have found a new stride. This article is about my favorite features in Darktable 3.6, contrasting it with Capture One's more traditional toolset.
A camera sensor records light. When there is very little light, we call that "black", but there is really no similar label for when there is a lot of it. It's just very bright. A very bright blue light is still blue, and could always be brighter. However, screens and print and analog film can not get arbitrarily bright. Instead, they have a limited maximum brightness, and the brightest color is always white. Which is why pushing things ever brighter in photo editing eventually turns them white, because that is the brightest thing possible on screen or print.
But this is a limitation of the display, not the editor or the recording. Thus it makes sense to edit photos with a physical understanding of light, where making things brighter does not make them whiter. Consider sharpening, or color grading, or contrast adjustments: all of these can push pixels brighter or darker, but really shouldn't suddenly whiten them at some arbitrary brightness. In the new scene-referred Darktable, colors are squeezed into a viewable or printable form only at the very end of the processing, retaining their physical properties for as long as possible. In most other tools, the squeezing is done early on, and edits are hobbled thereafter.
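As a toy numeric illustration (this is not Darktable's actual code), consider what pushing exposure does to a bright, saturated blue pixel in a display-referred pipeline versus a scene-referred one:

```python
import numpy as np

# A bright, saturated blue pixel; 1.0 is display white.
pixel = np.array([0.2, 0.4, 3.0])

# Display-referred: values are clipped to white early, so pushing exposure
# destroys the ratio between the channels and the color drifts towards white.
display_referred = np.clip(pixel * 2.0, 0.0, 1.0)   # [0.4, 0.8, 1.0]

# Scene-referred: values stay unbounded, the channel ratios (the color) survive,
# and only the final tone mapping (filmic rgb) squeezes them onto the display.
scene_referred = pixel * 2.0                         # [0.4, 0.8, 6.0]
```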
In practice, this means Darktable can change tones with any number of tools without losing color information: the tone equalizer, or the rgb curve, or levels rgb, or the contrast equalizer, or sharpening, or color balance all manipulate tones without fear of losing highlight information. Only filmic rgb at the very end determines which bright pixels to turn white. In a way, the scene-referred part of the pipeline feels like raw editing, while the rest feels like editing a JPEG. Contrast this with Capture One, where only the sliders in the Exposure and High Dynamic Range modules can recover bright pixels from white, but e.g. Clarity, Levels, and Curve can not. After getting used to the scene-referred way of working, this seems like an arbitrary limitation, and really makes Levels and Curve less useful than they should be.
Interestingly, this leads to a somewhat different editing workflow in Darktable compared to Capture One. Since the scene-referred pipeline is mostly nondestructive, I can play with the overall tonality with abandon, safe in the knowledge that I will be able to recover any lost detail later. Thus I commonly use the exposure slider to expose for my mid-tones, and then rein in my burning highlights with the tone equalizer¹ or filmic rgb's white relative exposure². In Capture One, digital overexposure is much more difficult to recover than underexposure, so I usually use the high dynamic range tools to underexpose slightly, and then recover mid-tone exposure with the high dynamic range shadows slider or the tone curve. While these two workflows are largely equivalent, I find it interesting that Darktable can go either way, while Capture One cannot.
A related area of rapid development in Darktable has been the color balance rgb module, and color calibration. Both of these allow for very fine-grained control over various aspects of color, handily parametrized by hue or tone or channel. There is some subtlety in the various controls for "colorfulness", "vibrance", "chroma", "brilliance", and "saturation" that took some learning. But it also taught me to see color in a more nuanced way, which is a good thing. In mixed lighting, I can now use several instances of the color calibration module³ for different image areas, producing more natural results than I am capable of in Capture One.
A particular pain point in both Darktable 3.4 and Capture One was the rendering of colored highlights. In Darktable 3.4, bright colors often turned weirdly radiant, and then aggressively desaturated, with a sharp and unnatural edge in between. I don't quite know why, but this problem seems much reduced in 3.6. Both issues are now dealt with quickly and cleanly with a quick tug on the filmic rgb white relative exposure slider. A real boon, that slider is. Now I just wish it didn't affect contrast quite so much and didn't also influence shadows, despite its name. Oh well, a man can dream. Capture One had the opposite problem, sometimes twisting colored highlights into bright primary colors before desaturating them, leading to cyan skies and yellow skin at the fringes of overexposure. This is still one of my major complaints about Capture One.
My last point of struggle with Darktable has long been the rendering of colors of my Fujifilm camera. Something about the colors always seemed off, in a way they weren't in Capture One. Greens too blue, browns too magenta, and skin too pink. After quite some experimentation, I found that this was simply due to Fujifilm depending on a bit of proprietary color magic for appropriate rendering. I now apply one of Stuart Sowerby's LUTs in chromaticity blending mode after filmic rgb. This leaves tonal adjustments fully to Darktable, but fixes the awkward colors perfectly. And it even allows me to play with the film simulations Fuji is famous for.
Actually, this highlights a unique strength of Darktable: the order of modules in the image editing pipeline is customizable, and most modules can be applied with masks and blending modes. Only after using Capture One for a few months did I understand how powerful this concept really is. The key difference is that "layers" in Capture One (et al.) merely apply the image editing pipeline with different parameters in different places⁴, while Darktable's modules actually apply sequentially. This enables true incremental editing, such as fixing white balance correctly for one part of the image, then applying another level of white balance just for the blue parts. Or fixing colors in a LUT after all the tonal edits, without fear of destroying highlights or shadows in the process.
So overall, I am very happy with Darktable in its latest iteration. As always, there are plenty of nitpicks I could bring forward, such as its somewhat sluggish operation and a UI that could still be streamlined a bit⁵. But it has once again displaced Capture One for my own work, and it makes me happy.
¹ similar to the high dynamic range module, but more fine-grained
² essentially a tone curve adjustment
³ the new way of doing white balance
⁴ i.e. you can fully desaturate a picture to black-and-white, then "recover" saturation in a layer
⁵ the new quick access panel in 3.6 has already helped a lot!