Basti's Scratchpad on the Internet
27 Oct 2020

Differences Between Camera Sensors

Most cameras have the option to capture raw images, i.e. unprocessed image data straight from the image sensor. In theory, these images are pure physical measurements of light, and should therefore be directly comparable between cameras. But are they? To investigate, I took a raw picture of a color target with each of my five cameras, and compared their output.

Capturing accurate colors is a surprisingly intricate matter, as the color depends on the spectrum of the illumination, the reflectance spectrum of the colored object, and the color filters in the camera. I normalized the illumination by taking pictures on an overcast day outside1, and used a standard color target with 24 patches of certified colors.

The resulting colored spectra are captured on the camera sensor by photon counters behind three color filters “red”, “green”, and “blue”2, which differ from camera to camera in their spectral sensitivity. The recorded colors then get projected onto a computer screen by another set of “red”, “green”, and “blue” LEDs with yet another spectral makeup, or printed in three or more inks on paper. And all of this is then seen by eyes with yet another set of “red”, “green”, and “blue” retinal cells. Let's leave it at: color is complicated.

At any rate, I took pictures at base ISO in raw, white-balanced and exposed for the third grey patch, and adjusted in saturation on the red patch. Here are the colors on the color checker and the recorded colors of my cameras:

The image shows the 24 colored patches of the color target3. Within each patch, five rectangles show the recorded color of (in reading order) the Pentax Q7, the Ricoh GR, the Panasonic LX100, the Fujifilm X-T2, and the Google Pixel 2.

Due to my exposure and white balance calibration, the third grey patch is completely equalized and shows no difference between the cameras and the target. The “Light Skin” patch (second on the first row) is also very close to equal, which is an important sanity check for my measurement procedure, as skin tones are optimized by all cameras. Seeing that the skin color patch is one of the most similar hopefully indicates that I did not do anything gravely wrong.

The last row shows greys, which are very similar across cameras (excluding the white patch for now). This indicates that RAW values indeed correspond to photon counts, with little additional processing. I have no idea why the white patch is off. Perhaps due to specular reflections on the white patch? Color is complicated.

As for the other colors, the Panasonic LX100 (center) and the Fujifilm X-T2 (bottom left) seem to record somewhat more accurate colors than the other three cameras. The Pentax Q7 (top left) and Ricoh GR (top right) seem comparatively weak across the board. The Panasonic (center) is great in darker blues and reds and greens, but somewhat weaker in yellows and pastels. The Fujifilm (bottom left) is very good in almost all colors, with pastels somewhat weaker than darker colors. The Google Pixel 2 (bottom right) is better in reds and greens and yellows than in blues and oranges. Pastel colors in general seem weaker than darker colors, which might be due to the same process that affected the white patch as well.

I think I learned something from this experiment: there are indeed color differences between cameras even when shooting RAW, but they are relatively minor. Whenever I see larger differences in my pictures, they are likely (easily correctable) white balance differences, not sensor differences. When in doubt, however, my Panasonic LX100 and Fujifilm X-T2 do capture more lifelike colors than my other cameras.

Footnotes:

1
any kind of artificial light is comparatively terrible for color reproduction!
2
in quotes, because colors are three-channel information, and we're talking about single channels here.
3
encoded as sRGB, which might or might not be displayed correctly on your screen, depending on your OS, your browser, your settings, your room's illumination, and your screen. Color is complicated.
Tags: photography
16 Oct 2020

Lenses Matter

A common trope in discussions about cameras on the internet is: “sensor size trumps all”, and as a corollary, “smartphone cameras suck”. Which is obviously false to anyone who has ever taken a good picture on a smartphone (with its tiny sensor).

The argument, however, goes like this: Bigger pixels1 can capture a greater dynamic range, and bigger sensors are made from bigger pixels. Therefore, and here is the dangerous leap, cameras with bigger sensors take better pictures.

But I currently have a bit of time on my hands, and way too many cameras, so let's try it out, shall we?

My five contestants are the Pentax Q7, the Ricoh GR, the Panasonic LX100, the Fujifilm X-T2, and the Google Pixel 2.

So I went for a walk around the neighborhood, and took a few pictures with a bag full of cameras. In particular, I was looking for a high dynamic range scene, in the form of a sunset. All pictures were taken at a low ISO2, underexposed by two stops to account for the backlighting, and at 27 mm (equivalent3) focal length.

Measurements predict that the dynamic range and noise levels of these cameras should get better as the sensor size increases.


The first scene was taken essentially in auto mode, and adjusted in Darktable to match roughly in tonality (exposure adjustments of ±0.25EV), but not color. In general, all cameras were able to capture the entire dynamic range of this scene, from the brightest almost-white area around the sun, to the darkest patches in the shade near the waterline.

This, right off the bat, is the most important result of this experiment: Most scenes do not have enough dynamic range to matter.

Differences appear in the noise levels, which are fairly constant across all cameras, with only the X-T2 being noticeably less noisy than the rest. However, this advantage of the X-T2 is not one of sensor size (the Ricoh GR has a sensor of the same size), but simply of being two generations newer.

The largest difference between these photos is instead in flare: the Pixel 2, Pentax Q7, and Ricoh GR flare across the entire image, while the LX100 and X-T2 are far more controlled. This is probably because the Pixel 2 and Ricoh GR have collected dust and scratches from my pockets, and because the Pentax Q7's lens is somewhat mediocre. The LX100 and X-T2 are in better condition, and of a better design, too. All lenses also show some amount of colorful lens flares to the bottom right of the sun, particularly in the LX100 and Pentax Q7, and the Ricoh GR shows strange, colored rings around the edge of its frame.


To look into these differences between lenses in more detail, the second scene was taken from a slightly different spot, with the lenses stopped down (if possible). This image includes two bright sun spots, one from the sun itself, and one in the reflection on the water.

Notably, the Pixel 2, like most smartphones, cannot change its aperture. It also produces a bright red flare towards the left edge of the image. The Pentax Q7 again flares dramatically and colorfully, and my Ricoh GR's lens is probably beyond cleaning at this point, judging from the massive flare it produces. Again, the standout pictures here are the X-T2 and LX100, with well-defined sun stars and comparatively little flaring. (Note the red dots in the X-T2: these are the phase-detection pixels on the X-T2's sensor reflecting light back into the lens and back onto the image.)


Here is an enlargement of the dark patch at the right edge of the second scene, to judge noise levels and detail in the shadows. Perhaps surprisingly, the differences between these images are not big at all. The X-T2 clearly wins in terms of both shadow detail and noise levels, but only by a rather small margin that is really only visible when enlarged. The Pentax Q7 resolves almost no detail in the shadow, but neither does the Ricoh GR. So sensor size does not seem to decide this round.


This third scene was shot one day later, with settings chosen for optimal image quality: base ISO, aperture stopped down one stop from wide open, focus on a high-contrast part of the scene. I processed these images with white balance from a grey card, then edited exposure, contrast, and saturation to match across images. Chromatic aberrations and vignetting were corrected using a profile or by hand. No sharpening was applied, but the highest-quality demosaicing was selected (AMaZE for Bayer sensors, Markesteijn 3-pass for X-Trans sensors).

At this magnification, even my 24in/4K screen only barely hints at differences in sharpness. Which implies that prints up to A3 should be possible from all of these cameras without much difference in perceived sharpness.


If we crop into these images, differences in detail resolution indeed become apparent. The Pentax Q7 is definitely the least detailed, on account of its soft lens. The Pixel 2 and LX100 show a very similar amount of detail, which is a testament to how good modern smartphone cameras have truly become. While the two cameras with the biggest sensors, the Ricoh GR and the X-T2, do resolve visibly more detail, the differences are not dramatic.

So does sensor size trump all? From these experiments, I wouldn't say so. Essentially, all of these cameras took decent images. The X-T2 was clearly better than the rest, but not hugely so, while also being much bigger, more expensive, and newer.

While the measured dynamic range differences surely exist, they did not manifest even in a sunset, so I'm inclined to discount their importance. This is, frankly, not what I expected. The minuscule differences in resolution were equally unexpected.

What did make a big difference, however, was lens quality. Flare control, aperture range, and focal range play a huge role in the resulting image quality. Zooming in (and stopping down) is still the most effective way of extracting more detail from a scene. And having a camera with a secure grip that is quick to operate helps, too. I think I am growing out of sensor size snobbery, is what I'm saying.

Footnotes:

1

technically, “sensor elements”, not “picture elements”, i.e. “sensels”

2

lower ISO → less noise.

3

the focal length a lens on a 35 mm film camera would need to cover the same angle of view.

Tags: photography
17 Sep 2020

Dear Computer, We Need to Talk

After years of using Linux on my desktop, I decided to install Windows on my computer, to get access to a few commercial photo editing applications. I'll go into my grievances with Linux later, but for now:

I tried to install Windows, you won't believe what happened next

Like I have done many times with Linux, I download a Windows image from my university, write it to a USB drive, then reboot into the USB drive. The USB drive can't be booted. A quick internet search leads me to a Microsoft Support page on how to Install Windows from a USB Flash Drive, which says that simply writing the image to the drive will not work.

Instead, one has to format the stick as FAT32, make it active, then copy the files from the image to it. So I follow the instructions and open the Disk Management program. It offers neither a FAT32 option nor a way to make the partition active. I settle on (inactive) exFAT instead. It doesn't boot.

I switch over to Linux, where I can indeed make a FAT32 partition, and I can mark it as bootable, which I take as the equivalent of active. But Linux cannot open the Windows image to copy the files onto the USB stick. So back to Windows, for copying the files. Except they can't be copied, because some of them are larger than 4 GB, which can't be written to a FAT32 partition. What now?

While researching how to download a different version of Windows 10, I stumble upon the Media Creation Tool, which automatically downloads Windows 10 and writes it to the USB stick correctly. Why was this not pointed out in the article above? Who knows. At any rate, it works. I can finally install Windows.

The installation process requires the usual dozen-or-so refusals of tracking, ads, privacy intrusions, and voice assistants. I wish I could simply reject them all at once. And then the install hangs while "polishing up a few things". Pressing the helpful back button and then immediately the forward button unhangs it, and the installation completes.

Next up are drivers. It feels anachronistic to have to install drivers manually in this day and age, but oh well. The new GPU driver to make screen tearing go away, a driver for my trackball to recognize the third mouse button, a Wacom driver, ten or so Intel drivers of unknown utility. The trackball driver is not signed. I install it anyway. The GPU driver does not recognize my GPU and can't be installed. A quick internet search reveals that my particular AMD/Intel GPU/CPU was discontinued from support by both AMD and Intel, and does not have a current driver. But fora suggest that versions up to 20.2.1 of the AMD driver work fine. They don't; the driver crashes when I open images in my photo editor. An even older version published by Intel does work correctly. So now I am running an AMD GPU with an Intel driver from 2018.

Installing and setting up Firefox and my photo editors works without issue, thank goodness. Emacs has a Windows installer now, which is greatly appreciated. OpenCL and network shares just work. This is why I'm installing Windows next to my Linux.

But Windows is still not activated. I copy my university's product key into the appropriate text box, but hesitate: that's for Windows Enterprise, and I'd be just fine with Home. So I cancel without activating. A helpful link in the activation system sends me to the Microsoft Store to get my very own copy of Windows Home for €145, which normally retails for around €95, so that's a no-go. Whatever, I'll go with my university's Enterprise edition. Except the activation box now says my product key is invalid. And the Store now literally says "We don't know how you got here but you shouldn't be here" instead of selling me Windows. After a restart, Windows Enterprise installs and activates, even though I never actually completed the activation.

I install Git, but in order to access my GitHub I need to copy over my SSH key from the Linux install. Which I can't boot at the moment, because installing Windows overwrites the bootloader. This is normal. So I download Ubuntu, write it to the USB stick, boot into it, recover the bootloader, boot into my old install, reformat the stick, copy the files to the stick, boot back into Windows, and the files aren't on the stick. Tough. Boot back into Linux, copy the files onto the stick, eject the stick, boot back into Windows, copy the files to the computer. Great user experience.

Now that I have my SSH key, I open a Git Bash to download a project. It says my credentials are incorrect. I execute the same commands in a regular CMD instead of Git Bash, and now my credentials are correct. Obviously.

There are several programs that claim to be able to read Linux file systems from Windows. They do not work. But Microsoft has just announced that you will be able to mount Linux file systems from WSL in a few weeks or months. So maybe that will work!

I set my lock screen to a slideshow of my pictures. Except my pictures do not show up, and I get to see Windows' default pictures instead. An internet search reveals that this is a widespread problem. Many "solutions" are offered in the support fora. What works for me is to first set the lock screen to "Windows Spotlight", then to "Slideshow". Only in that order will my pictures be shown.

I will stop here. I could probably go on ad infinitum if I wanted to. This was my experience of using Windows for one day. I consider these problems relatively benign, in that all of them had solutions, if non-obvious ones.

Why install Windows in the first place?

Part of the reason for installing Windows was my growing frustration with Linux. I have been a happy user of KDE in various flavors for about seven years now. But ever since I got into photo editing, things have become problematic:

My photo editor requires OpenCL, but the graphics driver situation on Linux is problematic, to say the least. I managed to get ROCm running most of the time, but kernel updates frequently broke it, or required down- or upgrading ROCm. It was a constant struggle.

I wanted to work with some of my data on a network share, but KDE's implementation of network shares does not simply mount them for applications to use; instead it requires each application to be able to open network locations on its own. Needless to say, this almost never worked, requiring many unnecessary file copies. Perhaps Gnome handles network shares better, but let's not open that can of worms.

Printing photos simply never worked right for me. The colors were off, photo papers were not supported, the networked printer was rarely recognized. This was the case with a Samsung printer, an Epson, and a Canon. One time, a commercial printer driver for Linux printed with so much ink that it dripped off the paper afterwards. Neither Darktable nor Gimp nor Digikam has a robust printing mode. I generally resorted to Windows for printing.

I ran that Windows in a virtual machine. With VirtualBox, the virtual machine would be extremely slow, to the point where it had a delay of several seconds between typing and seeing letters on the screen. VMware did better, but would suddenly freeze and hang for minutes at a time. Disabling hugepages helped sometimes, for a short while. The virtual machine network was extremely unreliable. Some of these issues were probably related to my using a 4K screen.

Speaking of screens, I have two screens, one HighDPI 4K and one normal 1440p. Using X, the system can be either in HighDPI mode or in normal mode, but it can't drive the two displays in different modes. Thus the second monitor was almost useless, and I generally worked only on the 4K screen. With Wayland I would have been able to use both screens in different modes, but not to color-calibrate them or record screen casts. Which is completely unacceptable. So I stuck with one screen and X. In Windows, I can use both screens and calibrate them.

Additionally, Linux hardware support is still a bit spotty. My SD card reader couldn't read some SD cards because of driver issues. It would sometimes corrupt the SD card's file system. USB-connected cameras were generally not accessible. The webcam did not work reliably. The CPU fan ran too hot most of the time.

So there had been numerous grievances with Linux that had no solutions. Still, I stuck with it because so many of the smaller issues were actually fixable if I put in the work. In fact, I had accumulated quite a number of small hacks and scripts for various issues. I feared that Windows would leave me without recourse in these situations. And it doesn't. But at least the bigger features generally work as advertised.

Where do we go from here?

Just for completeness' sake, I should really find an Apple computer and put it through its paces. From my experience of occasionally using a MacBook for teaching over the last few years, I am confident that it fares no better than Linux or Windows.

Were things always this broken? How are normal people expected to deal with these things? No wonder every sane person now prefers a smartphone or tablet to their computers. Limited as they may be, at least they generally work.

There is no joy in technology any more.

Tags: computers linux windows
27 May 2020

How to Write a Dissertation

Assembling scientific documents is a complex task. My documents are a combination of graphs, data, and text, written in LaTeX. This post is about combining these elements and keeping them up to date, while not losing your mind. These techniques work on any Unix-like system: Linux, macOS, or the WSL.

Text

For engineering or science work, my deliverables are PDFs, typically rendered from LaTeX. But LaTeX is not the most pleasant of writing environments. So I've tried my hand at org-mode and Markdown, compiled them to LaTeX, and then to PDF. In general, this worked well, but there always came a point where the abstraction broke and the LaTeX leaked up the stack into my document. At which point I'd essentially be writing LaTeX anyway, just with a different syntax. After a few years of this, I decided to cut out the middleman, bite the bullet, and just write LaTeX.

That said, modern LaTeX is not so bad any more: XeLaTeX supports normal OpenType fonts, mixed languages, proper unicode, and natively renders to PDF. It also renders pretty quickly. My entire dissertation renders in less than three seconds, which is plenty fast enough for me.

To render, I run a simple makefile in an infinite loop that recompiles my PDF whenever the TeX source changes, giving live feedback while writing:

diss.pdf: diss.tex makefile $(graph_pdfs)
    xelatex -interaction nonstopmode diss.tex

We'll get back to $(graph_pdfs) in a second.
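The "infinite loop" is nothing fancy, just a small shell loop around make. A minimal sketch of what I mean (the one-second polling interval is arbitrary, and a proper file watcher such as inotifywait would work just as well):

while true; do
    make diss.pdf   # make only recompiles if diss.tex or a graph changed
    sleep 1         # poll once per second
done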

Graphs

A major challenge in writing a technical document is keeping all the source data in sync with the document. To make sure that all graphs are up to date, I plug them into the same makefile as above, but with a twist: All my graphs are created from Python scripts of the same name in the graphs directory.

But you don't want to simply execute all the scripts in graphs, as some of them might be shared dependencies that do not produce PDFs. So instead, I only execute scripts that start with a chapter number, which conveniently sorts them by chapter in the file manager, as well.

Thus all graphs render into the main PDF and update automatically, just like the main document:

graph_sources = $(shell find graphs -regex "graphs/[0-9]-.*\.py")
graph_pdfs = $(patsubst %.py,%.pdf,$(graph_sources))

graphs/%.pdf: graphs/%.py
    cd graphs; .venv/bin/python $(notdir $<)

The first two lines build a list of all graph scripts in the graphs directory, and their matching PDFs. The last two lines are a makefile recipe that compiles any graph script into a PDF, using the virtualenv in graphs/.venv/. How elegant these makefiles are, with recipe definitions independent of targets.

This system is surprisingly flexible, and absolutely trivial to debug. For example, I sometimes use those graph scripts as glorified shell scripts, for converting an SVG to PDF with Inkscape or some similar task. Or I compile some intermediate data before actually building the graph, and cache it for later use. Just make sure to set an appropriate exit code in the graph script, to signal to the makefile whether the graph was successfully created. An additional makefile target graphs: $(graph_pdfs) can also come in handy if you want to ignore the TeX side of things for a bit.
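For illustration, a graph script in this scheme can be as small as the following sketch (the file name and the plotted data are made up; the only real requirements are that the script lives in graphs/, starts with a chapter number, and saves a PDF under its own name):

# graphs/2-example.py (hypothetical name: chapter number, then a short slug)
import sys
import matplotlib.pyplot as plt

def main():
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot([0, 1, 2, 3], [0, 1, 4, 9])  # placeholder data
    ax.set_xlabel("x")
    ax.set_ylabel("x squared")
    fig.savefig("2-example.pdf")  # same base name as the script, as the makefile expects
    return 0  # a non-zero exit code would tell make that the graph failed

if __name__ == "__main__":
    sys.exit(main())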

Data

All of the graph scripts and TeX are of course backed by a Git repository. But my dissertation also contains a number of databases that are far too big for Git. Instead, I rely on git-annex to synchronize this data across machines from a simple WebDAV host.

To set up a new writing environment from scratch, all I need is the following series of commands:

git clone git://mygitserver/dissertation.git dissertation
cd dissertation
git annex init
env WEBDAV_USERNAME=xxx WEBDAV_PASSWORD=yyy git annex enableremote mywebdavserver
git annex copy --from mywebdavserver
(cd graphs; pipenv install)
make all

This will download my graphs and text from mygitserver, download my databases from mywebdavserver, build my Python environment with pipenv, recreate all the graph PDFs, and compile the TeX. A process that can take a few hours, but is completely automated and reliable.
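The all target in that last command is not part of the snippets above; it is just a phony target that ties everything together. A minimal sketch, reusing the variables and rules from earlier (graphs is the optional convenience target mentioned above):

.PHONY: all graphs
all: diss.pdf
graphs: $(graph_pdfs)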

And that is truly the key point: the last thing you want while writing is to be distracted by technical issues such as "where did I put that database again?", "didn't that graph show something different the other day?", or "I forgot my database file at work and now I'm stuck at home during the pandemic and can't make progress". Not that any of those would have ever happened to me, of course.

Tags: computers emacs workflow
19 May 2020

How I Record Screen Casts

This semester is weird. Instead of holding my "Applied Programming" lecture as I normally would, live-coding in front of the students and narrating my foibles, this time it all had to be done online, thanks to the ongoing pandemic. Which meant I had to record videos. I had no idea how to record videos. This is a writeup of what I did, in case I have to do more of it. You can see the results of my efforts in my Qt for Python video tutorials and my file parsing with Python video tutorials. Through some strange coincidences, wired.com wrote an article about my use of OBS.

Working on Linux, I used the Open Broadcaster Software, or OBS for short, as my recording program. OBS can do much more than record screencasts, but I only use it for two things: Recording a portion of my screen, and switching between different portions.

To this end, I divide my screen into four quadrants. The top left is OBS, for monitoring my recording and mic levels. The bottom left is a text editor with my speaker notes. The top right and bottom right are my two recording scenes, usually a terminal or browser in the top right, and a text editor in the bottom right. The screenshot shows the Editor scene, which has a filter applied to its source to record only the bottom right quadrant. On a 4K screen, each quadrant is exactly full HD.

In OBS's settings, I set hotkeys to switch scenes: I use F1 and F2 to select the Browser and Editor scenes, and F6 for starting and stopping recordings. For more compatible video files, I enable "Automatically remux to mp4" in OBS's advanced settings.

The second ingredient to my recording setup is KDE, where I assign F3 and F4 to activate the browser or editor window (right click any window → More Actions → Assign Window Shortcut). And to make my recordings look clean, I disable window shadows for the duration of the recording.

With these shortcuts, I hit F1 and F3 to switch focus and scene to the browser, or F2 and F4 for the text editor. To make this work smoothly, I disabled these shortcuts within my terminal, browser, and text editor. But always be wary of accidentally getting those out of sync. I don't know how often I accidentally recorded the wrong part of the screen and had to redo a recording.

Anyway, with this setup, I can record screen casts with very minimal effort. The last ingredient, however, is editing, and I loathe video editing. I'd much rather record a few more takes than spend the same time in a video editor. So instead, I record short snippets of a few minutes each, and simply concatenate them with FFmpeg:

Create a file concatenate.txt that lists all the files to be concatenated:

file part-one.mp4
file part-two.mp4
file part-three.mp4

then run ffmpeg -f concat -i concatenate.txt -c copy output.mp4 to concatenate them into a new file output.mp4.

The great thing about this method is that it uses the copy codec, which does not re-encode the video: concatenating only takes a fraction of a second, and does not degrade quality.
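If the snippets follow a predictable naming scheme, the list file does not even have to be written by hand. A small sketch, assuming parts named so that they sort correctly, e.g. part-01.mp4, part-02.mp4, and so on:

printf "file %s\n" part-*.mp4 > concatenate.txt  # the shell glob lists the parts in lexical order
ffmpeg -f concat -i concatenate.txt -c copy output.mp4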

In summary, this setup works very well for me. It is simple and efficient, and does not require any video editing. The ability to switch scenes is cool and powerful. Still, recording videos is a lot of work. All in all, the 18 videos in the file parsing tutorials took 250 takes, according to my trash directory.

Tags: programming
bastibe.de by Bastian Bechtold is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.