<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" >
    <channel>
        <title>PIXLS.US</title>
        <link>https://pixls.us</link>
        <description>The PIXLS.US feed. The F/OSS photography website.</description>

        <atom:link href="https://pixls.us/feed.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        <lastBuildDate>Mon, 11 Jan 2021 18:28:50 GMT</lastBuildDate>
        <category></category>
        <image>
            <url>https://pixls.us/images/logo/px-logo-url-250.png</url>
            <title>PIXLS.US</title>
            <link>https://pixls.us</link>
        </image>

        <item>
            <title><![CDATA[Darktable 3: RGB or Lab? Which Modules? Help!]]></title>
            <link>https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/</link>
            <guid isPermaLink="true">https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/</guid>
            <pubDate>Sun, 26 Jan 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/hanny-naibaho-correct-blur-scaled.jpg" /><br/>
                <h1>Darktable 3: RGB or Lab? Which Modules? Help!</h1>
                  
                <p><a href="https://darktable.fr/2020/01/darktable-3-rgb-ou-lab-quels-modules-au-secours/">Original post in French</a> by <a href="https://darktable.fr/author/aurelienpierre/">Aurélien PIERRE</a>, edited by the pixls community.</p>
<p>Darktable is slowly converging to a scene-referred RGB workflow. Why is that? What does it involve? How does the use of darktable change? Answers here…</p>
<p><em>This article begins with a three-section introduction to the Lab space. You don’t need to understand it in detail in order to understand what comes next.</em></p>
<h2 id="what-is-lab-">What is Lab?<a href="#what-is-lab-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The color space <a href="https://en.wikipedia.org/wiki/CIELAB_color_space">CIE Lab</a> was published in 1976 by the International Commission on Illumination (CIE), in an attempt to mathematically describe the color perception of the average human being. Lab space aims to decouple the brightness information (L channel) from the chroma information (channels a and b) and takes into account the non-linear corrections that the human brain makes to the linear signal it receives from the retina. Lab space is derived from <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space">CIE XYZ space</a>, which represents the physiological response of 3 of the 4 types of photo-sensitive cells in the retina (the cones).</p>
<p>The XYZ space represents what happens in the retina, and Lab represents what subsequently happens in the brain, but both color spaces are
<a href="https://en.wikipedia.org/wiki/Scientific_modelling">models</a>,
that is, attempts to describe reality and not the reality itself. There are always discrepancies between a model and reality, but these models are refined and improved as research progresses. Moreover, a model often represents reality only under certain conditions and assumptions, which define the area of validity of each model.</p>
<p>Regarding their respective areas of validity: XYZ works well almost all the time, while Lab only works as long as the image has a contrast below 100:1 (i.e. a maximum dynamic range of about 6.5 EV). When the Lab model was created in 1976, researchers were working with scanned negatives, and color negatives have a dynamic range of 6 to 7 EV. 6.5 EV is also the static contrast of the retina; it was only after 1976 that we realized the brain constantly performs an HDR fusion of several images per second, meaning that static contrast as a model parameter doesn’t make much sense in the context of human vision.</p>
<p>What is CIE Lab for? It is intended to predict the perceptual difference between two colors (the delta E) and to drive gamut adaptations when converting an image from one color space to another: one can remap out-of-gamut colors to the closest color in the target color space via strategies that numerically minimize the delta E.</p>
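<p>To make the delta E idea concrete, here is a minimal Python sketch of the original 1976 delta E, which is simply the Euclidean distance between two Lab coordinates (later CIE revisions, delta E 1994 and 2000, refine this formula; the sample colors are illustrative):</p>

```python
import math

def delta_e_1976(lab1, lab2):
    """Euclidean distance between two CIE Lab colors (the 1976 delta E)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two similar greens: a delta E around 2.3 is roughly a just-noticeable difference.
print(delta_e_1976((52.0, -40.0, 35.0), (53.0, -38.0, 36.0)))  # ≈ 2.45
```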
<p>The big disadvantages of Lab are:</p>
<ol>
<li>It doesn’t work well for strong contrast (> 7 EV), and especially outside the range [1:100] Cd/m²,</li>
<li>It is not linear in hue: if one fixes a pixel’s a and b chromaticity components and changes only its brightness L, one would expect the hue to stay the same at the new brightness (this was the design goal of the Lab space); however, a slight hue shift occurs, more or less marked depending on the original color of the pixel.</li>
</ol>
<h2 id="what-is-lab-doing-in-darktable-">What is Lab doing in darktable?<a href="#what-is-lab-doing-in-darktable-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The original idea was to allow separate manipulation of the brightness and chromaticity. In 2009, the year of the project’s creation, cameras had dynamic ranges quite close to Lab’s valid range; the idea was far from bad at the time, especially because darktable did not have a complex masking option then.</p>
<p>Advantages:</p>
<ol>
<li>Lab, being a reference space and therefore independent of the display color, makes presets very easy to set up and transfer,</li>
<li>Lab sets the middle gray (18%) to 50%, so the interface is more intuitive (the middle gray is in the middle of the graph of the tones, for example).</li>
</ol>
<p>Problems:</p>
<ol>
<li>Today’s cameras have dynamic ranges that are largely outside of the conditions under which Lab is valid, which makes the defects of this space more apparent. With dynamic ranges from 10 to 14 EV at 100 ISO, any recent camera does HDR by default, and Lab is not designed to handle that much dynamic range,</li>
<li>Pushing pixels in Lab space is very risky, especially when tackling compositing and image fusion with softened and feathered masks. We’ll get back to that, but it has to do with the next problem…</li>
<li>Lab is not adapted to physically realistic corrections, such as blurring, deblurring, denoising, and any filter that simulates or corrects for an optical effect.</li>
</ol>
<p>In brief, Lab was a youthful mistake. That said, all other photo-processing software seems to work by default in non-linear RGB spaces (with a “gamma” applied at the beginning of the pipe), which are essentially equivalent to Lab as far as their flaws and drawbacks for image filters are concerned.</p>
<h2 id="how-does-lab-work-">How does Lab work?<a href="#how-does-lab-work-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Everything (e.g. the camera sensor) starts from a linear RGB space. We convert linear RGB to XYZ. For the purposes of the demonstration, we can consider the XYZ space as a special RGB space whose primary colors have been slightly manipulated (that’s not the case, but it behaves the same way). XYZ is also a linear space.</p>
<p>We then switch from XYZ to Lab by applying a “gamma correction” to the luminance channel (from Y to L), and a rotation to the channels a and b. Mathematically, converting to Lab is like applying a 2.44 gamma to linear RGB, and it poses the same practical problem: it’s highly non-linear.</p>
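<p>For the curious, the Y → L step can be sketched in a few lines of Python using the standard CIE 1976 formula (a cube root, plus a linear segment near black). Note how it sends 18% middle grey close to L = 50, which is also roughly what a 1/2.44 power would do:</p>

```python
def xyz_y_to_lab_l(y, y_n=1.0):
    """CIE 1976 lightness L* from relative luminance Y (white point Y_n)."""
    t = y / y_n
    if t > (6 / 29) ** 3:
        f = t ** (1 / 3)            # cube root for most of the range
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29  # linear segment near black
    return 116 * f - 16

# Middle grey (18% reflectance) lands close to L* = 50:
print(xyz_y_to_lab_l(0.18))  # ≈ 49.5
```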
<h2 id="summary">Summary<a href="#summary" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Lab doesn’t work for high-contrast images and doesn’t work well even for images with moderate contrast. It encodes pixel values in a perceptual manner rather than a physical one, which will pose a problem in what follows. Lab was not designed for image processing, but only as a way to study human vision.</p>
<p class="aside">
   <strong>Clarification:</strong> I have used the term “gamma” or “gamma correction” incorrectly here. Strictly speaking, a gamma function is the specific electro-optical transfer function (EOTF) of old-school CRT screens, which is a power function with an exponent between 1.8 and 2.2. Nowadays, people incorrectly call “gamma” any power function used for integer encoding or for artistic lightness adjustments, which is confusing. Any encoding transfer function (whether it uses a power function or not) should be called an OETF (opto-electrical transfer function), and is used only to alleviate the limits of 8-bit integer file formats. Any artistic power-like brightness correction should be called a tone curve. Even if the operation is the same, it does not have the same meaning and should not be applied at the same place in the graphics pipe. But the ICC nomenclature continues to call “gamma” the exponent used to encode/decode RGB pixels in integer file formats, so here we are, mixing unrelated concepts under an umbrella name just because the maths are written the same. Still, when communicating with people outside the industry, it’s often easier to use the incorrect name so that everyone sort-of understands, even if it perpetuates the confusion.
   <br>
   By the way, power-like OETFs are completely unnecessary as long as you use floating-point arithmetic and file formats (32-bit TIFF, PFM, OpenEXR…).
</p>

<h2 id="the-limits-of-non-linear-spaces-in-image-processing">The limits of non-linear spaces in image processing<a href="#the-limits-of-non-linear-spaces-in-image-processing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p><strong>First of all, what do we mean by “linear”?</strong> If <em>y</em> is linear with respect to <em>x</em>, there is a relationship between them of the form <em>y = a·x + b</em>, where <em>a</em> and <em>b</em> are real constants. <strong>Linear means proportional to something, plus or minus a constant</strong>.</p>
<p>So, when we talk about linear RGB space, we mean that the RGB values are proportional to something. <strong>But proportional to what?</strong></p>
<p>The sensor counts the number of <a href="https://en.wikipedia.org/wiki/Photon">photons</a> it receives at each photosite. Every pixel contains information on the light spectrum captured at its position, in the form of 3 intensities (red, green, blue). The coefficient of proportionality <em>a</em> between the number of photons and the final RGB value is the ISO sensitivity of the sensor. The constant <em>b</em> is the sensor noise threshold. The RGB signal is proportional to the energy of the light emission picked up by the camera sensor.</p>
<p>From the point of view of human perception, intensities proportional to the physical energy of the light emission do not make sense. In fact, the brain applies a non-linear, roughly logarithmic correction, which the Lab color space approximates with a cube root. This means that we have an increased sensitivity to dim light and a reduced sensitivity to bright light.</p>
<p>However, all optical operations that are performed during image <em>capture</em> (e.g. lens blur, noise creation, or the effect of a color filter added to the lens) are applied directly to the photons. To reverse the lens blur or to simulate it when processing, we need to work on the linear RGB information, which is the closest thing to the photon data that is available to us.</p>
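<p>The effect can be demonstrated numerically. A blur is just a weighted average of neighbouring pixels; the sketch below (standard sRGB transfer functions, illustrative pixel values) averages a dark and a bright pixel both the correct way and the naive way:</p>

```python
def srgb_encode(x):
    """Linear light → sRGB display encoding (the standard piecewise OETF)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """Inverse transfer: sRGB-encoded value → linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# A blur averages neighbouring pixels. Average a dark and a bright pixel:
dark, bright = 0.02, 0.9

# Physically correct: average the linear values, then encode for display.
correct = srgb_encode((dark + bright) / 2)

# Incorrect: average the already-encoded values.
wrong = (srgb_encode(dark) + srgb_encode(bright)) / 2

print(round(correct, 3), round(wrong, 3))  # → 0.708 0.553
```

The naive average of encoded values comes out much darker than the physically correct blend, which is exactly why the dark silhouettes in the sRGB-blurred example bleed into the light background.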
<p>See for yourself: Which one of these two computer generated bokeh (original below) seems the most natural to you? (See also a <a href="https://chrisbrejon.com/cg-cinematography/chapter-9-compositing/#exposure-control-by-dof">more spectacular example on Chris Brejon’s website</a>)</p>
<figure>
   <img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/hanny-naibaho-bad-blur-scaled.jpg" alt='Lens blur applied in sRGB'>
   <figcaption>
      Lens blur applied in sRGB
   </figcaption>
</figure>

<figure>
   <img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/hanny-naibaho-correct-blur-scaled.jpg" alt='Lens blur applied in linear RGB then encoded in sRGB'>
   <figcaption>
      Lens blur applied in linear RGB then encoded in sRGB
   </figcaption>
</figure>

<figure>
   <img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/hanny-naibaho-original.jpg" alt='Original photo: Hanny Naibaho'>
   <figcaption>
      Original photo: Hanny Naibaho
   </figcaption>
</figure>

<p>Observe in particular how the dark silhouettes (bottom left) merge into the light background, or the contrast of the pentagons formed by the lens diaphragm on spotlights.</p>
<p>Another example, with a simple blur on smooth surfaces: Which of these gradations seems to you to be the most progressive?</p>
<figure>
   <img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/rgb-blur.jpg" alt='Left: blurred in linear RGB, then sRGB-encoded; Right: sRGB-encoded, then blurred'>
   <figcaption>
      Left: blurred in linear RGB, then sRGB-encoded; Right: sRGB-encoded, then blurred
   </figcaption>
</figure>


<p>These two examples were generated with Krita, which allows you to work in both linear and non-linear RGB, and has filter layers including a physically realistic lens blur.</p>
<p>This type of problem will occur the same way in darktable, as soon as you use the modules <strong>sharpen,</strong> <strong>high-pass,</strong> <strong>low-pass</strong>, and <strong>feathering/smoothing of drawn and/or parametric masks</strong> (which are blurs).</p>
<blockquote>
<p>Blurring, deblurring, or anything else connected to optics <strong>must</strong> take place in linear RGB. There’s no mathematical model* that allows correct gradients in RGB encoded for display (with an OETF) or in Lab, due to loss of connection between pixel values and light energy.</p>
</blockquote>
<p class="aside">
   * and just because the problems aren’t visible all the time doesn’t mean the problems aren’t always there. We can, up to a certain point, hide them with mathematical trickery (thresholds, opacity, etc.), but they will always end up coming out at the worst time. Trust me, I know exactly where to push to make it break.
</p>

<p>This is also the problem that arises when blending hue zones in the <strong>color zones</strong> module (even if a tweak, introduced in the “smooth” process mode, attempts to sweep it under the rug), which produces grainy and abrupt transitions.</p>
<p>The only darktable module that works in Lab to perform a blur, and where it still works reasonably well, is the <strong>local laplacian</strong> mode of the <strong>local contrast</strong> module. The price we pay for it to work is that it is computationally very heavy and its theory borders on rocket science. And even if the blur is stable, it comes with an ungraceful desaturation and a hue shift towards muddy grey-blue when you push the sliders a little too hard.</p>
<h2 id="the-benefits-of-a-linear-rgb-treatment">The benefits of a linear RGB treatment<a href="#the-benefits-of-a-linear-rgb-treatment" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So here’s where you say “as long as I’m not blurring my images or working only on color, I can still use Lab”.</p>
<p>That’s partly true, but in fact, even in those cases, working in linear RGB is simpler, with faster algorithms that can tolerate more extreme adjustments without showing annoying side-effects. Also, once again, Lab can’t support high dynamic ranges, so care must be taken to use the Lab modules <strong>after</strong> HDR tone mapping.</p>
<p>Strictly speaking, the only application where Lab is required is gamut mapping, when changing color space before sending the image to a file or to the screen. And even then, better spaces have been developed since 1976 (IPT-HDR, JzAzBz) for this purpose, working in HDR and with almost perfect hue linearity.</p>
<h2 id="the-current-state-of-darktable">The current state of darktable<a href="#the-current-state-of-darktable" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>With the release of darktable 3.0, the default pipeline (i.e. the basic module order) has been reordered around filmic RGB. There are 4 essential steps in this pipe:</p>
<ol>
<li>the <strong>demosaic</strong> module, which converts the raw file (which only contains the intensity of a single layer, R, G or B at each pixel site) to a picture (with complete RGB data for each pixel location),</li>
<li>the <strong>input color profile</strong> module, which converts the sensor’s RGB space to a standard working color space,</li>
<li>the <strong>filmic RGB</strong> (or the <strong>base curve</strong>) module, which translates from the linear space (proportional to light energy) into a non-linear (perceptually compressed) space,</li>
<li>the <strong>output color profile</strong> module, which converts from the standard working space to the RGB space of the screen or the image file.</li>
</ol>
<p>Note that the <strong>base curve</strong> approach remains the one applied by default because it allows darktable to more or less approximate the camera’s JPEG rendering as soon as the software is opened, which seems to be what many users prefer. Nevertheless, as part of darktable 3.0, the <strong>base curve</strong> was pushed back in the pixel pipe by default, to just before the <strong>filmic RGB</strong> module, which makes it safe for the colors produced by the modules applied before it. The base curve module was also given a color preservation mode, which produces results similar to filmic RGB. Between <strong>base curve</strong> and <strong>filmic RGB</strong>, in darktable 3.0, the difference is now only a matter of ergonomics and of the ability to recover very low light: <strong>filmic RGB</strong> is a little more complex to understand but faster to set up (once properly understood), and is more powerful when working in deep shadows.</p>
<p>Modules that work in linear RGB and output in linear (thus leaving the pipeline linear after them) are:</p>
<ol>
<li><strong>exposure</strong></li>
<li><strong>white balance</strong></li>
<li><strong>channel mixer</strong></li>
<li><strong>tone equalizer</strong> (which is linear in parts).</li>
</ol>
<p>The advantage of performing linear operations is that they do not affect the chrominance of the image (changing the luminosity leaves the chrominance intact) and preserve the energy proportionality of the signal. These modules must be positioned before <strong>filmic RGB</strong> or the <strong>base curve</strong>; <strong>exposure</strong> and <strong>tone equalizer</strong> are recommended before the <strong>input color profile</strong>. They can be used safely and without moderation. Note that there is one catch with the <strong>tone equalizer</strong>: it preserves <em>local</em> linearity (within image areas) but not <em>global</em> linearity (between areas). It corresponds to what would happen if we walked onto the scene with a flashlight and re-illuminated the objects by hand, so we still keep the physical coherence of the signal.</p>
<p>Modules that work in linear RGB and carry out non-linear, but chrominance-preserving operations, (provided that the <em>chroma preservation</em> mode is activated) are:</p>
<ol>
<li><strong>RGB curves</strong></li>
<li><strong>RGB levels</strong></li>
</ol>
<p>The chrominance is preserved via methods that constrain the RGB ratios at the input and output of the module so as to keep them identical. Note that <strong>RGB curves</strong> and <strong>RGB levels</strong> can be moved before or after <strong>filmic RGB</strong> depending on the intention, since they perform non-linear operations anyway. On the other hand, be careful not to use mask feathering on modules that come later in the pipe, as linearity is no longer assured and mask blurring + blending could produce unpleasant results.</p>
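<p>The RGB-ratio trick can be sketched as follows. This is an illustrative sketch, not darktable’s actual code: the norm used here is the maximum of the three channels, which is only one of several plausible choices:</p>

```python
def apply_curve_preserving_chroma(rgb, curve):
    """Apply a tone curve to an RGB pixel while keeping the R:G:B ratios.

    The curve is applied to a single norm (here the max channel), and all
    three channels are scaled by the same factor, so the ratios between
    channels, hence hue and chroma, are untouched.
    """
    norm = max(rgb)
    if norm == 0:
        return rgb  # pure black: nothing to scale
    scale = curve(norm) / norm
    return tuple(c * scale for c in rgb)

# Example: a simple power "curve" applied with chroma preservation.
out = apply_curve_preserving_chroma((0.4, 0.2, 0.1), lambda x: x ** 0.8)
print(out)  # the 4:2:1 channel ratios are preserved
```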
<p>Modules that work in linear RGB and carry out non-linear operations without preserving the chrominance are:</p>
<ol>
<li><strong>local tone mapping</strong> (we’ll get back to that)</li>
<li><strong>color balance</strong></li>
<li><strong>LUT 3D</strong></li>
</ol>
<p><strong>Color balance</strong> is designed to be applied to linear RGB data
that hasn’t been corrected for contrast, i.e. before <strong>filmic RGB, tone curves</strong> etc. It does not preserve the chrominance because its <em>explicit purpose</em>
is to adjust chrominance creatively. Similarly for <strong>LUT 3D</strong>, for which
the main goal is to emulate analog film emulsions or complex aesthetic transforms.</p>
<p>I remind readers here that <strong>filmic RGB</strong> is a dynamic range compressor, from the high dynamic range of the camera to the low dynamic range of the screen. It is not a tone curve intended to apply an artistic correction, but a mapping of tones that force-fits the sensor data into the available screen space. filmic RGB tries to protect the details as much as possible (which we assume <em>a priori</em> are in the middle tones) and to keep a certain optical readability in the image.</p>
<p>Before <strong>filmic RGB</strong>, in the linear pipe, we still find some modules that work in Lab but perform linear operations that should (strictly speaking) be carried out in linear RGB:</p>
<ol>
<li><strong>contrast equalizer</strong></li>
<li><strong>high pass</strong></li>
<li><strong>low pass</strong></li>
<li><strong>sharpen</strong></li>
<li><strong>denoise (non-local means)</strong></li>
</ol>
<p>These modules will need to be adapted in the future to work in a linear xyY space (derived from CIE XYZ), because <strong>it is a mistake to make them work in Lab</strong> (at least as a default). It’s a relatively easy job, because xyY separates the luminance (Y channel) from the chrominance (x and y channels) with a logic similar to Lab, minus the non-linear transformation. In the meantime, you can continue to use them, but in moderation. As for the <strong>contrast equalizer</strong>, note that it uses an edge-aware wavelet decomposition, which makes it quite heavy to execute, but very effective at preventing halos, even though it works in Lab.</p>
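<p>The XYZ → xyY conversion mentioned above really is simple; a sketch using the usual projective formula, with the D65 white point as an example:</p>

```python
def xyz_to_xyy(x, y, z):
    """CIE XYZ → xyY: chromaticity (x, y) plus the untouched luminance Y."""
    s = x + y + z
    if s == 0:
        return 0.0, 0.0, 0.0  # black: chromaticity is undefined, return zeros
    return x / s, y / s, y

# The D65 white point in XYZ:
print(xyz_to_xyy(0.9505, 1.0, 1.089))  # chromaticity ≈ (0.3127, 0.3290)
```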
<p>After <strong>filmic RGB</strong>, in the non-linear pipe, come all the other Lab modules, since they require a low dynamic range. Some of these modules could also be converted to xyY and moved before filmic RGB in the future (in particular the <strong>soften</strong>, <strong>grain</strong> and <strong>fill light</strong> modules). Also note that the <strong>vignette</strong> module was left at the end of the pipe, as before, even though it works in RGB. It would likely be better off before <strong>filmic RGB</strong>, or even before the <strong>input profile</strong>, but its code is surprisingly complex for what it does, and I haven’t had the time to unravel the imbroglio in order to understand what its working hypotheses are.</p>
<h2 id="modules-not-recommended">Modules not recommended<a href="#modules-not-recommended" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>A number of modules are not recommended due to fundamental errors in their design (in my personal opinion, which is based on my practical and theoretical experience in image retouching), and in the spirit of streamlining the workflow to a minimum number of steps. Nothing stops you from continuing to use them, especially since users regularly introduce me to use cases I hadn’t thought of. But the idea here is to give you the keys to the best possible result, as quickly and with as little fuss as possible.</p>
<h3 id="local-tone-mapping">Local Tone Mapping<a href="#local-tone-mapping" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Local tone mapping internally encodes RGB values logarithmically (they are decoded at the output, so no problem at that level), then applies a bilateral blur to these logarithmic values. As we saw above, the theory is clear: a blur on anything non-linear produces halos and fringes. And, as expected, the default setting range of this module is much reduced, so users have become accustomed to merging the output of the module at low opacity, which only hides the misery.</p>
<p><em>Prefer the tone equalizer.</em></p>
<h3 id="global-tone-mapping">Global Tone Mapping<a href="#global-tone-mapping" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This module works in the Lab color space to perform HDR compression and, if you have followed my explanations, you will understand that this is a contradiction in terms. In addition (and this is important), the white value is adjusted automatically from the maximum in the image, so the overall brightness of the image may change with the size of the export, due to the smoothing effect of rescaling (interpolation). Expect a JPEG lighter or darker than the preview in the darkroom.</p>
<p><em>Prefer filmic RGB.</em></p>
<h3 id="shadows-and-highlights">Shadows and highlights<a href="#shadows-and-highlights" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Similarly, this module works in the Lab color space to perform HDR compression, and uses a Gaussian or bilateral blur to isolate highlights and shadows. In practice, it quickly produces halos as soon as you push the parameters (even if the bilateral blur lessens the problem a little), and it even tends to add local contrast in the highlights (as a side effect), giving clouds a very HDR look. In the shadows, pushed a little hard, colors turn blue-grey. In practice, it does not work, except for minor corrections.</p>
<p><em>Prefer the tone equalizer.</em></p>
<h3 id="low-pass-filter">Low-pass filter<a href="#low-pass-filter" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The low-pass filter is actually a simple blur. A lot of people use it to invert the contrast and then blend it in overlay or soft/hard/linear light mode to compress the dynamic range. This is in fact exactly what the <strong>shadows and highlights</strong> module already does, in fewer steps for the user. As mentioned above, the <strong>low-pass</strong> module works in the Lab color space, so as far as the blur is concerned… expect the worst.</p>
<p><em>Prefer the contrast equalizer for blur, or the tone equalizer for local dynamic range compression.</em></p>
<h3 id="high-pass-filter">High-pass filter<a href="#high-pass-filter" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A lot of people use the high-pass module by blending it in overlay or soft/hard/linear light mode to add sharpness. This is in fact exactly what the <strong>sharpen</strong> module already does. The high pass is obtained by subtracting a blur (low pass) from the original image, so we have the same problem as with the <strong>low-pass</strong> module, since it also works in Lab.</p>
<p><em>Prefer the contrast equalizer for fine sharpness, or local contrast for general sharpness.</em></p>
<h3 id="sharpen">Sharpen<a href="#sharpen" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The sharpen module was originally intended to compensate for the optical low-pass filter of some sensors, as well as the smoothing due to demosaicing in some cases. First, as this module works in Lab, you need to avoid pushing it so far that it produces halos. Second, the internal sharpening method (<a href="https://en.wikipedia.org/wiki/Unsharp_masking">unsharp masking</a>) is rather archaic and quickly looks artificial, even in RGB mode. Third, given the sharpness of modern optics, the fact that many sensors no longer have low-pass filters, and that most photos will be exported at a reduction ratio of at least 8:1 (24 Mpx sensors to 3 Mpx screens), pixel-level sharpness enhancement has become practically useless. Generally speaking, the digital photographer of the 21st century would benefit from calming down about crisp sharpness; it would be good for everyone.</p>
<p><em>Prefer the contrast equalizer to deblur the optics via the provided presets, or local contrast for general sharpness.</em></p>
<h3 id="monochrome">Monochrome<a href="#monochrome" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The <strong>monochrome</strong> module works in Lab, which it uses to define a weighted contribution of certain colors to the density of the black, in order to convert color into shades of grey. The problem is that the interface is quite sensitive to the settings: a small correction can produce large changes and break the overall contrast in a rather ungraceful way. In practice, getting a predictable result is quite difficult, and this module often leads to long sessions of tedious micro-adjustments.</p>
<p>The idea of a weighted contribution of colors to the density of black
comes from silver film, which behaves exactly the same way as this. But, as you saw coming, film doesn’t work in Lab and is not perceptually realistic. This idea is taken up in a physically realistic way in the <strong>channel mixer</strong> module, where several emulsion presets of commercial silver film are offered to create a grey channel. Note that, in order for the coefficients to be accurate, the colour space of the operating mode (in the module <strong>input profile</strong>) must be set to REC 709 linear, otherwise the settings will have to be adjusted.</p>
<p>For a black-and-white treatment based on the (linear) luminance, simply lower the input or output saturation to 0% in the <strong>color balance</strong> module (right-click on the slider and type 0 on the keypad; by default the slider in the interface only goes down to 50%).</p>
<p><em>Prefer the channel mixer for a silver approach or the color balance for a perceptual approach.</em></p>
<h3 id="fill-light-bloom-zone-system">Fill light/Bloom/Zone System<a href="#fill-light-bloom-zone-system" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>These three modules aim to re-illuminate part of the image, and attempt to dilute the correction in intensity and in space by blurring it into the picture. But since they work in the Lab color space… I won’t say it again. The results are just bad all the time, except with very soft settings, in which case you didn’t really need those modules in the first place.</p>
<p><em>Prefer the exposure module with masks, or the tone equalizer</em></p>
<h3 id="color-correction">Color Correction<a href="#color-correction" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Every photograph has at least two sources of light: a direct source (lamp, sun, candle) and a reflected source (walls, clouds, floors, ceiling). It often happens that the white balance of these two sources does not coincide. Human vision has ways to correct for this, but the camera does not, so it requires a separate white balance correction for the highlights (which generally receive direct light) and the shadows (which usually receive reflected light).</p>
<p>This is what the <strong>color correction</strong> module offers you, again in the Lab color space, and with mixed and unnatural results as soon as you push the adjustment. When you think about it carefully, white balance comes down to a matter of light spectrum, and the correction is simpler in RGB, especially when it comes to managing the progressiveness of the correction.</p>
<p>The <strong>color balance</strong> module allows you to adjust this quickly, and not just for the shadows and the highlights, but also for the midtones. Using the color pickers to the right of the tint sliders, it also allows you to directly sample neutral tones in the image (for black, grey and white) and let the software calculate the complementary color. See the manual for more details.</p>
<p><em>Prefer color balance.</em></p>
<h3 id="velvia">Velvia<a href="#velvia" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Velvia works in RGB, on a logic quite similar to the color balance saturation. On the surface, it smells good. Except that its colorimetric equation is not perceptually correct: it changes the saturation (which is its intention), but at the same time it also changes the hue and the brightness (which becomes awkward). The problem is that it seems to have been optimized for non-linear RGB. As a result, it is the kind of module that is typically unpredictable.</p>
<p><em>Prefer color balance.</em></p>
<h3 id="levels-rgb-levels">Levels/RGB Levels<a href="#levels-rgb-levels" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>These two work as they should, no problem there. But
when you look at the code, you can see that they exactly duplicate the
<strong>slope/offset/power</strong> mode of the <strong>color balance</strong> module. The white point is scaled by a simple exposure correction, just like the
slope factor or the <strong>exposure</strong> setting of the exposure module. The
black point is adjusted by adding a constant, just like the
offset factor or the black level correction of the <strong>exposure</strong> module. The
grey point is adjusted by a power function (sometimes improperly called
gamma), just like the power factor of the <strong>color balance</strong>.
These are not just the same features, they are exactly the same
math. The difference is therefore not only in ergonomics, but
also in the fact that the color balance shows you the numerical values of the
settings, making them more easily transferable from one image to another or from one application to another.
Curves and levels also assume you work with SDR images, with data encoded between 0 and 1. If you work with HDR
pictures, or raised the exposure quite a lot earlier in the pipe, the pixel values will not be clipped, but the GUI
will give you no control over the pixels above 1 (or 100 %).</p>
<p><em>If you already use the color balance, there is no need to add an additional level module. Finish your retouching in the same module.</em></p>
<h3 id="curves-rgb-curves">Curves/RGB Curves<a href="#curves-rgb-curves" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>These also work well, but considering their classic use…
are they really useful? Usually they are used either to
add/remove brightness, which falls in the same use case as the grey of the <strong>levels</strong> module or the power of the <strong>color balance</strong> module, or to add/remove contrast, which can be adjusted
either by decreasing/increasing the interval between white and black (in a
linear way) or by applying a non-linear brightness compression, again available in <strong>color balance</strong>.</p>
<p>Curve ergonomics are a real problem in a linear RGB workflow, because middle grey is assumed to be at the center of the graph, which
in turn assumes that we are working in non-linear RGB (where grey
has been raised to 50%). In a linear encoding, standard middle grey is
expected at 18% (though in practice it depends on where you anchored your exposure in camera), and
controlling contrast around a value that is not centered on the graph
becomes awkward in the interface. In addition, the graph of the curves
assumes an RGB signal limited between 0 and 100% (or 1)… 100% of
what? Of the white screen luminance. In a linear workflow, the
HDR signal can go from 0 to infinity, and it is the filmic RGB step
that is in charge of putting everything back between 0 and 100% of the screen white.</p>
<p>The <strong>contrast</strong> in the <strong>color balance</strong> module is compatible with this approach thanks to the <strong>contrast fulcrum</strong> parameter, which lets you choose the
contrast reference. When changing the contrast, we increase the light above the fulcrum and reduce it below, but the fulcrum itself remains unchanged. The
display-referred workflow (in Lab or non-linear RGB) always implicitly assumes that grey is at 50%, uses it as the contrast reference, and doesn’t allow you to change that value.</p>
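The fulcrum behaviour is easy to verify numerically. Here is a minimal sketch of a pivoted power contrast (the principle, not darktable's exact code):

```python
def pivoted_contrast(v, amount=1.0, fulcrum=0.18):
    """Power contrast anchored at a chosen grey reference: values at the
    fulcrum are untouched; for amount > 1, values above it are pushed up
    and values below it are pushed down."""
    return (v / fulcrum) ** amount * fulcrum
```

With `amount = 1.5`, `pivoted_contrast(0.18)` returns 0.18 unchanged, while values above the fulcrum are brightened and values below it darkened.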
<p><em>Prefer color balance.</em></p>
<h3 id="contrast-brightness-saturation">Contrast/Brightness/Saturation<a href="#contrast-brightness-saturation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A module working in Lab that once again duplicates the
levels, curves, and color balance modules, while adding undesirable effects on colours.</p>
<p><em>Prefer color balance.</em></p>
<h2 id="modules-to-be-used-with-care">Modules to be used with care<a href="#modules-to-be-used-with-care" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There is no proper replacement for the following modules at the moment, but they should be used with caution, because
they can be unpredictable and can cost you a lot of
time.</p>
<h3 id="vibrance">Vibrance<a href="#vibrance" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Vibrance works in Lab by applying a saturation correction that
penalizes already-saturated pixels to avoid over-saturation, but it
also tends to darken colors. The result is far from ugly, but
the problem is that you can’t control how much darkening you get for
a given amount of resaturation.</p>
<p><em>Prefer color zones with a selection by saturation.</em></p>
<h3 id="color-zones">Color zones<a href="#color-zones" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This module would be awesome if the merging of colour zones were more
gradual. It now has two processing modes (<strong>strong</strong>,
the old one, and <strong>smooth</strong>, the new one) which try to meet this challenge in
two different ways, resulting in transitions that are too subtle with the
new one and too abrupt with the old one. Once
again, it works in Lab, whereas the similar functionality in
Capture One seems to use HSL or HSV, which appear to perform better than Lab.</p>
<p>In some cases, <strong>color zones</strong> can profitably be replaced by the
<strong>color balance</strong> module, using a parametric mask to
isolate the hues you want to act on. Refining the parametric mask
with the guided filter should then help in difficult cases. For the rest, color balance likewise lets you change hue, saturation and brightness.</p>
<p>Note, however, that the <strong>color balance</strong> module, although it works in RGB
internally, blends its masks in Lab, because the module predates the
possibility of having 100 % RGB modules, and it converts between Lab and RGB
internally. We’re still working on it…</p>
<p><em>Prefer color balance.</em></p>
<h3 id="vignetting">Vignetting<a href="#vignetting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Adding a vignette around an image is not complicated: you just have
to gradually lower the exposure, and possibly the saturation, with a drawn mask.
However, the <strong>vignetting</strong> module performs incomprehensible black magic that is much more complicated than that, with an
internal homogenization that would be superfluous if things were
done properly. The result is rarely natural, the luminosity transition being too harsh compared to a real vignette.</p>
<p>You will get better results with an instance of the <strong>exposure</strong> module
set to -0.5 EV and a circular mask with a large transition area
and reversed polarity, possibly coupled with a desaturation in <strong>color balance</strong> that reuses the same mask as <strong>exposure</strong> (via a raster mask).</p>
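Per pixel, that recipe boils down to a radial mask driving a negative exposure. A minimal sketch in normalized image coordinates (the geometry parameters are made up for illustration, not the module's actual code):

```python
import math

def manual_vignette(v, x, y, center=(0.5, 0.5), radius=0.3, feather=0.5, ev=-0.5):
    """Darken a scene-linear value v at normalized coordinates (x, y).

    The mask is 0 inside the circle and ramps to 1 towards the corners
    (reversed polarity), acting as opacity for a -0.5 EV correction.
    """
    d = math.hypot(x - center[0], y - center[1])
    mask = min(max((d - radius) / feather, 0.0), 1.0)
    return v * 2.0 ** (ev * mask)
```

The large `feather` is what keeps the luminosity transition gentle, which is exactly what the vignetting module fails to do.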
<p><em>Prefer the exposure (and, optionally, the color balance saturation) modules.</em></p>
<h2 id="mask-blend-modes-not-recommended-">Mask blend modes not recommended.<a href="#mask-blend-modes-not-recommended-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Few people know this, but the <a href="https://en.wikipedia.org/wiki/Blend_modes">blend modes</a> lighten, darken,
overlay, soft light, hard light, pin light and linear light
implicitly expect the grey level to be 50% grey, and are thus
completely tied to the display-referred workflow. These blend modes treat
pixels differently depending on whether they are above or below 50 %. Remember that the linear RGB workflow keeps the grey point at 18% (or even less), so these blend modes will behave unpredictably in the scene-linear portions of the pipe.</p>
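The hard-wired 50 % assumption is visible in the classic overlay formula, which branches exactly at 0.5 (the textbook version of the blend, shown for illustration):

```python
def overlay(base, blend):
    """Display-referred overlay blend: the branch at 0.5 encodes the
    assumption that middle grey sits at 50 % of the range."""
    if base < 0.5:
        return 2.0 * base * blend                      # darken branch
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)    # lighten branch

# In scene-linear data, middle grey (~0.18) always falls in the darken
# branch, so overlay acts as a shadow operator on most of the image.
```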
<p>In linear RGB, you should only use blend modes based on
arithmetic operations (addition, multiplication, division,
subtraction, average), on maximum/minimum comparisons
(screen) or on channel separations (hue, color, chroma, etc.).</p>
<p>Note that the multiply mode is one of the most powerful in linear RGB.
For example, to enhance the contrast of an image in a natural way,
it is enough to use an instance of the <strong>exposure</strong> module blended with multiply. Set the exposure between 2 and 3 EV and the opacity between 10% and 50%. Exposure is then used to control the pivot of contrast, and opacity the intensity of the effect. It’s fast, simple and effective.</p>
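Under the assumption that darktable's multiply blend computes base × module output, then mixes the result back with the opacity, the arithmetic of this trick can be sketched as:

```python
def multiply_contrast(v, ev=2.5, opacity=0.25):
    """Contrast via an exposure instance blended in multiply mode.

    The neutral pivot sits where v * 2^EV == 1, i.e. at v = 2^-EV,
    so the exposure setting moves the pivot and the opacity scales
    the strength of the effect.
    """
    boosted = v * 2.0 ** ev        # exposure module output
    blended = v * boosted          # multiply blend: base x module output
    return (1.0 - opacity) * v + opacity * blended
```

With EV = 2.5 the pivot lands at 2^-2.5 ≈ 0.177, close to standard middle grey; pixels below it are darkened and pixels above it brightened.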
<h2 id="a-minimal-workflow-for-beginners">A minimal workflow for beginners<a href="#a-minimal-workflow-for-beginners" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In darktable, you can choose between many modules that allow you to do the same thing in a lot of different ways. But this is merely an <em>illusion</em> of choice, as many of them have more disadvantages than advantages (assuming you want predictable results for demanding edits). If you open the code of any of the modules not recommended above, you will see that they are almost all dated 2010-2011; the only reason we retained them was to maintain compatibility with edits performed in prior versions of darktable.</p>
<p>You can perform at least 80% of your processing with just 4
modules:</p>
<ol>
<li><strong>exposure</strong></li>
<li><strong>white balance</strong></li>
<li><strong>color balance</strong></li>
<li><strong>filmic RGB</strong></li>
</ol>
<p>The reason they’re so powerful is because they’re actually extremely simple, when you look at their equations:</p>
<ul>
<li><strong>Exposure</strong>: RGB_output = exposure × RGB_input + black level</li>
<li><strong>Color balance</strong>:<ul>
<li>Slope/Offset/Power: RGB_output = (slope × RGB_input + offset)^power</li>
<li>Contrast: RGB_output = (RGB_input / pivot)^contrast × pivot</li>
</ul>
</li>
<li><strong>White balance</strong>: RGB_out = coefficients × RGB_in</li>
<li><strong>filmic RGB</strong> is a little more complex, but it’s still high-school level math</li>
</ul>
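Spelled out as code, those formulas really are one-liners (exposure is expressed here as a 2^EV multiplier, which is an assumption about the units; the rest mirrors the equations above):

```python
def exposure(v, ev=0.0, black=0.0):
    return 2.0 ** ev * v + black                  # exposure x RGB_in + black level

def white_balance(rgb, coeffs):
    return [c * v for c, v in zip(coeffs, rgb)]   # per-channel multipliers

def slope_offset_power(v, slope=1.0, offset=0.0, power=1.0):
    return (slope * v + offset) ** power          # ASC CDL-style grading

def contrast(v, amount=1.0, pivot=0.18):
    return (v / pivot) ** amount * pivot          # power contrast around a pivot
```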
<p>With these 4 modules, you have everything you need to produce a correct image in terms of colorimetry, contrast, and artistic intent. <em>Remember to turn off the base curve if you use the filmic RGB module.</em> Then, if needed, finalize your edit with the following modules:</p>
<ul>
<li>To improve sharpness, the best option is the <strong>local contrast</strong> module in <strong>local laplacian mode</strong></li>
<li>To deblur the lens, the <strong>contrast equalizer</strong> offers deblur presets of varying strength</li>
<li>To denoise, the best algorithm is in the <strong>denoise (profiled)</strong> module. Use the <strong>non-local means auto</strong> mode if you don’t want to overthink it</li>
<li>To remove haze, you have <strong>haze removal</strong></li>
<li>To convert to black and white, the easiest way is to use the
  film presets in the <strong>channel mixer</strong></li>
<li>For creative control of overall contrast and re-lighting of the scene
  a posteriori, use the <strong>tone equalizer</strong> module</li>
</ul>
<p>The following modules have an underestimated power and are vastly underutilized:</p>
<ol>
<li>The <strong>exposure</strong> module, with its masks, can replace all
the other HDR mapping methods: <strong>shadows and highlights</strong>,
the <strong>tone equalizer</strong>, and even the <strong>tone curve</strong> and the <strong>local contrast</strong> (to some extent, when used with the multiply blend mode)</li>
<li>The <strong>channel mixer</strong> module can overcome all your
gamut problems, including problems with blue stage lighting, without resorting to a fake input profile, but it can
also turn grass into snow or summer trees into fall trees</li>
<li>The <strong>color balance</strong> module lets you emulate the colors
of a film, compensate for uneven white balance,
remove redness from skin, accentuate depth and shape, create
a split-toning effect, or give an apocalyptic atmosphere to your
images</li>
</ol>
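Conceptually, the channel mixer is just a 3×3 matrix applied per pixel, and a black-and-white film preset is essentially a single row of that matrix. A sketch with made-up weights (not darktable's actual presets):

```python
def channel_mix(rgb, matrix):
    """Apply a 3x3 mixing matrix to one RGB pixel."""
    return [sum(m * v for m, v in zip(row, rgb)) for row in matrix]

# Hypothetical orthochromatic-style B&W: ignore red, weight green and blue.
bw = channel_mix([0.8, 0.4, 0.2], [[0.0, 0.7, 0.3]] * 3)
```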
<p>Finally, to display only a minimal selection of
modules in the interface, open the
list of presets to the right of “<strong>More modules</strong>“ and select “<strong>workspace: all-purpose</strong>“.</p>
<figure>
   <img src="https://pixls.us/articles/darktable-3-rgb-or-lab-which-modules-help/moremodules.png" alt=''>
</figure>

<p>darktable is a lot simpler when you understand
that you don’t have to use all 77 of its modules at once …</p>
<p>If you have any doubts about the order of the modules, you should know that the default order in version 3.0 was thought through as a whole; apart from some uncertainty about the best position of the vignetting and monochrome modules, the rest is pretty solid, in theory and in practice.</p>
<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Pushing pixel values in either direction is one thing. Merging the corrections so they blend seamlessly into the whole is another. We’ve seen that Lab or non-linear RGB allow pixels to be pushed more or less
correctly, but that it is always during mask blending (aka occlusion) and feathering (aka blurs) that we pay the price. It turns out there are a lot of blurs under the hood of darktable, sometimes where you don’t expect them. It’s especially problematic when you’re <em>compositing,</em>
e.g. embedding one image within another to swap
the background without touching the foreground. And it’s
precisely this kind of manipulation that led the movie industry
to migrate to a scene-referred linear workflow about twenty years ago.</p>
<p>So darktable is in transition. It’s long, it’s sometimes painful,
there are a lot of little bits to change in different places, along with
grumbling users who are hungry for consistency. At least now you
know the why and the how. You also know what there is to gain. I hope this helps you move forward.</p>
<p>If you are a new user, limit yourself to the modules recommended above, and venture further once you start to feel comfortable. If you are a long-time user, the new modules have a lot to offer you, but the old Lab modules remain relevant for
moderate creative effects, when used with an awareness of their dangers.</p>
<p>The linear toolbox is being expanded. On the agenda:</p>
<ul>
<li>rewriting the color balance to be 100% RGB (including the
 blending), with the addition of vibrance (and a home-grown vibrance equation
 that preserves the color)</li>
<li>conversion of the contrast equalizer and soften modules to
  the linear xyY space (because in fact, the Orton effect, on which the soften module is based,
  is very useful when it works correctly)</li>
<li>a color equalizer, similar to the tone equalizer, which will allow you to adjust saturation, vibrance and Abney effect according to the pixel luminance, to pep up the filmic RGB curve</li>
<li>a <a href="https://discuss.pixls.us/t/got-an-image-problem-go-see-the-image-doctor/14518">brand-new lens deconvolution</a> module, respectful of the depth of field (but for that, I need to develop a special wavelet based on the guided filter), which should turn your soft 18-55 mm into a Zeiss for much less</li>
<li>and of course the OpenCL version of the tone equalizer</li>
</ul>
<p>There is more work than people to do it, so
wish us good luck, don’t forget to <a href="https://liberapay.com/darktable.fr/">support us</a>, and Happy New Year 2020 to all of you!</p>
<p><a href="https://photo.aurelienpierre.com">Portrait photographer in Nancy-Metz</a>. Specialist in computation, modeling and numerical simulation for image processing (denoising, deblurring, colour management) and thermal engineering. Developer of filmic RGB, the tone equalizer, the color balance, and the new themeable interface for darktable 3.0. A darktable user since 2010. darktable is my job, so <a href="https://en.liberapay.com/aurelienpierre/">help me keep developing it</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC 2.7 - Process Your Images with Style!]]></title>
            <link>https://pixls.us/blog/2019/09/g-mic-2-7-process-your-images-with-style/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/09/g-mic-2-7-process-your-images-with-style/</guid>
            <pubDate>Fri, 06 Sep 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2019/09/g-mic-2-7-process-your-images-with-style/kanagawa.jpg" /><br/>
                <h1>G'MIC 2.7 - Process Your Images with Style!</h1> 
                  
                <p>The <a href="https://www.greyc.fr/?page_id=443&amp;lang=en">IMAGE</a> team at the <a href="https://www.greyc.fr/?page_id=1342&amp;lang=en">GREYC</a> research laboratory is pleased to announce the release of version <strong>2.7</strong> of <strong><a href="https://gmic.eu"><em>G’MIC</em></a></strong> (<em>GREYC’s Magic for Image Computing</em>), its  free, generic, extensible, and probably a little magical, <a href="https://en.wikipedia.org/wiki/Software_framework">framework</a> for <a href="https://en.wikipedia.org/wiki/Digital_image_processing">digital image processing</a>.</p>
<p><img src="https://gmic.eu/gmic270/original/teaser.gif" alt="teaser"></p>
<p><a href="https://pixls.us/blog/2018/08/g-mic-2-3-6/">The previous PIXLS.US article</a> on this open-source framework was published a year ago, in August 2018. This new release is therefore a good opportunity to summarize the main features and milestones of the project’s life over the past twelve months.
Fasten your seat belts, the road is long and full of surprises!</p>
<!-- more -->
<hr>
<h2 id="useful-links-"><a href="#useful-links-" class="header-link-alt">Useful links:</a></h2>
<ul>
<li><a href="https://gmic.eu">The G’MIC Project</a></li>
<li><a href="https://twitter.com/gmic_ip">G’MIC Twitter Feed</a></li>
<li><a href="https://discuss.pixls.us/c/software/gmic">G’MIC Forum on PIXLS.US</a></li>
</ul>
<hr>
<h1 id="1-g-mic-in-300-words">1. <em>G’MIC</em> in 300 words</h1>
<p><a href="https://gmic.eu"><em>G’MIC</em></a> is a piece of software that has been developed for more than <a href="https://pixls.us/blog/2018/08/g-mic-2-3-6/">10 years</a> now, mainly in <a href="https://en.wikipedia.org/wiki/C%2B%2B"><em>C++</em></a>, by two members of the <a href="https://www.greyc.fr/?page_id=443&amp;lang=en">IMAGE</a> team of the <a href="https://www.greyc.fr/?page_id=1342&amp;lang=en">GREYC</a> lab: <a href="https://foureys.users.greyc.fr/index.php">Sébastien Fourey</a> and <a href="https://tschumperle.users.greyc.fr/">David Tschumperlé</a>. It is distributed under the terms of the <a href="http://www.cecill.info/index.en.html">CeCILL</a> free-software license. GREYC is a French public research laboratory located in Caen, specialized in digital sciences, under the supervision of three academic institutions: <a href="https://www.cnrs.fr/en">CNRS</a>, <a href="http://welcome.unicaen.fr/">University of Caen</a>, and <a href="https://www.ensicaen.fr/en/">ENSICAEN</a>.</p>
<p>The IMAGE team, one of the seven teams in the laboratory, is composed of researchers, professors, Ph.D. students and engineers, all specialized in the fields of algorithmics and mathematics of <a href="https://en.wikipedia.org/wiki/Digital_image_processing">image processing</a>.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/logo_gmic.png">
<img src="https://gmic.eu/gmic270/thumb/logo_gmic.png" alt="G'MIC logo">
</a>
<figcaption><em>Fig.1.1: G’MIC project logo, and its mascot “Gmicky” (designed by <a href="https://www.davidrevoy.com/">David Revoy</a>).</em></figcaption>
</figure>

<p><em>G’MIC</em> is cross-platform (<em>GNU/Linux</em>, <em>MacOS</em>, <em>Windows</em>, …). It provides various user interfaces for manipulating <em>generic</em> image data, i.e. 2D or 3D hyperspectral images or sequences of images with floating-point values (which indeed includes “usual” color images). Around <a href="https://gmic.eu/reference.shtml">a thousand different processing functions</a> are already available. However, arbitrarily many features can be added thanks to an integrated scripting language.</p>
<p>The most commonly used <em>G’MIC</em> interfaces are: the <a href="https://gmic.eu/reference.shtml"><code>gmic</code></a> command, that can be accessed from the command line (which is an essential complement to <a href="https://www.imagemagick.org/">ImageMagick</a> or <a href="https://www.graphicsmagick.org">GraphicsMagick</a>), the <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a> Web service, but above all, the plug-in <a href="https://github.com/c-koi/gmic-qt"><em>G’MIC-Qt</em></a>, available for the well-known image editing software <a href="https://www.gimp.org">GIMP</a>, <a href="https://www.krita.org">Krita</a>, and <a href="https://www.getpaint.net">Paint.net</a>. It provides more than 500 different filters to apply on images.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/gmic_270.png">
<img src="https://gmic.eu/gmic270/thumb/gmic_270.png" alt="G'MIC-Qt plug-in">
</a>
<figcaption><em>Fig.1.2: The G’MIC-Qt plug-in, here in version <strong>2.7</strong>, is at the moment the most downloaded user interface of the G’MIC project.</em></figcaption>
</figure>

<p>Thanks to its extensible architecture, <em>G’MIC</em> is regularly enhanced with new image processing algorithms, and it is these latest additions that will be discussed in the following sections.</p>
<h1 id="2-add-style-to-your-images-">2. Add style to your images!</h1>
<p><em>G’MIC</em> has recently implemented a neat filter for <strong>style transfer</strong> between two images, available from the <em>G’MIC-Qt</em> plug-in under the “<strong>Artistic / Stylize</strong>“ entry.
The concept of style transfer is quite simple: we try to transform an image (typically a <em>photograph</em>) by transferring the style of another image to it (for example a <em>painting</em>).</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_style_transfer.png">
<img src="https://gmic.eu/gmic270/thumb/en_style_transfer.png" alt="Principle of style transfer">
</a>
<figcaption><em>Fig.2.1: Principle of style transfer between two images.</em></figcaption>
</figure>

<p>The implementation of such a style transfer method is relatively complex: The algorithm must be able to recompose the original photograph by “borrowing” pixels from the style image and intelligently combining them, like a puzzle to be reconstructed, to best match the content of the data to be reproduced, in terms of contours, colors and textures. How easily this is done depends of course on the compatibility between the input image and the chosen style. In computer graphics, most existing implementations of style transfer methods are based on <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convolutional neural networks</a>, more particularly <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network">generative adversarial networks (<em>GANs</em>)</a>.</p>
<p><em>G’MIC</em> implements style transfer in a different way (without relying on neural networks; the scientific article detailing the algorithm is currently being written!). This method is parallelizable and can therefore benefit from all the processing units (cores) available on the user’s computer. The computation time naturally depends on the input image resolution and on the accuracy of the desired reconstruction. On a standard 4-core PC, it can take tens of seconds for low-resolution images (e.g. <em>800x800</em>), up to several minutes for larger pictures.</p>
<p>As one can imagine, it is a <strong>very versatile</strong> filter, since we can apply any style to any input image without hard constraints. Some famous paintings are available by default in the filter, in order to propose predefined styles to the user.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/gmic_stylize.png">
<img src="https://gmic.eu/gmic270/thumb/gmic_stylize.png" alt="Filter'Artistic / Stylize'">
</a>
<figcaption><em>Fig.2.2: “<strong>Artistic / Stylize</strong>“ filter, as it appears in the G’MIC-Qt plug-in, with its many parameters that can be tuned !</em></figcaption>
</figure>

<p>Let us be honest, it is not always easy to obtain satisfactory results from the first draft. It is generally necessary to choose your starting images carefully, and to play with the many parameters available to refine the type of rendering generated by the algorithm. Nevertheless, the filter is sometimes able to generate quite interesting outcomes, such as those shown below (the original photo is visible at the top left, the style chosen at the top right, and the result of the style transfer at the bottom). Imagine how long it would take for a graphic designer to make these transformations “by hand”!</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_car_full_1.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_car_full_1.png" alt="Mondrian Stylization">
</a>
<figcaption><em>Fig.2.3: Stylization of a car from the painting “<a href="https://en.wikipedia.org/wiki/Gray_Tree">Gray Tree</a>“ by <a href="https://en.wikipedia.org/wiki/Piet_Mondrian">Piet Mondrian</a>.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_car_full_2.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_car_full_2.png" alt="Kandinsky Stylization">
</a>
<figcaption><em>Fig.2.4: Stylization of the same car from the painting “<a href="https://fr.wikipedia.org/wiki/Gelb-Rot-Blau">Gelb-Rot-Blau</a>“ by <a href="https://en.wikipedia.org/wiki/Wassily_Kandinsky">Vassily Kandinsky</a>.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_car_full_5.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_car_full_5.png" alt="Hokusai Stylization">
</a>
<figcaption><em>Fig.2.5: Stylization of the same car from the painting “<a href="https://en.wikipedia.org/wiki/The_Great_Wave_off_Kanagawa">The Great Wave off Kanagawa</a>“ of <a href="https://en.wikipedia.org/wiki/Hokusai">Hokusai</a>.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_cat_full_7.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_cat_full_7.png" alt="Hatch Stylization">
</a>
<figcaption><em>Fig.2.6: Stylization of a cat from a hatched drawing.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_bottles_full_21.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_bottles_full_21.png" alt="Mondrian-2">
</a>
<figcaption><em>Fig.2.7: Stylization of bottles from the painting “<a href="https://en.wikipedia.org/wiki/Evening;_Red_Tree">Evening: Red Tree</a>“ by <a href="https://en.wikipedia.org/wiki/Piet_Mondrian">Piet Mondrian</a>.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_stylization_bottles_full_23.png">
<img src="https://gmic.eu/gmic270/thumb/en_stylization_bottles_full_23.png" alt="Picasso Stylization">
</a>
<figcaption><em>Fig.2.8: Stylization of bottles from the painting “<a href="https://lewebpedagogique.com/bourguignon/2011/02/10/le-reservoir-picasso/">Le réservoir - Horta de Ebro</a>“ by <a href="https://en.wikipedia.org/wiki/Pablo_Picasso">Pablo Picasso</a>.</em></figcaption>
</figure>

<p>Other examples of image stylization can be found on <a href="https://gmic.eu/gallery/stylization.shtml">the image gallery, dedicated to this filter</a>. To our knowledge, <em>G’MIC</em> is the only “mainstream” image processing software currently offering a <strong>generic</strong> style transfer filter, where <strong>any</strong> style image can be chosen.</p>
<p>A last funny experiment: get a <a href="https://www.google.com/search?hl=en&amp;tbm=isch&amp;source=hp&amp;biw=1920&amp;bih=1072&amp;ei=WpNWXcWzOITQaJDghfAN&amp;q=alien+roswell&amp;oq=alien+roswell&amp;gs_l=img.3..0l7j0i5i30j0i8i8i30l2.1371.3446...3664...1.0...0.51.587.14...0...0...1...gws-wiz-img.KpJUtbI9LbU&amp;ved=0ahUKEwjFyNjFyNjPpIfkAhUEKBoKHRBwAd4Q4d4dUDCAU&amp;uact=5">picture of an Alien’s head</a>, like <em>Roswell</em>, and then select a crop of the <a href="https://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot fractal set</a> as your style image. Use the transfer filter to generate a “fractal” rendering of your alien head. Then, make the whole world believe that the Mandelbrot set contains the mathematical proof of the existence of aliens… ☺</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/alien_mandelbrot.png">
<img src="https://gmic.eu/gmic270/thumb/alien_mandelbrot.png" alt="Mandelbrot Stylization">
</a>
<figcaption><em>Fig.2.9: <strong>Breaking News!</strong> An Alien head was found in the Mandelbrot fractal set ! (if you don’t see it at first sight, tilt your head to the left…)</em></figcaption>
</figure>

<p>In short, this filter has a clear creative potential for all kind of artists!</p>
<h1 id="3-interactive-deformation-and-morphing">3. Interactive deformation and morphing</h1>
<p>This year, <em>G’MIC</em> gained an implementation of <a href="https://en.wikipedia.org/wiki/Radial_basis_function_interpolation">the <em>RBF</em> interpolation method</a> (<em>Radial Basis Functions</em>), which can estimate a dense interpolated function in any dimension from a known set of scattered samples (not necessarily located on a regular grid). This gave us the idea of adding distortion filters where the user interaction focuses on adding and moving keypoints over the image. In a second stage, <em>G’MIC</em> interpolates the data represented by these keypoints in order to apply the distortion to the entire image.</p>
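The principle can be sketched in a few lines: solve a small linear system of kernel evaluations so the interpolant reproduces the keypoint values exactly, then evaluate it anywhere in the image. This is a pure-Python illustration with a Gaussian kernel (G'MIC's actual kernel choice and implementation may differ):

```python
import math

def _solve(A, b):
    """Gaussian elimination with partial pivoting (fine for a few keypoints)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolate(points, values, sigma=1.0):
    """Interpolate scattered samples: f(p) = sum_i w_i * exp(-(|p - p_i|/sigma)^2).

    The weights w are chosen so that f reproduces every sample exactly.
    """
    phi = lambda r: math.exp(-(r / sigma) ** 2)
    A = [[phi(math.dist(p, q)) for q in points] for p in points]
    w = _solve(A, list(values))
    return lambda p: sum(wi * phi(math.dist(p, q)) for wi, q in zip(w, points))
```

For a 2D warp, one such interpolant would be built per displacement component (dx and dy) from the keypoint motions, and the image resampled through the resulting dense deformation field.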
<p>Let us start with the <strong>“Deformations / Warp [interactive]”</strong> filter which, as its name suggests, allows the user to distort an image locally by creating/moving keypoints.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_warp_girl.png">
<img src="https://gmic.eu/gmic270/thumb/en_warp_girl.png" alt="Keypoint-based Distortion'">
</a>
<figcaption><em>Fig.3.1: The new <strong>“Deformations / Warp [interactive]”</strong> filter allows images to be distorted interactively, for example to quickly create caricatures from portrait photographs.</em></figcaption>
</figure>

<p>The animation below shows this interactive filter in use, and illustrates how the keypoints act as anchors in the image when they are moved.</p>
<figure>
<a href="https://gmic.eu/gmic270/original/gmic_deform.gif">
<img src="https://gmic.eu/gmic270/original/gmic_deform.gif" alt="Key-point deformation - animation">
</a>
<figcaption><em>Fig.3.2: Illustration of the user interaction in the G’MIC deformation filter, based on the creation and motion of keypoints.</em></figcaption>
</figure>

<p><em>(For those who might be concerned about the portrait photos used in the figures above and below: all these portraits are totally artificial, randomly generated by GANs via the website <a href="https://thispersondoesnotexist.com/"><em>This Person Does Not Exist</em></a>. No harm done to anyone!)</em>.</p>
<p>The great advantage of using <em>RBFs</em>-based interpolation is that we do not have to explicitly manage a <em>spatial structure</em> between the keypoints, for instance by defining a <a href="https://en.wikipedia.org/wiki/Unstructured_grid">mesh</a> (i.e. a “deformation grid”). This gives a greater degree of freedom in the obtained distortion (see <em>Fig.3.3.</em> below). And at the same time, we keep a rather fine control on the local amplitude of the applied distortion, since adding more “identity” keypoints around a region naturally limits the distortion amplitude inside this region.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_warp_man.png">
<img src="https://gmic.eu/gmic270/thumb/en_warp_man.png" alt="Key-point deformation - other example">
</a>
<figcaption><em>Fig.3.3: RBF interpolation is able to create complex continuous distortions with very few keypoints (here, swapping the positions of the left and right eyes, using only 4 keypoints).</em></figcaption>
</figure>

<p>A short demonstration of this distortion filter is also visible in <a href="https://youtu.be/eWoRDzhAEtw">this Youtube video</a>.</p>
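<p>For the curious, the RBF interpolation used by this filter can be sketched in a few lines of Python. This is only an illustrative toy with a multiquadric kernel (the function name and kernel choice are our own, not G’MIC’s actual implementation):</p>

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, phi=lambda r: np.sqrt(r * r + 1.0)):
    """Build a dense displacement field from matching keypoints,
    by radial basis function (RBF) interpolation (illustrative sketch)."""
    src = np.asarray(src_pts, dtype=float)
    disp = np.asarray(dst_pts, dtype=float) - src
    # Solve for the RBF weights so the field is exact at the keypoints.
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(d), disp)
    def warp(points):
        pts = np.atleast_2d(points).astype(float)
        r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
        return pts + phi(r) @ weights  # smoothly interpolated displacement
    return warp

# One moved keypoint, three "identity" anchors limiting the distortion:
warp = rbf_warp([[0, 0], [10, 0], [0, 10], [10, 10]],
                [[0, 0], [10, 0], [0, 10], [12, 12]])
```

<p>Evaluating the returned function on every pixel coordinate (and resampling the image accordingly) yields the kind of smooth, mesh-free deformation shown above: keypoints that map to themselves pin the image down around the distorted region.</p>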
<p>And why not extend this kind of distortion to two images, instead of a single one? This is precisely what the new filter <strong>“Deformations / Morph [interactive]”</strong> does. It is able to render a <a href="https://en.wikipedia.org/wiki/Morphing">morphing</a> sequence between two images (placed on two separate layers), using the same interpolation technique, which only requires the user to place matching colored keypoints on both images.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_morph_st.png">
<img src="https://gmic.eu/gmic270/thumb/en_morph_st.png" alt="Morphing filter - positioning of keypoints">
</a>
<figcaption><em>Fig.3.4: <strong>“Deformations / Morph [interactive]”</strong> filter asks the user to position keypoints indicating correspondences between two images.</em></figcaption>
</figure>

<p>In the example above, keypoints are placed on characteristic areas of both faces (tip of nose, lips, eyebrows, etc.). In practice, this takes no more than 5 minutes. Thanks to these keypoints, the algorithm is able to estimate a global deformation map from one image to the other, and can generate temporally “mixed” frames in which the facial features remain relatively well aligned during the whole morphing sequence.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_morph_ib.png">
<img src="https://gmic.eu/gmic270/thumb/en_morph_ib.png" alt="Morphing filter - intermediate image">
</a>
<figcaption><em>Fig.3.5: One of the intermediate images generated by the morphing filter, between the two input faces.</em></figcaption>
</figure>

<p>By comparison, here is what we would obtain by simply averaging the two input images together, i.e. without correcting the displacement of the facial features between both images. Not a pretty sight indeed!</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_morph_avg.png">
<img src="https://gmic.eu/gmic270/thumb/en_morph_avg.png" alt="Morphing filter - simple averaging">
</a>
<figcaption><em>Fig.3.6: A simple averaging of the “Source” and “Target” images reveals the differences in the locations of the facial features.</em></figcaption>
</figure>

<p>Thus, the morphing filter is able to quickly generate a set of intermediate frames, ranging from the “Source” to the “Target” faces, a sequence that can then be saved as an animation.</p>
<figure>
<a href="https://gmic.eu/gmic270/original/morph.gif">
<img src="https://gmic.eu/gmic270/original/morph.gif" alt="Morphing filter - generated animation">
</a>
<figcaption><em>Fig.3.7: Animation resulting from the generation of all intermediate frames by the G’MIC morphing filter.</em></figcaption>
</figure>
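<p>The frame generation just described can be summarized in a small Python sketch (the helper name is our own invention, not G’MIC’s API): each intermediate frame warps both images toward keypoint positions interpolated at time <em>t</em>, then cross-fades them.</p>

```python
import numpy as np

def morph_schedule(src_kp, dst_kp, n_frames):
    """For each frame: the shared keypoint positions both warped images
    should meet at, plus the cross-fade weight of the target image."""
    src = np.asarray(src_kp, dtype=float)
    dst = np.asarray(dst_kp, dtype=float)
    for t in np.linspace(0.0, 1.0, n_frames):
        # frame(t) = (1 - t) * warp(src -> kp_t) + t * warp(dst -> kp_t)
        kp_t = (1.0 - t) * src + t * dst
        yield kp_t, t
```

<p>Because the facial features of both warped images meet at the same interpolated keypoints before blending, they stay aligned throughout the sequence, avoiding the “ghosting” of a plain cross-fade.</p>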

<p>Many other use cases of this morphing filter can be considered. The following example illustrates its application to render an animation from two photographs of the same object (a garden gnome), shot with different <a href="https://en.wikipedia.org/wiki/Depth_of_field">depths of field (DOF)</a>.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_morph_dwarf_st.png">
<img src="https://gmic.eu/gmic270/thumb/en_morph_dwarf_st.png" alt="Morphing filter - example of the garden dwarf">
</a>
<figcaption><em>Fig.3.8: Two photographs with different depths of field, and the location of the correspondence keypoints placed by the user.</em></figcaption>
</figure>

<figure>
<a href="https://gmic.eu/gmic270/original/morph_dwarf.gif">
<img src="https://gmic.eu/gmic270/original/morph_dwarf.gif" alt="Morphing filter - garden dwarf animation">
</a>
<figcaption><em>Fig.3.9: Animation resulting from the generation of all intermediate frames by the G’MIC morphing filter.</em></figcaption>
</figure>

<p>Command line users will be pleased to know that these two filters can be tested very quickly from a <em>shell</em>, as follows:</p>
<pre><code class="lang-sh">$ gmic image.jpg x_warp
$ gmic source.jpg target.jpg x_morph
</code></pre>
<h1 id="4-ever-more-colorimetric-transformations">4. Ever more colorimetric transformations</h1>
<p>For several years, <em>G’MIC</em> has contained colorimetric transformation filters able to simulate the film development process, or to give particular colorimetric moods to images (sunlight, rain, fog, morning, afternoon, evening, night, etc.). In <a href="https://pixls.us/blog/2017/06/g-mic-2-0/">a previous report</a>, we already mentioned these filters, which are essentially based on the use of <a href="https://en.wikipedia.org/wiki/3D_lookup_table"><em>3D CLUTs</em></a> (<em>Color Lookup Tables</em>) for modeling the color transformation.</p>
<p>A <em>3D CLUT</em> is technically a three-dimensional array that provides, for each possible <em>RGB</em> color, a replacement color to apply to the image.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_whatisaclut.png">
<img src="https://gmic.eu/gmic270/thumb/en_whatisaclut.png" alt="Illustration of a 3D Color LUT">
</a>
<figcaption><em>Fig.4.1: Modeling a colorimetric transformation by a “3D Color LUT”.</em></figcaption>
</figure>
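<p>In code, applying such a table amounts to a simple indexed lookup. Here is a minimal numpy sketch of the principle (our own toy, using nearest-neighbor quantization; real implementations typically interpolate trilinearly between table entries):</p>

```python
import numpy as np

def apply_clut(image, clut):
    """Replace each RGB pixel via a 3D lookup table (nearest-neighbor sketch).
    image: (..., 3) uint8 array; clut: (S, S, S, 3) array indexed by R, G, B."""
    s = clut.shape[0]
    idx = (image.astype(np.float32) / 255.0 * (s - 1)).round().astype(int)
    return clut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity CLUT of size 17 maps every color to itself (up to quantization):
s = 17
r, g, b = np.meshgrid(*[np.linspace(0, 255, s)] * 3, indexing="ij")
identity = np.stack([r, g, b], axis=-1).round().astype(np.uint8)
```

<p>Replacing the identity table with any other (S, S, S, 3) array yields an arbitrary color grade, which is exactly what makes CLUTs so expressive.</p>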

<p>The main appeal of these <em>3D CLUTs</em> is the great variety of transformations they can represent: they can define <em>RGB-to-RGB</em> functions with almost any kind of variation. Their only real constraint is that all image pixels sharing the same color will be transformed into pixels that also share an identical color.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_cluts_ex.png">
<img src="https://gmic.eu/gmic270/thumb/en_cluts_ex.png" alt="Examples of CLUT-based transformations">
</a>
<figcaption><em>Fig.4.2: Illustration of the variety of colorimetric transformations that can be modeled by 3D CLUTs.</em></figcaption>
</figure>

<p>The disadvantage, however, is that these <em>3D CLUTs</em> are relatively data-intensive. When you want to embed several hundred different ones in the same piece of software (which is the case in <em>G’MIC</em>), you quickly find yourself with a large volume of data to install and manage. For instance, our friends at <a href="https://rawpedia.rawtherapee.com/Film_Simulation">RawTherapee</a> offer on their website an additional pack of <strong>294</strong> <em>CLUTs</em> to download. All these <em>CLUTs</em> are stored as <code>.png</code> files in a <code>.zip</code> archive with a total size of <strong>402 MB</strong>. Even if downloading and storing a few hundred <em>MB</em> is no longer limiting nowadays, it is still quite large for something as simple as color-changing filters.</p>
<p>This year, we have therefore carried out an important research and development effort at the GREYC lab on this topic. The result: a new lossy compression algorithm (<em>with visually imperceptible compression losses</em>) that can generate binary representations of <em>CLUTs</em> with an average compression rate <strong>of more than 99%</strong>, relative to data that was already <em>losslessly compressed</em>. The general idea is to determine an optimal set of color keypoints from which the <em>CLUT</em> can be reconstructed (<em>decompression</em>), with a minimal reconstruction error.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_clut_compression.png">
<img src="https://gmic.eu/gmic270/thumb/en_clut_compression.png" alt="Principle of CLUT compression">
</a>
<figcaption><em>Fig.4.3: Principle of our CLUT compression technique, based on determining and storing a set of well-chosen keypoints.</em></figcaption>
</figure>
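<p>The keypoint idea can be illustrated with a 1-D toy analogue in Python (our own sketch: plain linear interpolation stands in for the RBF-based reconstruction of the actual algorithm): greedily keep the worst-reconstructed sample as a new keypoint until the reconstruction error falls below a threshold.</p>

```python
import numpy as np

def compress_curve(y, max_err=0.01):
    """Greedy keypoint selection: a 1-D toy analogue of CLUT compression.
    Returns the kept sample indices and the reconstructed curve."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    keys = {0, len(y) - 1}  # always keep the endpoints
    while True:
        kx = sorted(keys)
        recon = np.interp(x, kx, y[kx])  # the "decompression" step
        err = np.abs(recon - y)
        if err.max() <= max_err:
            return kx, recon
        keys.add(int(err.argmax()))  # store the worst-reconstructed sample
```

<p>On a smooth curve this keeps only a handful of the original samples, which is the intuition behind the compression rates quoted above; the real algorithm does the same thing inside the 3D RGB cube.</p>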

<p>As a result, this original compression method allowed us to offer no less than <strong>763 <em>CLUTs</em></strong> in <em>G’MIC</em>, all stored in a binary file that weighs <strong>less than 3 MB</strong>!</p>
<p>All these color variation filters have been grouped into two separate entries in the <em>G’MIC-Qt</em> plug-in, namely <strong>“Colors / Simulate Film”</strong> (for analog film simulations) and <strong>“Colors / Color Presets”</strong> (for other color transformations). Each of these filters provides sub-categories for structured access to the hundreds of <em>CLUTs</em> available. To our knowledge, this makes <em>G’MIC</em> one of the image processing applications offering the most colorimetric transformations, while keeping a reasonable size.</p>
<p>Readers interested in the mathematical details of these <em>CLUT</em> compression/decompression algorithms may refer to <a href="https://hal.archives-ouvertes.fr/hal-02066484v3/document">the scientific paper</a> we wrote about them, as well as the presentation <a href="https://gmic.eu/gmic270/talk_en.pdf">slides</a> presented at the <a href="http://gretsi.fr/colloque2019/">GRETSI’2019</a> conference (French conference, in Lille) and the <a href="https://caip2019.unisa.it/">CAIP’2019</a> international conference (in Salerno).</p>
<figure>
<a href="https://gmic.eu/gmic270/talk_en.pdf">
<img src="https://gmic.eu/gmic270/thumb/en_clut_talk.png" alt="Algorithm presentation slides">
</a>
<figcaption><em>Fig.4.4: Presentation slides explaining the details of the CLUT compression/decompression algorithm.</em></figcaption>
</figure>

<p>To finish with this topic, note that we have made <a href="https://framagit.org/dtschump/libclut">an open-source implementation</a> of our <em>CLUT</em> decompression algorithm available online (in <em>C++</em>, with 716 <em>CLUTs</em> already included). <a href="https://discuss.pixls.us/t/3d-lut-module-in-darktable-2-7-dev">Discussions have also been initiated</a> about a potential integration into <a href="https://www.darktable.org/">Darktable</a> as a module for managing <em>3D CLUTs</em>.</p>
<h1 id="5-create-palettes-by-mixing-colors">5. Create palettes by mixing colors</h1>
<p>Let us now talk about the recent <strong>“Colors / Colorful Blobs”</strong> filter, which is directly inspired by the original concept of <a href="https://research.adobe.com/project/playful-palette-an-interactive-parametric-color-mixer-for-artists/"><em>Playful Palette</em></a> created by the Adobe Research team in 2017. This filter is intended for illustrators (designers and digital painters). The goal: create color palettes which contain only a few main colors (the ones you want to use in an illustration), but also a few sets of intermediate shades between these colors, in the form of color gradients. By picking colors only from this palette, an artist is theoretically able to better preserve the color coherence of their artwork.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/colorful_blobs.png">
<img src="https://gmic.eu/gmic270/thumb/colorful_blobs.png" alt="Colors / Colorful Blobs filter in G'MIC-Qt">
</a>
<figcaption><em>Fig.5.1: <strong>“Colors / Colorful Blobs”</strong> filter allows you to create custom color palettes, by spatially mixing several colors together.</em></figcaption>
</figure>

<p>As shown in the figure above, the filter allows the artist to create and move colored “blobs” that, when merged together, create the desired color gradients. The result of the filter is thus an image that the artist can use afterward as a custom <em>2D</em> color palette.</p>
<p>From a technical point of view, this filter is based on <em>2D</em> <a href="https://en.wikipedia.org/wiki/Metaballs"><em>metaballs</em></a> to model the color blobs. Up to twelve separate blobs can be added and different color spaces can be chosen for the calculation of the color gradient (<em>sRGB</em>, <em>Linear RGB</em> or <em>Lab</em>). The filter also benefits from the recent development of the <em>G’MIC-Qt</em> plug-in that enhances the user interactivity inside the preview widget (a <a href="https://pixls.us/blog/2018/08/g-mic-2-3-6/">feature we mentioned in a previous report</a>), as seen in the animation below (see also this longer <a href="https://www.youtube.com/watch?v=M1pSn1g7sC8">video</a>).</p>
<figure>
<a href="https://gmic.eu/gmic270/original/colorful_blobs.gif">
<img src="https://gmic.eu/gmic270/original/colorful_blobs.gif" alt="Colors / Colorful Blobs filter - interactive use">
</a>
<figcaption><em>Fig.5.2: Illustration of the user interaction with the G’MIC palette creation filter, based on the creation and movement of colored “blobs”.</em></figcaption>
</figure>
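<p>The metaball idea can be sketched directly: each blob contributes a field strength that decays with distance, the palette exists wherever the total field exceeds a threshold, and colors are mixed in proportion to each blob’s local influence. A small, hypothetical numpy illustration of this principle (not the plug-in’s actual code):</p>

```python
import numpy as np

def metaball_palette(h, w, centers, radii, colors, threshold=1.0):
    """Render 2D color metaballs on a white background (illustrative sketch).
    centers: list of (x, y); radii and colors: one per blob."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Classic metaball field: r^2 / squared distance to the blob center.
    fields = np.stack([r * r / ((xs - cx) ** 2 + (ys - cy) ** 2 + 1e-9)
                       for (cx, cy), r in zip(centers, radii)])
    total = fields.sum(axis=0)
    mix = fields / total  # each blob's share of the local field
    rgb = np.einsum('nhw,nc->hwc', mix, np.asarray(colors, dtype=float))
    return np.where((total >= threshold)[..., None], rgb, 255.0)

img = metaball_palette(32, 32, centers=[(8, 16), (24, 16)], radii=[6, 6],
                       colors=[[255, 0, 0], [0, 0, 255]])
```

<p>Where two blobs overlap, the mixing weights vary continuously, producing exactly the kind of gradient the palette is meant to expose to the color picker.</p>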

<p>This filter may not be useful for most <em>G’MIC</em> users. But you have to admit, it’s pretty fun, isn’t it?</p>
<h1 id="6-some-more-filters">6. Some more filters</h1>
<p>Let us now describe a selection of a few other filters and effects added during the year, perhaps less original than the previous ones (but not completely useless anyway!).</p>
<ul>
<li><p>First of all, the <strong>“Rendering / Symmetric 2D Shape”</strong> filter is a great help when you want to draw geometric shapes having angular symmetries.</p>
<figure>
<a href="https://gmic.eu/gmic270/original/symmetric2dshape.gif">
<img src="https://gmic.eu/gmic270/original/symmetric2dshape.gif" alt="Rendering / Symmetric 2D Shape filter - interactive use">
</a>
<figcaption><em>Fig.6.1: <strong>“Rendering / Symmetric 2D Shape”</strong> filter in action, in the G’MIC-Qt plug-in.</em></figcaption>
</figure>

<p>The plane can be subdivided into up to 32 angular pieces, each of which can contain a maximum of six keypoints to define a shape profile, allowing potentially complex and varied shapes to be rendered (such as the super-<a href="https://en.wikipedia.org/wiki/Shuriken">shuriken</a> below!).</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/symmetric2dshape.png">
<img src="https://gmic.eu/gmic270/thumb/symmetric2dshape.png" alt="Rendering / Symmetric 2D Shape filter - complex example">
</a>
<figcaption><em>Fig.6.2: Example of a complex symmetrical shape obtained with the <strong>“Rendering / Symmetric 2D Shape”</strong> filter.</em></figcaption>
</figure>
</li>
<li><p>The <strong>“Degradations / Self Glitching”</strong> filter combines an image with a shifted version of itself, to create a <a href="https://en.wikipedia.org/wiki/Glitch_art"><em>Glitch-art</em></a> type image. Several bitwise operations (<em>Add</em>, <em>Mul</em>, <em>And</em>, <em>Or</em>, <em>Xor</em>,…) can be chosen, and you can adjust the shift direction and amplitude, as well as various other controls.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/self_glitching.png">
<img src="https://gmic.eu/gmic270/thumb/self_glitching.png" alt="Degradations / Self Glitching Filter">
</a>
<figcaption><em>Fig.6.3: <strong>“Degradations / Self Glitching”</strong> filter helps to ruin your photos easily!</em></figcaption>
</figure>

<p> Again, this is not a filter that will necessarily be used every day! But it may be helpful for some people. It was actually added in response to a user request.</p>
</li>
<li><p>In the same style, the <strong>“Degradations / Mess With Bits”</strong> filter applies some arithmetic operations to the pixel values, seen as binary numbers (for instance, bit shift and bit inversion). Always with the idea of rendering <em>Glitch art</em>, of course!</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/messwithbits.png">
<img src="https://gmic.eu/gmic270/thumb/messwithbits.png" alt="Degradations / Mess With Bits Filter">
</a>
<figcaption><em>Fig.6.4: <strong>“Degradations / Mess With Bits”</strong> filter, or how to transform an adorable toddler into a pustulating alien…</em></figcaption>
</figure>
</li>
<li><p>The <strong>“Degradations / Noise [Perlin]”</strong> filter implements <a href="https://en.wikipedia.org/wiki/Perlin_noise">Perlin noise</a> generation, a classic noise model in image synthesis, used for instance to generate elevation maps for virtual terrains. Here we propose a multi-scale version of the original algorithm, with up to four simultaneous scales of variation.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/noise_perlin.png">
<img src="https://gmic.eu/gmic270/thumb/noise_perlin.png" alt="Degradations / Noise - Perlin filter">
</a>
<figcaption><em>Fig.6.5: <strong>“Degradations / Noise [Perlin]”</strong> filter proposes a multi-scale implementation of the Perlin noise (illustrated here with two variation scales).</em></figcaption>
</figure>
</li>
<li><p>The <strong>“Frames / Frame [Mirror]”</strong> filter is also a “tailor-made” effect, to meet the needs of a <em>G’MIC-Qt</em> plug-in user. This photographer wanted to resize his photos to obtain a precise <em>width/height</em> ratio, but without having to crop his images. The solution was instead to add image information at the edges of the picture, by symmetry, in order to obtain the desired ratio. So that’s what this filter does.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/frame_mirror.png">
<img src="https://gmic.eu/gmic270/thumb/frame_mirror.png" alt="Frames / Frame - Mirror Filter">
</a>
<figcaption><em>Fig.6.6: The <strong>“Frames / Frame [Mirror]”</strong> filter extends the image borders by symmetry.</em></figcaption>
</figure>
</li>
<li><p>Finally, let us mention the upcoming advanced <a href="https://en.wikipedia.org/wiki/Non-local_means">image noise reduction filter</a>, by <a href="https://iainisbald.wordpress.com/">Iain Fergusson</a>, whose development is still in progress. Iain has been contributing to <em>G’MIC</em> for several years now, implementing and experimenting with original denoising filters, and his latest project seems really interesting, with promising results. <a href="https://www.youtube.com/watch?v=pPj_7J4iD_U">This video</a> shows the filter in action and is a good place to learn a little more about how it works.</p>
</li>
</ul>
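<p>For the curious, the pixel-level operations behind the two “Degradations” glitch filters above are easy to reproduce. A minimal numpy sketch (our own toy functions, showing just one operator of each kind: XOR with a shifted copy, and a bit rotation):</p>

```python
import numpy as np

def self_glitch_xor(image, dx=8, dy=0):
    """XOR an 8-bit image with a shifted copy of itself ("Self Glitching" style)."""
    shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
    return np.bitwise_xor(image, shifted)

def rotate_bits(image, shift=1):
    """Rotate the bits of each 8-bit value ("Mess With Bits" style)."""
    img = np.asarray(image, dtype=np.uint8)
    # Left shift wraps in uint8; OR back in the bits that fell off the top.
    return (img << shift) | (img >> (8 - shift))
```

<p>Applied channel-wise to a photo, such bitwise mangling produces the harsh color banding typical of Glitch art: tiny numeric differences between neighboring pixels turn into large, discontinuous value jumps.</p>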
<p>Now that we’ve looked at these new filters, it seems important to point out that, as in many IT projects, this visible part of the iceberg hides a set of lower-level developments, done to improve the interactive possibilities of the <em>G’MIC-Qt</em> plug-in as well as the performance of the internal script interpreter (<a href="https://gmic.eu/reference.shtml">the <em>G’MIC</em> language</a>) in which all these filters and effects are actually implemented. These slight, incremental optimizations of the code base benefit all filters (even those available for several years already), and they actually represent most of the development time we spend on <em>G’MIC</em>. So, dear users, do not be surprised if no new filters appear for a while: it is probably just because we are doing serious work on the <em>G’MIC</em> framework core!</p>
<h1 id="7-other-notable-points-in-the-project-life">7. Other notable points in the project life</h1>
<p>Listed here is some other important news that has punctuated the life of the project since August 2018.</p>
<h2 id="7-1-we-now-accept-donations-"><a href="#7-1-we-now-accept-donations-" class="header-link-alt">7.1. We now accept donations!</a></h2>
<p>This is essential news for us: since March 2019, the <em>G’MIC</em> project has been granted permission to <a href="https://libreart.info/en/projects/gmic"><strong>collect donations</strong></a> (via <em>Paypal</em>), to help in its maintenance and development!</p>
<p><a href="https://gmic.eu/gmic270/original/chat_dons.gif"><img src="https://gmic.eu/gmic270/original/chat_dons.gif" alt="Cute kitten animation"></a></p>
<p>This is a good thing, because until now there was no simple way for a public research laboratory such as the GREYC to accept donations supporting the development of a free software application like <em>G’MIC</em>, an application used daily by several thousand people around the world. And we currently have no other way to finance this piece of software in the long term.</p>
<p>Thus, we have partnered with <a href="https://libreart.info/en/">LILA</a> (<em>Libre comme l’Art</em>), a French non-profit organization promoting Arts, Artists and Free Software, which agreed to collect donations on our behalf.</p>
<figure>
<a href="https://libreart.info/en/projects/gmic">
<img src="https://gmic.eu/gmic270/thumb/assoc_lila.png" alt="logo of the LILA association">
</a>
<figcaption><em>Fig.7.1: Logo of the LILA association, which collects donations for the G’MIC project.</em></figcaption>
</figure>

<p>In practice, this took a while to set up, but now that the donation system is operational, we hope to benefit from it in the future to make the project development even faster (the possible uses of the raised funds are detailed on <a href="https://libreart.info/en/projects/gmic">the donations page</a>, this being of course very dependent on the amount of money collected).</p>
<p>For the sake of transparency, we will <a href="https://gmic.eu/gmic270/fullsize/donations_march.png">post the monthly amount of collected donations</a> on the project website. At this point, we don’t really know what to expect in practice. We will see how these donations evolve. Of course, we would like to thank all those who have already participated (or plan to do so) in supporting our open-source framework for image processing. Our ultimate dream would be, one day, to say that the illustration below is only a distant memory!</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/en_commitstrip.png">
<img src="https://gmic.eu/gmic270/thumb/en_commitstrip.png" alt="The reality of the development of the G'MIC project">
</a>
<figcaption><em>Fig.7.2: The harsh reality of the development of the G’MIC project ☺ (<a href="https://www.commitstrip.com/fr/2014/05/07/the-truth-behind-open-source-apps/">illustration from the CommitStrip website</a>).</em></figcaption>
</figure>

<h2 id="7-2-integrating-smart-coloring-into-gimp"><a href="#7-2-integrating-smart-coloring-into-gimp" class="header-link-alt">7.2. Integrating “Smart Coloring” into GIMP</a></h2>
<p>Let us also mention the work of <a href="https://girinstud.io/about/">Jehan</a>, known to PIXLS.US readers as a regular GIMP developer. Jehan was hired by the GREYC laboratory in September 2018 to work on <em>G’MIC</em>, on a 12-month fixed-term contract funded by a grant from the <a href="https://ins2i.cnrs.fr/">INS2I Institute of the CNRS</a> (for which we are grateful).</p>
<p>One of his first missions was to re-implement the <em>G’MIC</em> “Smart Coloring” algorithm (<a href="https://pixls.us/blog/2017/06/g-mic-2-0">which we had already talked about previously</a>) as a new interactive mode integrated into the existing GIMP “<em>Bucket Fill</em>” tool.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/smart_coloring.png">
<img src="https://gmic.eu/gmic270/thumb/smart_coloring.png" alt="Smart Coloring Algorithm">
</a>
<figcaption><em>Fig.7.3: G’MIC’s “Smart Coloring” algorithm, now available in GIMP, helps illustrators color their drawings more quickly.</em></figcaption>
</figure>

<p>Jehan described all his work in <a href="https://girinstud.io/news/2019/02/smart-colorization-in-gimp/">a blog post</a>, which we strongly recommend reading. Of course, we don’t want to copy his post here, but we do want to mention this activity and consider it another original contribution of the <em>G’MIC</em> project to free software for graphic creation: at the GREYC laboratory, we are really happy and proud to have imagined and developed an image colorization algorithm that artists can use through a tool well integrated into a piece of software as popular as GIMP!</p>
<p>This intelligent colorization algorithm has been the subject of <a href="https://hal.archives-ouvertes.fr/hal-01891876">scientific publications</a> and of presentations at the <em>GRETSI’2017</em> and <em>EuroGraphics VMV’2018</em> conferences, as well as at the <a href="https://www.youtube.com/watch?v=3oHe0Y43dx8"><em>Libre Graphics Meeting’2019</em></a>. And it is with great pleasure that we see this algorithm used in real life for various creations (as in <a href="https://www.youtube.com/watch?v=Z5THsjJGYcE&amp;feature=youtu.be">this great video</a> by <em>GDQuest</em>, colorizing sprites for video games, for instance).</p>
<p>Scientific research carried out in a public laboratory, made available to the general public: that is what we want to see!</p>
<h2 id="7-3-other-news-related-to-the-g-mic-project"><a href="#7-3-other-news-related-to-the-g-mic-project" class="header-link-alt">7.3. Other news related to the <em>G’MIC</em> project</a></h2>
<ul>
<li><p>Recently, a major improvement in the performance of <em>G’MIC</em> under <em>Windows</em> has been achieved, by recoding the random number generator (now <a href="https://en.wikipedia.org/wiki/Reentrancy_(computing)">reentrant</a>) and removing some slow <a href="https://en.wikipedia.org/wiki/Mutual_exclusion">mutexes</a> that were responsible for performance drops in all filters requiring sequences of random numbers (and there are many!). As a result, some filters run four to six times faster under Windows!</p>
</li>
<li><p>Since December 2018, our <em>G’MIC-Qt</em> plug-in has been available for <a href="https://en.wikipedia.org/wiki/Paint.net"><em>Paint.net</em></a>, a freeware image editor for <em>Windows</em> (though not open-source). This has been possible thanks to the work of <a href="https://github.com/0xC0000054">Nicholas Hayes</a>, who wrote the <a href="https://en.wikipedia.org/wiki/Glue_code">glue code</a> allowing the interaction between our <em>G’MIC-Qt</em> plug-in and the host software. Users of Paint.net are now able to benefit from the 500+ filters offered by <em>G’MIC</em>. This plug-in, <a href="https://forums.getpaint.net/topic/113564-gmic-8-14-2019">available here</a>, has already been voted “<em>Best Plug-in of the Year 2018</em>” by the members of the <em>Paint.net</em> forum ☺!</p>
</li>
<li><p>Since October 2018, the <em>G’MIC-Qt</em> plug-in for GIMP has been compiled and made available for <em>macOS</em> by a new maintainer, <a href="https://www.patreon.com/andreaferrero">Andrea Ferrero</a>, who is also the main developer of the free software application <a href="http://photoflowblog.blogspot.com/">Photoflow</a>, a non-destructive image editor (<a href="https://discuss.pixls.us/t/pre-compiled-gimp-plug-in-for-osx-ready-for-testing/">more information here</a>). Many thanks, Andrea, for this wonderful contribution!</p>
<ul>
<li>Since the announced shutdown of the <em>Google+</em> social network, we have opened two new accounts, on <a href="https://framasphere.org/people/b1132ee0b40a013639932a0000053625">Framasphere</a> and <a href="https://www.reddit.com/r/gmic">Reddit</a>, to share news about the project’s life (but the <a href="https://twitter.com/gmic_ip">Twitter feed</a> is still our most active account).</li>
</ul>
</li>
<li><p>Let us also thank <em>Santa Claus</em>, who kindly brought us a materialized version of our mascot “Gmicky” last year. It looks almost perfect!</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/gmicky_irl.png">
<img src="https://gmic.eu/gmic270/thumb/gmicky_irl.png" alt="Gmicky IRL">
</a>
<figcaption><em>Fig.7.4: The mascot “Gmicky”, brought by Santa Claus, in December 2018.</em></figcaption>
</figure>
</li>
<li><p>The <em>G’MIC</em> project was presented at the <a href="https://www.normandie.fr/feno">FENO</a>, the “<em>Fête de l’Excellence Normande</em>“, from 12 to 14 April 2019, at the Caen Exhibition Centre. We were hosted on the stand of the <a href="http://normandie.cnrs.fr/"><em>CNRS Normandie</em></a>, and we carried out demonstrations of <a href="https://gmic.eu/gmic270/fullsize/teaser_style_transfer.png">style transfer (<em>teaser</em>)</a> and <a href="https://gmic.eu/gmic270/fullsize/teaser_illumination2d.png">automatic illumination of clip arts (<em>teaser</em>)</a>, for the general public.</p>
<figure>
<a href="https://gmic.eu/gmic270/fullsize/feno.png">
<img src="https://gmic.eu/gmic270/thumb/feno.png" alt="FENO">
</a>
<figcaption><em>Fig.7.5: We were present at the CNRS stand, for G’MIC demonstrations, at the “Fête de l’Excellence Normande 2019” (FENO).</em></figcaption>
</figure>

</li>
</ul>
<ul>
<li>And to dig even deeper, here are some other external links we found interesting, and which mention <em>G’MIC</em> in one way or another:<ul>
<li>A <a href="https://youtu.be/cshL2EjFdXc">video presentation of the plug-in <em>G’MIC-Qt</em></a>, by <em>Chris’ Tutorial</em>;</li>
<li>The Youtube channel <a href="https://www.youtube.com/channel/UCPHIhisbs90ks4-4EsdXtpQ"><em>MyGimpTutorialChannel</em></a> offers a lot of videos showing how to use <em>G’MIC-Qt</em> in GIMP to achieve various effects (mostly in German);</li>
<li><a href="https://www.theclinic.cl/"><em>The Clinic</em></a>, a Chilean weekly newspaper, apparently used <em>G’MIC</em> <a href="https://twitter.com/nacecontragolpe/status/1106917303587885056/photo/1">to achieve an effect on one of its covers</a> (via the smoothing filter <strong>“Artistic / Dream Smoothing”</strong>);</li>
<li>Another <a href="https://www.youtube.com/watch?v=yv7a7R3gTFA">video tutorial</a>, showing how to use the <em>G’MIC</em> <strong>“Artistic / Rodilius”</strong> filter to create stylized animal photos.</li>
</ul>
</li>
</ul>
<h1 id="8-the-future">8. The future</h1>
<p>As you can see, <em>G’MIC</em> is still an active open-source project, and with its 11 years of existence, it can be considered mature enough to be used “in production” (whether artistic or scientific).</p>
<p>We have never defined and followed a precise roadmap for the project’s development: functionalities come according to the needs of the developers and users (and the limited time we can devote to them!). At the moment, there is a lot of interest in image processing methods based on neural networks and <a href="https://en.wikipedia.org/wiki/Deep_learning">deep learning techniques</a>. It is therefore possible that some of these methods will one day be integrated into the software (for instance, we already have prototype code running in <em>G’MIC</em> that actually learns from image data with <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convolutional neural networks</a>, but we are still at the prototyping stage…).</p>
<p>After 11 years of development (make it 20 years, if we include the development of the <a href="http://cimg.eu"><em>CImg</em></a> library on which <em>G’MIC</em> is based), we have reached a point where the core of the project is, technically speaking, sufficiently well designed and stable that it should not need a complete rewrite in the coming years. In addition, the number of features available in <em>G’MIC</em> already covers a large part of traditional image processing needs.</p>
<p>The evolution of this project may therefore take several paths, depending on the human and material resources that we will be able to devote to it in the future (for the development, but also in project management, communication, etc.). Achieving an increase in these resources will undoubtedly be one of the major challenges of the coming years, if we want <em>G’MIC</em> to continue its progress (and we already have plenty of ideas for it!). Otherwise, this image processing framework might end up being just maintained in its current (and functional) state. It is of course with a hope for progression that we have recently set up <a href="https://libreart.info/en/projects/gmic">the donation page</a>. We also hope that other opportunities will soon arise to enable us to make this project more visible (you are invited to share this post if you like it!)</p>
<p>That’s it for now: this long post is over. Thank you for holding on until the end; you can resume normal activity! I’ll be happy to answer any questions in the comments.</p>
<hr>
<p><strong>Post-scriptum</strong>: Note that the 3D animation displayed as the <em>teaser</em> image for this post was actually generated by <em>G’MIC</em>, via the command <code>$ gmic x_starfield3d</code>. A good opportunity to recall that <em>G’MIC</em> also has its own <em>3D</em> rendering engine capable of displaying simple objects, which is very practical for scientific visualization! We may have the occasion to talk about it again in a future post…</p>
<p>A special thank you for reviewing and helping to translate this article to:<br>Patrick David, Sébastien Fourey, Christine Porquet, Ryan Webster.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Quick digiKam Tip: Back up digikamrc file]]></title>
            <link>https://pixls.us/blog/2019/07/quick-digikam-tip-back-up-digikamrc-file/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/07/quick-digikam-tip-back-up-digikamrc-file/</guid>
            <pubDate>Tue, 09 Jul 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="/images/logo/pixls-atom.png" /><br/>
                <h1>Quick digiKam Tip: Back up digikamrc file</h1> 
                  
                <p>digiKam stores the current state of the application in the <em>~/.config/digikamrc</em> file. This file keeps track of pretty much everything: from the database connection profile and custom toolbar settings, to the last-used curve and sharpening parameters. So next time you install or reinstall digiKam, don’t forget to back up the <em>digikamrc</em> file. This way, you don’t have to configure a fresh digiKam installation from scratch. Simply copy the file to a safe location or external storage device, and drop the file into the <em>~/.config</em> folder before you run digiKam.</p>
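On a typical Linux system, the backup and restore steps boil down to two copy commands. A sketch (the <em>~/digikam-backup</em> destination is just an illustration; pick any safe location):

```shell
# Back up digiKam's configuration file (default location on Linux)
mkdir -p ~/digikam-backup
cp ~/.config/digikamrc ~/digikam-backup/ 2>/dev/null || echo "digikamrc not found (has digiKam run yet?)"

# Later, restore it to a fresh installation before the first launch:
# cp ~/digikam-backup/digikamrc ~/.config/
```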
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Location Tracking for Photographers with GPS Logger and Trekarta]]></title>
            <link>https://pixls.us/blog/2019/06/location-tracking-for-photographers-with-gps-logger-and-trekarta/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/06/location-tracking-for-photographers-with-gps-logger-and-trekarta/</guid>
            <pubDate>Fri, 28 Jun 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="/images/logo/pixls-atom.png" /><br/>
                <h1>Location Tracking for Photographers with GPS Logger and Trekarta</h1> 
                  
                <p>When it comes to Android apps for photographers, we are spoiled for choice. From depth-of-field and golden hour calculators to sun position and remote control apps – there are plenty of clever tools to choose from. But there is one particular app combination that can prove to be indispensable for any photographer on the move: a GPS logger and a GPX viewer. There are two main reasons for that.</p>
<!--more-->
<ol>
<li><p>Tracking your movements and saving them in the GPX format can come in handy for geotagging photos.</p>
</li>
<li><p>The ability to attach comments to the current location allows you to use the GPS logging app to note places you either photographed or you plan to photograph later. You can then use a GPX viewer app to see and manage bookmarked locations.</p>
</li>
</ol>
<p>There are several apps that offer GPS logging and viewing, but you can’t go wrong with <a href="https://gpslogger.app/">GPS Logger for Android</a> and <a href="https://trekarta.info/">Trekarta</a>. Both apps are released under an open source license, and they are available free of charge on Google Play and F-Droid.</p>
<figure>
<img src="https://pixls.us/blog/2019/06/location-tracking-for-photographers-with-gps-logger-and-trekarta/gpstracker.png" alt="GPS Logger for Android in all its bare-bones beauty" />
</figure>

<p>How you set up GPS Logger for Android is a matter of personal preference. One way to go is to configure the app to automatically start tracking on boot and upload tracks to the desired destination (e.g., a NAS or a file sharing service).</p>
<p>Once GPS Logger for Android is running, adding a comment to the current location is as easy as pulling down the notification drawer and tapping <strong>Comment</strong>. The app saves the tracks as GPX files in the <em>Android/data/com.mendhak.gpslogger/files</em> directory on your Android device. To view a GPX file in Trekarta, use a file manager to navigate to the directory, and use Android’s sharing functionality to send the desired GPX file to Trekarta.</p>
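The saved GPX files are plain XML, so the bookmarked locations are easy to extract with any language's standard library. A minimal sketch in Python (the sample data and function name are hypothetical; GPS Logger may write GPX 1.0 rather than 1.1, in which case the namespace URI differs):

```python
# Pull waypoints (the "comments" GPS Logger attaches to locations) out of GPX text
import xml.etree.ElementTree as ET

GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_waypoints(gpx_text):
    """Return a list of (lat, lon, name) tuples for every <wpt> element."""
    root = ET.fromstring(gpx_text)
    points = []
    for wpt in root.findall("gpx:wpt", GPX_NS):
        name = wpt.findtext("gpx:name", default="", namespaces=GPX_NS)
        points.append((float(wpt.get("lat")), float(wpt.get("lon")), name))
    return points

sample = """<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <wpt lat="48.8584" lon="2.2945"><name>Shot the tower at dusk</name></wpt>
</gpx>"""
print(read_waypoints(sample))
```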
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Processing a nightscape in Siril]]></title>
            <link>https://pixls.us/articles/processing-a-nightscape-in-siril/</link>
            <guid isPermaLink="true">https://pixls.us/articles/processing-a-nightscape-in-siril/</guid>
            <pubDate>Tue, 16 Apr 2019 15:08:28 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/resultat_03_final.jpg" /><br/>
                <h1>Processing a nightscape in Siril</h1> 
                <h2>A basic tutorial</h2>  
                <p><a href="https://www.siril.org/" title="Siril, A free astronomical image processing software">Siril</a> is a program for processing astronomical photographs.</p>
<p>In this tutorial, I’ll show you how to process a nightscape in Siril 0.9.10.</p>
<p>It isn’t intended to be a comprehensive tutorial, but rather to present a basic general workflow that is a good starting point for those who want to learn Siril.</p>
<p>For this purpose, I’m sharing the raw files I used for the image I presented <a href="https://discuss.pixls.us/t/first-outing-of-the-new-year-the-creations/10658">here</a>, except that for this tutorial I limited the number of frames for the sake of bandwidth and processing speed.</p>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/resultat_03_final.jpg" alt="The Creations by Sebastien Guyader">
<figcaption>
The Creations, by Sebastien Guyader
</figcaption>
</figure>

<p>You can find and download the raw files <a href="https://pixls.us/files/Siril_Tutorial-20190416T142820Z-001.zip">here</a> (~1GB).</p>
<h2 id="setup">Setup<a href="#setup" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The raw files are placed in specific sub-folders according to their use:</p>
<ul>
<li>bias/offset frames → <code>./Bias</code> (20 files)</li>
<li>dark frames → <code>./Darks</code> (15 files)</li>
<li>flat field frames → <code>./Flats</code> (15 files)</li>
<li>main subject/light frames → <code>./Lights</code> (10 files)</li>
</ul>
<p>Bias, dark, and flat field frames are also called “calibration” frames: their purpose is to improve the quality of the image by removing unwanted sensor signal, improving the signal-to-noise ratio (in the case of bias and dark frames) and correcting vignetting (with the flat frames).
There are several places where you can learn more about the <a href="http://www.rawastrodata.com/pages/typesofimages.html">different types of frames for astrophotography</a>.</p>
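To make the role of each calibration frame concrete, here is a sketch of the arithmetic with synthetic NumPy data (the sensor model, numbers, and variable names are made up for illustration; they are not Siril's internals):

```python
# Darks remove the sensor's thermal/readout signal; flats divide out vignetting.
import numpy as np

rng = np.random.default_rng(0)
dark_level = 100.0                           # made-up constant sensor offset
vignette = np.linspace(1.0, 0.5, 8)          # made-up falloff toward the frame edge
scene = rng.uniform(1000, 2000, size=8)      # the "true" sky signal

light = scene * vignette + dark_level        # what the camera actually records
master_dark = np.full(8, dark_level)         # average of many dark frames
master_flat = vignette.copy()                # bias-subtracted, averaged flat frames

# Calibration: subtract the dark, divide by the normalized flat
calibrated = (light - master_dark) / (master_flat / master_flat.mean())
# 'calibrated' now equals the true scene up to a constant brightness factor
```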
<p>At the root of the folder, I placed two text files with the <code>.ssf</code> extension, these are scripts used by Siril for batch processing the files. Quite useful. If you want to run a script from Siril, place the <code>.ssf</code> files in <code>~/.siril/scripts</code>. Upon restarting Siril, a new Scripts menu appears in the top menu bar, allowing you to launch the installed scripts.</p>
<p>I suggest you download <a href="https://pixls.us/files/Siril_Tutorial-20190416T142820Z-001.zip">the whole folder</a> (~1GB), and move the scripts as indicated above. This way, if you set the working directory in Siril to the root of the folder, launching the script named <code>processing_from_raw.ssf</code> will automagically process the raws and create the output image in both <code>.fit</code> and <code>.tif</code> (16-bit) formats. Please note that in order to successfully run the scripts, there must be a folder structure like the one used in this tutorial.</p>
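In shell terms, installing the scripts is just the following (assuming your current directory is the unpacked tutorial folder):

```shell
# Copy the tutorial's .ssf scripts to where Siril 0.9.10 looks for them
mkdir -p ~/.siril/scripts
cp ./*.ssf ~/.siril/scripts/ 2>/dev/null || echo "no .ssf files here; cd to the tutorial folder first"
```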
<h2 id="step-by-step-processing">Step-by-step processing<a href="#step-by-step-processing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I will present the steps I used to process an image of the Milky Way. I don’t know if it’s the best way, but it’s probably close to what the developers of Siril advise to do for the general case of starting from raw files (actually, I started from one of their scripts and just slightly adapted it).</p>
<p>We will start with processing the calibration files, and then processing the lights.</p>
<h2 id="preparing-the-bias-frames">Preparing the bias frames<a href="#preparing-the-bias-frames" class="header-link"><i class="fa fa-link"></i></a></h2>
<ol>
<li>Set the working directory to the Bias sub-folder by clicking on <code>Change dir…</code>.</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/JscqgBj.jpg">
</figure>

<ol start="2">
<li>We will use the 20 bias frames to generate a master-bias frame. To load the bias frames, click on the <code>+</code> button as shown (make sure that you select <strong>RAW DSLR Camera Files</strong> in the combo box) and select the bias frames located in the Bias subfolder.</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/wnfTwRf.jpg">
</figure>

<ol start="3">
<li>In the “Sequence name” field, enter <code>bias</code> (or whatever you see fit) to set the prefix of the sequence and subsequent files, and click <strong>Convert</strong> to convert the files to the FITS format, which is the main format used by Siril. Note that you don’t need to demosaic the files yet, make sure the <strong>Debayer</strong> box is unchecked.</li>
</ol>
<p>When done converting the bias frames, a window will pop up showing a preview of one of the bias frames. Note that since it’s not demosaiced, it will only show as a single-channel B&amp;W image.</p>
<p>At this point, the bias frames are loaded and ready to be processed to make a master-bias frame.</p>
<ol start="4">
<li>In the <strong>Stacking</strong> tab, choose <strong>Average stacking with rejection</strong> as stacking method, and <strong>No normalisation</strong> under the normalisation combo box. You can leave the Sigma parameters at their default (unless you know or want to experiment for better values).</li>
</ol>
<p>It should look like this:</p>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/Arxr4GQ.jpg">
</figure>

<ol start="5">
<li>Click on the <strong>Start stacking</strong> button. The resulting master-bias frame will be saved as <code>bias_stacked.fit</code> in the <code>Bias</code> subfolder.</li>
</ol>
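“Average stacking with rejection” is essentially a sigma-clipped mean: for each pixel, values that deviate too far from the typical value across frames are rejected before averaging. A minimal NumPy sketch of the idea (using a robust median/MAD estimate; Siril's actual rejection algorithms differ in detail):

```python
import numpy as np

def sigma_clip_stack(frames, sigma=3.0):
    """frames: (n_frames, H, W) array; returns the per-pixel clipped mean."""
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    # robust spread estimate: 1.4826 * MAD approximates the standard deviation
    mad = np.median(np.abs(stack - med), axis=0)
    keep = np.abs(stack - med) <= sigma * np.maximum(1.4826 * mad, 1e-12)
    # average only the values that survived rejection
    return np.where(keep, stack, 0.0).sum(axis=0) / keep.sum(axis=0)

frames = np.full((5, 2, 2), 10.0)
frames[0, 0, 0] = 500.0            # e.g. a hot pixel or plane trail in one frame
master = sigma_clip_stack(frames)  # the outlier is rejected; every pixel is 10
```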
<h2 id="preparing-the-flat-field-frames">Preparing the flat field frames<a href="#preparing-the-flat-field-frames" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Since the flats also contain the sensor readout noise (captured in the bias frames), we should remove it by subtracting the master-bias.</p>
<ol>
<li><p>In the <strong>File conversion</strong> tab, remove the files already loaded by clicking on the button located just below the <strong>-</strong> (minus) button, then click on the <strong>+</strong> (plus) button to select and load the flat frames located in the Flats subfolder.</p>
</li>
<li><p>Set the working directory to the Flats sub-folder by clicking on “Change dir…” and set the Sequence name as “flats”.</p>
</li>
<li><p>Like for the bias frames, ensure <strong>Debayer</strong> is unchecked, then click on <strong>Convert</strong>.</p>
</li>
<li><p>In the <strong>Pre-processing</strong> tab, check only the <strong>Use offset</strong> box, click on <strong>Browse</strong> to select the <code>Bias/bias_stacked.fit</code> file, and click on <strong>Start pre-processing</strong>.</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/spLV1F6.jpg">
</figure>

<ol start="5">
<li><p>To generate the master-flat, go to the <strong>Stacking</strong> tab, and this time set <strong>Normalisation</strong> to <em>Multiplicative</em> and the Stacking Method as <strong>Average with rejection</strong>.</p>
</li>
<li><p>Click on <strong>Start stacking</strong> to produce the <code>pp_flat_stacked.fit</code> master-flat frame in the Flats subfolder.</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/wqpmygU.jpg">
</figure>


<h2 id="preparing-the-dark-frames">Preparing the dark frames<a href="#preparing-the-dark-frames" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As with the bias and flats, you need to load the dark frames.</p>
<ol>
<li><p>In the <strong>File conversion</strong> tab, remove the files already loaded, select and load the dark frames located in the Darks subfolder.</p>
</li>
<li><p>Set the working directory to the Darks sub-folder by clicking on <strong>Change dir…</strong>, and set <strong>Sequence</strong> name as <code>darks</code>.</p>
</li>
<li><p><strong>Debayer</strong> should be unchecked.</p>
</li>
<li><p>Click on <strong>Convert</strong>.</p>
</li>
<li><p>The darks need to be stacked the same way as the bias frames. In the <strong>Stacking</strong> tab, choose <strong>Average with rejection</strong> and <strong>No normalisation</strong>.</p>
</li>
<li><p>Click <strong>Start Stacking</strong>.</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/AThdxJT.jpg">
</figure>

<p>The master-dark frame is saved as <code>Darks/dark_stacked.fit</code>.</p>
<p>Note: if you take images often in the same conditions (same air temperature, same exposure settings), you can save the <code>dark_stacked</code> and <code>pp_flat_stacked</code> files, and re-use them to process future light frames faster. I read on some forums that some astrophotographers keep their calibration files and use those for around 1 year, before taking new calibration frames.</p>
<h2 id="preparing-the-light-frames">Preparing the light frames<a href="#preparing-the-light-frames" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Now it’s time to start processing the light frames, by first subtracting the master-dark (which also contains the bias signal) and dividing by the master-flat (from which the bias has already been subtracted).</p>
<ol>
<li><p>Select the light frames in the <strong>File conversion</strong> tab.</p>
</li>
<li><p>Set the Sequence name to <code>lights</code>, and point the working directory to the Lights sub-folder (<code>Change dir…</code>).</p>
</li>
<li><p>Convert the files, still without debayering.</p>
</li>
<li><p>Then go to the <strong>Pre-Processing</strong> tab, check <strong>Use dark</strong>, select the <code>Darks/dark_stacked.fit</code> file, check <strong>Use flat</strong>, and select the <code>Flats/pp_flat_stacked.fit</code> file.</p>
</li>
<li><p>Make sure that the other boxes are checked as in the following screenshot.</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/CrH8DGw.jpg">
</figure>

<p>Note that “Cosmetic Correction” can also be done from the “Image Processing” tab.</p>
<ol start="6">
<li>Click <strong>Start pre-processing</strong>.</li>
</ol>
<p>This will produce new FITS files with the prefix <code>pp_light_</code> and the corresponding <code>.seq</code> file. These files are loaded.</p>
<h2 id="demosaicing-the-files">Demosaicing the files<a href="#demosaicing-the-files" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>It’s time to demosaic our processed files. There’s a quirk in the GUI here: after pre-processing, when you uncheck the “Use dark” and “Use flat” boxes, the “Debayer FITS images before saving” option and the “Start pre-processing” button become grayed out.</p>
<ol>
<li>In the <strong>File conversion</strong> tab, remove the selected files and load the 10 <code>pp_light_000xx.fit</code> files.</li>
<li>Check the <strong>Debayer</strong> box and write <code>db_pp_light</code> as the sequence name.</li>
<li>Click <strong>Convert</strong>.</li>
</ol>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/ntji2im.jpg">
</figure>

<p>The pre-processed lights will be saved as FITS files, and the corresponding <code>db_pp_light.seq</code> file loaded. Two preview windows will open this time, one with the 3 RGB channels separated, and one with the RGB composite image.</p>
<ol start="4">
<li>In the <strong>Register</strong> tab, select <strong>Global Star Alignment (deep-sky)</strong> from the registration method drop-down list and click <strong>Go register</strong>.</li>
</ol>
<p>If you have more than 8 GB of RAM, you can try checking the <strong>Simplified Drizzle x2</strong> box (it will up-sample the images by a factor of 2, increasing the RAM usage by a factor of 4). Siril will detect the stars and register each of the 10 images. The preview windows will be updated. By the way, you can play with the zoom and select <strong>AutoStretch</strong> to get a better preview of the selected image.</p>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/L86SJx1.jpg">
</figure>

<ol start="5">
<li><p>In the <strong>Stacking</strong> tab, make sure that <strong>Average with rejection</strong> is selected as the stacking method, and that <strong>Additive with scaling</strong> is set for <em>Normalisation</em>.</p>
</li>
<li><p>Click on <strong>Start stacking</strong>.</p>
</li>
</ol>
<p>The resulting aligned and stacked image will be saved as <code>Lights/r_db_pp_light_stacked.fit</code>.</p>
<ol start="7">
<li>At this step, you can also save the resulting image as JPEG, TIFF, PNG, etc. for further processing in your favorite image editor. On the menu, just click on <em>File</em> &gt; <em>Save As</em>, and pick the image format you wish (or right-click on the RGB windows and pick the format that best suits you).</li>
</ol>
<h2 id="post-processing-the-image">Post-processing the image<a href="#post-processing-the-image" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Siril can apply some more or less specialized post-processing to your image. I found it worth using.</p>
<ul>
<li>While the stacked image is still loaded in Siril, you can apply a log transform (the stacked image is still linear at this point). I haven’t found how to do it in the GUI, but you can simply type <code>log</code> in the “Console” field at the bottom of the Output logs tab, in the main window.</li>
<li>Still in the console field, you can use the <code>crop</code> command followed by the coordinates of the bounding box in pixels to crop the image (some auto-detection tools in Siril require the image to be cropped, to remove the borders introduced by aligning the images, in order to work properly). For example, my image can be cropped by typing <code>crop 30 30 5950 3970</code>.</li>
<li>You can apply green noise removal in the “Image Processing” tab &gt; “Remove Green Noise…”.</li>
<li>Lucy-Richardson deconvolution can be applied in “Image Processing” menu option &gt; “Deconvolution…”. 10 iterations and a Sigma value of 0.6 are a good starting point.</li>
</ul>
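To see why the log transform matters, here is what a generic log stretch does to linear pixel values; this is a conceptual sketch in NumPy, not Siril's exact implementation:

```python
# Compress the huge dynamic range of a linear stacked image so that faint
# detail (e.g. nebulosity) becomes visible alongside bright stars.
import numpy as np

def log_stretch(img):
    """Map linear pixel values to [0, 1] with a logarithmic curve."""
    img = np.asarray(img, dtype=float)
    return np.log1p(img) / np.log1p(img.max())

linear = np.array([0.0, 10.0, 100.0, 1000.0])   # values spanning 3 decades
stretched = log_stretch(linear)                  # faint values are lifted the most
```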
<p>The resulting image can be saved as JPEG, TIFF, PNG, etc. for further processing in your favorite image editor or as a finished image if you’re satisfied.</p>
<h2 id="processing-for-the-foreground">Processing for the foreground<a href="#processing-for-the-foreground" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The <em>problem</em> with this whole process is that, because the images have been aligned with the stars as reference, the foreground will be blurred: the Earth rotated between successive frames. What I do is reprocess the light frames from just after the calibration step (i.e. after the dark and flat frames correction), skipping only the star registration step. By doing so, the foreground undergoes the same pre- and post-processing, and the resulting image has a sharp foreground and a trailing sky.</p>
<p>I provided a script (<code>processing_from_raw_foreground.ssf</code>) which will do that for you, if you already used the first script or if you use the same file naming convention as in the script.</p>
<p>Finally, in your favorite image editor, you can combine the “sky” and “foreground” images using a mask, to get both the sky and the foreground sharp.</p>
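The mask-based combination is conceptually a per-pixel linear interpolation between the two renderings. A NumPy sketch of what the layer mask computes (the tiny arrays are illustrative only):

```python
# Where the mask is 1 take the sharp foreground, where 0 take the sharp sky;
# intermediate values feather the transition.
import numpy as np

def blend(sky, foreground, mask):
    """mask in [0, 1], same shape as the images (1 = foreground wins)."""
    sky, foreground, mask = (np.asarray(a, dtype=float) for a in (sky, foreground, mask))
    return mask * foreground + (1.0 - mask) * sky

sky  = np.array([[0.9, 0.9], [0.2, 0.2]])   # sharp stars, blurred ground
fg   = np.array([[0.5, 0.5], [0.6, 0.6]])   # trailed stars, sharp ground
mask = np.array([[0.0, 0.0], [1.0, 1.0]])   # bottom rows = foreground
print(blend(sky, fg, mask))
```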
<p>Here’s what I obtained following these steps (but using the scripts), after just combining the 2 images in Gimp:</p>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/5TeMlIV.jpg">
</figure>

<p>And after quick curve and saturation tweaking in Gimp:</p>
<figure>
<img src="https://pixls.us/articles/processing-a-nightscape-in-siril/o6Wpmvc.jpg">
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[New Topic Previews]]></title>
            <link>https://pixls.us/blog/2019/04/new-topic-previews/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/04/new-topic-previews/</guid>
            <pubDate>Tue, 02 Apr 2019 17:18:10 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2019/04/new-topic-previews/pixls-critique.jpg" /><br/>
                <h1>New Topic Previews</h1> 
                <h2>Image previews now available for some categories</h2>  
                <p>I’ve been a member of the community over at <a href="https://blenderartists.org/" title="Blender Artists">blenderartists.org</a> (previously elysiun) for a long time (it’ll be <em>15 years</em> this October according to <a href="https://blenderartists.org/u/pld/summary">my profile there</a>).
So it was nice to see when they finally transitioned to using <a href="https://www.discourse.org/" title="Discourse homepage">Discourse</a> a little while back.</p>
<!--more-->
<p>What I really liked, though, was the work that Bart did to pull preview images from specific pages and tag lists and display them.
Here’s what their current homepage looks like:</p>
<figure>
<img src="https://pixls.us/blog/2019/04/new-topic-previews/ba-homepage.jpg">
<figcaption>
Current blenderartists.org homepage
</figcaption>
</figure>

<p>They use a different default main page style, “Categories” view, than we do (“Latest”). This just shows the site categories as a column on the left, then the latest posts in a column on the right.</p>
<p>The row of featured images along the top is actually part of a plugin that I’ll get to in a moment.</p>
<p>If you want to change your own default main page view of the forums, you can modify it at your account <code>Preferences</code> &rarr; <code>Interface</code> (and change it to <em>Categories</em>):</p>
<figure>
<img src="https://pixls.us/blog/2019/04/new-topic-previews/pixls-prefs.png">
</figure>


<h2 id="fancy-category-views"><a href="#fancy-category-views" class="header-link-alt">Fancy Category Views</a></h2>
<p>The default landing page is neat, but what they did with their <a href="https://blenderartists.org/c/artwork/forum-gallery" title="blenderartists.org gallery page">forum gallery</a> page is much neater:</p>
<figure>
<img src="https://pixls.us/blog/2019/04/new-topic-previews/ba-gallery.jpg">
<figcaption>
blenderartists.org forum gallery page
</figcaption>
</figure>

<p>They set up the <a href="https://meta.discourse.org/t/topic-list-previews/101646" title="Topic List Previews Plugin">Topic List Previews</a> plugin so the entire category is actually viewed as a tile of images.
I think we can all agree that this is generally a much nicer way to view categories that are heavily image-based.
Of course, I thought this was a natural fit for us as well!</p>
<p>So through the magic of having an invaluable resource like a darix, he was able to make it a reality for us!</p>
<p>We’ve got it implemented now on the <a href="https://discuss.pixls.us/c/processing/playraw" title="PIXLS.US Play Raw Category">Play Raw</a> category (now its own sub-category under the <a href="https://discuss.pixls.us/c/processing" title="PIXLS.US Processing Category">Processing</a> category), and on the <a href="https://discuss.pixls.us/c/critique" title="PIXLS.US Critique Category">Critique</a> and <a href="https://discuss.pixls.us/c/showcase" title="PIXLS.US Showcase Category">Showcase</a> categories.
If you haven’t had a chance to check it out yet, please do. (darix announced it in the thread <a href="https://discuss.pixls.us/t/play-raw-posts-and-you/11959">Play raw posts and you</a> so feel free to give us any further feedback in that topic.)</p>
<figure>
<img src="https://pixls.us/blog/2019/04/new-topic-previews/pixls-critique.jpg">
<figcaption>
The Critique category
</figcaption>
</figure>

<p><strong>Keep in mind</strong> that for those categories, the preview image will correspond to the first image in the first post. Try to remember to make the first image in those category topics the one you’ll want in the preview.</p>
<p>We haven’t enabled the featured row of images after some initial feedback. We may revisit it again at some point, but hopefully the Play Raw and Showcase categories will look a little better now. It certainly makes the categories a little easier and faster to navigate now that you can see the previews directly on the page.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC Finally Accepts Donations]]></title>
            <link>https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/</guid>
            <pubDate>Sat, 23 Mar 2019 17:06:10 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/david-spooky.jpg" /><br/>
                <h1>G'MIC Finally Accepts Donations</h1> 
                <h2>Help support an awesome team!</h2>  
<p>For years the incredible team over at <a href="https://gmic.eu" title="G&#39;MIC homepage">G’MIC</a> (GREYC’s Magic for Image Computing) has been producing an amazing image processing system <em>and</em> many awesome filters to go along with it.
They’ve got a very active and awesome community right here on <a href="https://discuss.pixls.us/c/software/gmic">their forums</a>, and they’ve been producing all manner of neat processing filters for photographers, digital artists, and scientists.</p>
<p>Because the project is under the auspices of a French research institution, the <a href="https://www.greyc.fr/">GREYC</a> laboratory in Caen, France, it was unable to accept donations.</p>
<p><strong>Until now!</strong></p>
<p>To avoid burying the lede, <strong>go and make a donation</strong> to the fabulous folks of the G’MIC project: <strong><a href="https://libreart.info/en/projects/gmic">https://libreart.info/en/projects/gmic</a></strong>.</p>
<!--more-->
<figure>
<a href='https://gmic.eu'>
<img src="https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/gmic-logo.jpg" width='800' height='194'>
</a>
</figure>

<p>I first heard about <a href="http://cimg.eu/greycstoration/">GREYCstoration</a> (proto-G’MIC) a long time ago (over 10 years) as the only really viable Free Software image denoising option for photographers.
It allowed me to de-noise images on par with (or, in many cases, better than) the then-popular <em>Noise Ninja</em>.
It’s been an essential part of my toolkit ever since!</p>
<p>Back then David Tschumperlé was really only looking for postcards from users as a “thank you” and maybe some occasional donations to pay for hot chocolates during the day.
I finally got to buy him a milkshake while at the <a href="https://libregraphicsmeeting.org">Libre Graphics Meeting</a> 2014 in Leipzig, Germany and for the value he has provided me with his software I owe him many, many more!</p>
<figure>
<img src="https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/david-lgm.jpg" width='640' height='494'>
<figcaption>
The man, the myth, the legend.
</figcaption>
</figure>

<p><strong>Value</strong> is exactly what I want to bring up in this post.
I’m sure many here have had an opportunity to use G’MIC in some form and any attempt at listing the wide range of filters and capabilities it provides would not do it justice.
If <em>you’ve</em> realized some value from the project, now is the time to show some love.</p>
<p>I’ve been lucky to call David a friend for a long time now and I can personally attest to his kindness and sincerity.
With that in mind I implore you: if you have a few spare dollars, euros, yen, pesos, or gold bullion <em>please</em> consider <a href="https://libreart.info/en/projects/gmic">donating to the project</a> and make sure the milkshakes (or hot chocolates) never stop flowing.
David (and Sébastien Fourey!) are a sound investment in providing high value to Free Software artists and photographers!</p>
<p><strong><a href="https://libreart.info/en/projects/gmic">Donate to the G’MIC project!</a></strong></p>
<p>(David is too modest to really come out and ask for support but “modest” isn’t really in my vocabulary - <a href="https://libreart.info/en/projects/gmic">so go donate!</a>)</p>
<h2 id="lila"><a href="#lila" class="header-link-alt">LILA</a></h2>
<p>For a long time David was unable to accept donations from community members.
There were some concerns from the research institution his lab is a part of in France.
Just recently, though, they managed to reach an agreement where the funds flow through a French non-profit called <a href="https://libreart.info/en/">Libre Comme L’Art</a>, LILA, that includes the fabulous Jehan Pagès of the <a href="https://www.gimp.org/">GIMP</a> team as a member and where the <a href="https://www.patreon.com/zemarmot">ZeMarmot</a> animated film is being produced.</p>
<figure>
<a href='https://libreart.info/en/'>
<img src="https://pixls.us/blog/2019/03/g-mic-finally-accepts-donations/LILA_logo.png" width='400' height='130'>
</a>
</figure>

<p>A big “Thank You!” to Jehan and LILA for making this possible!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Q&A with the CHDK Developers]]></title>
            <link>https://pixls.us/articles/a-q-a-with-the-chdk-developers/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-q-a-with-the-chdk-developers/</guid>
            <pubDate>Sun, 17 Feb 2019 20:11:42 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/reyalp-chdk-md-fireworks-20180704-9082-c2.jpg" /><br/>
                <h1>A Q&A with the CHDK Developers</h1> 
                <h2>Adding Features to a Camera: Hacker Meets Photographer</h2>  
                <h4 id="introduction">Introduction<a href="#introduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><a href="https://chdk.setepontos.com/" title="CHDK website">CHDK</a> is a free, open source software add-on that runs on Canon PowerShot cameras and expands their functionality. Some of its features are:</p>
<ul>
<li>Professional control: RAW files, bracketing, manual control over exposure, zebra mode, live histogram, grids, etc.</li>
<li>Motion detection: Trigger exposure in response to motion, fast enough to catch lightning.</li>
<li>USB remote: Simple DIY remote allows you to control your camera remotely.</li>
<li>Scripting: Control CHDK and camera features using uBASIC and Lua scripts. Enables time lapse, motion detection, advanced bracketing, and more.</li>
<li>PTP: Shooting control, live view, and file transfer from Linux and Windows.</li>
</ul>
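As a taste of the scripting feature, a CHDK Lua time-lapse can be as small as the sketch below. It uses CHDK's documented <code>shoot()</code> and <code>sleep()</code> functions; the header comment is CHDK's script-metadata convention, and the title and intervals are arbitrary examples:

```lua
--[[
@title Tiny timelapse
--]]
-- Take 10 pictures, one every 5 seconds
for i = 1, 10 do
    shoot()       -- autofocus, meter, and capture one frame
    sleep(5000)   -- wait, in milliseconds
end
```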
<p>I talked with the core team of developers to learn more about CHDK.</p>
<h4 id="how-did-chdk-start-who-were-the-first-developers-what-was-their-role-in-those-first-steps-do-you-have-any-information-on-who-those-people-are-where-they-come-from-or-their-professional-background-">How did CHDK start? Who were the first developers? What was their role in those first steps? Do you have any information on who those people are, where they come from, or their professional background?<a href="#how-did-chdk-start-who-were-the-first-developers-what-was-their-role-in-those-first-steps-do-you-have-any-information-on-who-those-people-are-where-they-come-from-or-their-professional-background-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>reyalp</strong>: That was before my time. The very first developer was <em>VitalyB</em>, and I don’t know much about his background.</p>
<p>In truth, CHDK is very loosely organized and informal, so I don’t know much about current contributors’ backgrounds either.</p>
<p><strong>waterwingz</strong>: Before my time too. The original hack seems to have taken off in 2006/2007 when it got a mention on the <a href='https://www.dpreview.com/forums/thread/1836713' target='_blank'>dpreview.com</a> site. Many people shared random bits and pieces based on what they were personally interested in working on. There was no project organization, and people did their own builds.</p>
<p>At some point, I believe, someone with the nick <em>GrAnd</em> got things organized around a wikia site and created a standardized set of build tools. Eventually an online discussion forum and autobuild server were added - not sure who gets credit for those.</p>
<p>But over the last 10 years, more or less, <em>reyalp</em> has coordinated the ongoing volunteer development efforts. There is still no real plan or schedule, but there seems to be community consensus on how things get done and what gets added to the package.</p>
<h4 id="some-people-say-that-chdk-was-first-developed-by-andrei-gratchev-a-href-https-fr-wikipedia-org-wiki-chdk-target-_blank-here-a-and-a-href-https-www-pcmag-com-article2-0-2817-2329392-00-asp-target-_blank-here-a-i-believe-he-is-grand-right-do-you-know-something-about-it-is-it-possible-that-andrei-gratchev-is-vitalyb-">Some people say that CHDK was first developed by Andrei Gratchev (<a href='https://fr.wikipedia.org/wiki/CHDK' target='_blank'>here</a> and <a href='https://www.pcmag.com/article2/0,2817,2329392,00.asp' target='_blank'>here</a>). I believe he is <em>GrAnd</em>, right? Do you know something about it? Is it possible that Andrei Gratchev is <em>VitalyB</em>?<a href="#some-people-say-that-chdk-was-first-developed-by-andrei-gratchev-a-href-https-fr-wikipedia-org-wiki-chdk-target-_blank-here-a-and-a-href-https-www-pcmag-com-article2-0-2817-2329392-00-asp-target-_blank-here-a-i-believe-he-is-grand-right-do-you-know-something-about-it-is-it-possible-that-andrei-gratchev-is-vitalyb-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>reyalp</strong>: <em>GrAnd</em> (Andrei Gratchev) and <em>VitalyB</em> are definitely not the same person. They have separate accounts on <em><a href='https://app.assembla.com/spaces/chdk/team' target='_blank'>assembla.com</a></em>.
<em>VitalyB</em> did the very first work on what eventually became CHDK, while <em>GrAnd</em> was an early developer who played a major role in organizing the project.
It’s possible that <em>GrAnd</em> originated the name CHDK.
I don’t know that history directly, but it could explain confusion over whether he was the founder.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/reyalp-chdk-md-lighting-20170911-8111-c2.jpg" width='1280' height='1707' alt='Lightning by reyalp'>
<figcaption>
Lightning captured by a Canon PowerShot G7x with CHDK motion detection script <a href='https://chdk.setepontos.com/index.php?topic=10864.0' target='_blank'>MDFB2013</a>, by reyalp, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="canon-cameras-run-on-digic-boards-as-far-as-i-m-aware-of-at-the-time-vitalyb-did-the-first-hack-people-had-already-hacked-digic-i-compact-canon-cameras-and-could-execute-custom-programs-what-was-the-big-leap-in-terms-of-development-or-finding-hooks-in-the-digic-ii-firmware-that-vitalyb-made-">Canon cameras run on DIGIC boards. As far as I’m aware of, at the time <em>VitalyB</em> did the first hack, people had already hacked DIGIC-I compact Canon cameras and could execute custom programs. What was the big leap (in terms of development or finding hooks in the DIGIC-II firmware) that <em>VitalyB</em> made?<a href="#canon-cameras-run-on-digic-boards-as-far-as-i-m-aware-of-at-the-time-vitalyb-did-the-first-hack-people-had-already-hacked-digic-i-compact-canon-cameras-and-could-execute-custom-programs-what-was-the-big-leap-in-terms-of-development-or-finding-hooks-in-the-digic-ii-firmware-that-vitalyb-made-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>reyalp</strong>: The original Digic-I cameras actually ran ROM DOS on a 16-bit x86 clone (except for the S1 IS, which is VxWorks on ARM and has a partial CHDK port developed by <em>srsa_4c</em>).</p>
<p>A hack for the DOS based cameras was developed by a <a href='http://rayer.g6.cz/hardware/a70.htm' target='_blank'>Czech developer</a>.</p>
<p>I don’t know if <em>VitalyB</em> was aware of this work, but the platforms were so different there wouldn’t likely have been much overlap.</p>
<p><strong>waterwingz</strong>: As far as I know, the big leap in hooking the DIGIC-II firmware was figuring out how to hack Canon’s firmware update process.</p>
<h4 id="it-s-stated-that-he-hacked-the-firmware-update-process-and-executed-his-own-program-instead-of-the-firmware-update-itself-that-first-program-aimed-to-make-a-copy-of-canon-firmware-how-exactly-could-he-get-a-copy-of-the-canon-firmware-by-blinking-a-led-why-did-he-need-a-copy-of-canon-firmware-">It’s stated that he hacked the firmware update process and executed his own program instead of the firmware update itself. That first program aimed to make a copy of Canon firmware. How exactly could he get a copy of the Canon firmware by blinking a LED? Why did he need a copy of Canon firmware?<a href="#it-s-stated-that-he-hacked-the-firmware-update-process-and-executed-his-own-program-instead-of-the-firmware-update-itself-that-first-program-aimed-to-make-a-copy-of-canon-firmware-how-exactly-could-he-get-a-copy-of-the-canon-firmware-by-blinking-a-led-why-did-he-need-a-copy-of-canon-firmware-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: From what I understand, he assembled a little piece of code that loaded and ran in the place of the expected firmware update code.</p>
<p>Once he could do that, by trial &amp; error he learned which memory address needed to be poked to turn one of the camera’s LEDs on &amp; off. And once he could do that, he extended the code to dump the camera’s memory contents serially via that LED to a phototransistor interfaced to an external computer.</p>
<p>After that it was a matter of disassembling the raw code to learn how the rest of the boot process and camera firmware worked.</p>
<p><strong>reyalp</strong>: I don’t know the specifics of exactly what <em>VitalyB</em> did for the very first camera, but generally to make a hack work with the existing firmware, you need a copy of the firmware code to disassemble and analyze.</p>
<p>The advantage of using LED blinking is that the code is really simple: you just need to know how to control an LED (done by writing to a specific address on these cameras) and a loop.</p>
<p>In contrast, writing a file to the SD card requires a whole stack with an SD driver, a filesystem driver and so on.</p>
<p>Without having already analyzed the firmware, you don’t know how to interface with those things, and on PowerShot cameras, they aren’t really available after a firmware update file is loaded.</p>
<p>Blinking was used frequently in the early days of CHDK, but around 2010 Alfredo Ortega and Oren Isacson of Core Labs worked out how to run scripts in Canon’s native scripting language (which we call Canon Basic). I wrote a script to dump the original firmware from Canon Basic, and we’ve used that as the primary way of dumping firmware ever since.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/CHDK-LED-dump.jpg" alt='CHDK LED dump by Andrei Gratchev'>
<figcaption>
CHDK LED dump, by Andrei Gratchev, all rights reserved.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/CHDK-LED-dump II.jpg" alt='CHDK LED dump by Andrei Gratchev'>
<figcaption>
Reading CHDK LED dump, by Andrei Gratchev, all rights reserved.

1 - Spacing between bytes;

2 - Spacing between bits;

3 - Wide pulse - logical “1”;

4 - Narrow pulse - logical “0”.
</figcaption>
</figure>
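The pulse scheme shown in the figure above — a wide pulse for a logical 1, a narrow pulse for a logical 0, with spacing between bits and bytes — can be modeled in a few lines of C. This is purely an illustration: the pulse widths and function names below are invented for the sketch, not CHDK’s actual dump code.

```c
#include <stdint.h>

/* Illustrative model of the LED dump encoding: a wide pulse is a
   logical 1, a narrow pulse a logical 0. The unit widths are
   hypothetical; the real dumper's timings depended on the camera
   and the receiving hardware. */
enum { NARROW = 1, WIDE = 3 };

/* Encode one byte, most significant bit first, as 8 pulse widths. */
void encode_byte(uint8_t b, int pulses[8])
{
    for (int i = 0; i < 8; i++)
        pulses[i] = (b & (0x80u >> i)) ? WIDE : NARROW;
}

/* The receiver's job in reverse: any pulse wider than the midpoint
   between NARROW and WIDE is read as a 1. */
uint8_t decode_byte(const int pulses[8])
{
    uint8_t b = 0;
    for (int i = 0; i < 8; i++)
        if (pulses[i] * 2 > NARROW + WIDE)
            b |= 0x80u >> i;
    return b;
}
```

Dumping megabytes of firmware one blinked bit at a time is slow, which is part of why the Canon Basic method mentioned above became the primary way of dumping firmware.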

<h4 id="canon-dslr-s-also-allow-this-same-kind-of-hack-why-did-vitalyb-start-with-point-and-shoot-cameras-">Canon DSLR’s also allow this same kind of hack. Why did <em>VitalyB</em> start with point-and-shoot cameras?<a href="#canon-dslr-s-also-allow-this-same-kind-of-hack-why-did-vitalyb-start-with-point-and-shoot-cameras-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: I’m guessing, but probably he started there because he happened to own a PowerShot and not a DSLR. Or maybe it was just the first device that he was able to find a backdoor into.</p>
<p><strong>reyalp</strong>: While Canon DSLRs and P&amp;S use the same basic CPU and operating systems, the rest of the code is very different.</p>
<p>Running custom code on Canon DSLRs uses different mechanisms which weren’t figured out until much later.</p>
<p>My impression is <em>VitalyB</em> started on the camera he had (PowerShot A610?).</p>
<p>The lower cost of P&amp;S also makes them more attractive to experiment with, and before the rise of smartphones, P&amp;S were much more common than DSLRs so there was a better chance of interested developers having them.</p>
<h4 id="are-there-any-ties-between-chdk-and-magic-lantern-a-chdk-equivalent-for-canon-dslrs-">Are there any ties between CHDK and Magic Lantern (a CHDK equivalent for Canon DSLRs)?<a href="#are-there-any-ties-between-chdk-and-magic-lantern-a-chdk-equivalent-for-canon-dslrs-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There are several people who participate in both projects and some discoveries are occasionally useful to both. But there is no coordination beyond that.</p>
<p><strong>reyalp</strong>: We do share information and occasionally code, but as mentioned above, the Canon firmware up to now has been quite different. Some of the initial DSLR research took place on the CHDK forum.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Peter.Laudanski.Der.Dortmunder.Norden_01.jpg" width='2048' height='1536' alt='Der Dortmunder Norden by Peter Laudanski'>
<figcaption>
Kite aerial photograph captured by a Canon PowerShot G7X, powered by a CHDK script, by <a href='https://www.flickr.com/photos/56388614@N05/albums' target='_blank'>Peter Laudanski</a>, licensed under CC BY-NC 2.0. The script was written by <em>waterwingz</em> and is better described <a href='http://chdk.wikia.com/wiki/KAP_UAV_Exposure_Control_Script' target='_blank'>here</a>.</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Peter.Laudanski.Kite.Mount.jpg" alt='Kite Mount, by Peter Laudanski'>
<figcaption>
Kite mount showing, at top left, a box containing battery and a transmitter for the monitor at bottom right, that stays on the ground. By <a href='https://www.flickr.com/photos/56388614@N05/albums' target='_blank'>Peter Laudanski</a>, licensed under CC BY-NC 2.0. The script was written by <em>waterwingz</em> and is better described <a href='http://chdk.wikia.com/wiki/KAP_UAV_Exposure_Control_Script' target='_blank'>here</a>.</figcaption>
</figure>

<h4 id="chdk-was-born-as-hdk-or-hack-development-kit-and-only-later-the-c-was-added-regarding-the-idea-behind-the-name-what-exactly-does-it-mean-to-say-that-chdk-is-not-a-simple-firmware-add-on-but-a-development-kit-does-it-have-to-do-with-the-capability-of-loading-and-executing-custom-user-scripts-">CHDK was born as HDK, or Hack Development Kit, and only later the “C” was added. Regarding the idea behind the name, what exactly does it mean to say that CHDK is not a simple firmware add-on, but a development kit? Does it have to do with the capability of loading and executing custom user scripts?<a href="#chdk-was-born-as-hdk-or-hack-development-kit-and-only-later-the-c-was-added-regarding-the-idea-behind-the-name-what-exactly-does-it-mean-to-say-that-chdk-is-not-a-simple-firmware-add-on-but-a-development-kit-does-it-have-to-do-with-the-capability-of-loading-and-executing-custom-user-scripts-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: Way before my time again - I’d only be speculating on the name’s origin. I guess the kit designation means you get the source code to do development with. I don’t think it has anything to do with user scripting capabilities.</p>
<h4 id="is-this-kind-of-hack-only-possible-on-canon-cameras-why-">Is this kind of hack only possible on Canon cameras? Why?<a href="#is-this-kind-of-hack-only-possible-on-canon-cameras-why-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: The exact details and mechanism will only work on Canon PowerShot cameras. They all build on how Canon supports firmware upgrades (even though CHDK does not actually modify any of the camera’s firmware).</p>
<p>To do something similar on a different brand of camera, you’d need to find a way to exploit any firmware update method they might provide.  If there is no such mechanism, you’d need to get lucky and find some other vector.</p>
<p><strong>reyalp</strong>: As <em>waterwingz</em> says, the specifics only apply to these cameras, but in general, most embedded devices are hackable with enough effort.</p>
<p>Manufacturers put varying levels of effort into preventing it, but the success of CHDK (and later Magic Lantern) involves a lot of stuff that just lined up by chance.</p>
<p>Another important factor in these projects is that reverse engineering is additive: the more you build and understand, the easier it is to keep up with upstream changes in new models. It’s also easier for people to add useful features, which gets more people involved and keeps the whole thing going.</p>
<p>Getting to that critical point on an entirely new system requires a lot of effort and/or luck.</p>
<p>Some of the lucky things that lined up to make CHDK take off were:</p>
<ul>
<li>The cameras ran VxWorks on the ARM946E-S core, both of which had significant public documentation. Canon later switched to their proprietary DryOS operating system, but by that point CHDK had enough built up knowledge to carry on.</li>
<li>Canon left a lot of diagnostic stuff in the code and didn’t put a lot of effort into stopping unauthorized code from running.</li>
<li>There were a lot of PowerShots in circulation and they were affordable, which provided more chances of a developer with the reverse engineering skills having one.</li>
<li>Canon didn’t make any effort to stop the project.</li>
</ul>
<h4 id="can-authorized-warranty-repair-shops-refuse-to-service-cameras-because-of-chdk-does-chdk-use-leave-any-traces-">Can authorized warranty repair shops refuse to service cameras because of CHDK? Does CHDK use leave any traces?<a href="#can-authorized-warranty-repair-shops-refuse-to-service-cameras-because-of-chdk-does-chdk-use-leave-any-traces-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: So far it has not been a problem as CHDK runs in RAM - it makes no permanent changes to the camera. When you turn the camera off, it disappears and you have to reload it the next time you use it.</p>
<p>And while Canon has made no official statements about CHDK one way or the other, there is an email somewhere from someone in Canon tech service stating that as long as CHDK did not modify the camera in any way, there was no warranty issue.</p>
<p>If you remove the SD card containing CHDK prior to sending your camera for service, there is really no way of anyone knowing you’ve used CHDK.</p>
<p><strong>reyalp</strong>: CHDK doesn’t normally leave obvious traces, but if the camera crashes, traces of CHDK can appear in an internal crash log the Canon firmware stores in onboard flash memory.</p>
<p>It’s certainly possible that other traces could be present, the cameras have a lot of sub-components that could store their own diagnostic information.</p>
<p>I don’t recall any cases of anyone reporting having warranty service rejected for this, but that doesn’t mean it couldn’t happen.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/blackhole.Jupiter.jpg" width='430' height='310' alt='Jupiter by blackhole'>
<figcaption>
Jupiter captured by a Canon PowerShot A590IS, powered by CHDK, aligned and stacked with Registax. The image was captured afocally on a 114/900 Newtonian telescope. By <a href='http://astrofoto.pondi.hr/' target='_blank'>blackhole</a>, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="could-you-give-a-brief-explanation-of-how-chdk-is-designed-do-you-have-any-kind-of-diagram-that-could-illustrate-it-">Could you give a brief explanation of how CHDK is designed? Do you have any kind of diagram that could illustrate it?<a href="#could-you-give-a-brief-explanation-of-how-chdk-is-designed-do-you-have-any-kind-of-diagram-that-could-illustrate-it-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: CHDK is a volunteer effort and most volunteers would rather code than document in detail.</p>
<p>But you can learn a lot reading the <a href='https://chdk.wikia.com/wiki/For_Developers' target='_blank'>For Developers</a> section of the CHDK Wikia.</p>
<p>A short description is that CHDK loads by hijacking the camera’s firmware update process and then intercepts some of the camera’s RTOS tasks and replaces them with its own tasks. The CHDK tasks typically replicate the functionality of the original camera tasks but add features and functionality not included in the original Canon code.</p>
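The task-interception idea described above can be sketched in C. Everything here — the function and task names, the notion of a single hook on task creation — is a simplification invented for illustration, not CHDK’s actual code:

```c
/* Conceptual sketch of CHDK's approach: boot code hooks the point
   where the RTOS creates tasks and substitutes a CHDK task for
   selected Canon tasks. All names here are hypothetical. */
#include <string.h>

typedef void (*task_entry)(void);

void canon_capture_task(void) { /* stand-in for Canon's capture task */ }

void chdk_capture_task(void)
{
    /* ...replicate what canon_capture_task does, then layer on RAW
       saving, bracketing, and other CHDK features... */
}

/* Called (in this sketch) wherever the firmware would create a task:
   return a replacement entry point for tasks CHDK wants to take over,
   and leave everything else untouched. */
task_entry hook_task_create(const char *name, task_entry original)
{
    if (strcmp(name, "CaptSeqTask") == 0)  /* hypothetical task name */
        return chdk_capture_task;
    return original;
}
```

The important property is the last branch: tasks CHDK doesn’t care about run exactly as Canon shipped them, which is why CHDK can coexist with the original firmware without modifying it.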
<h4 id="in-what-language-is-chdk-built-">In what language is CHDK built?<a href="#in-what-language-is-chdk-built-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: A combination of ARM assembler and C.</p>
<p><strong>philmoz</strong>: I would add Lua to that - a lot of useful features are now in the scripts. <em>waterwingz</em> has built some impressive functionality with his scripts.</p>
<p>I suppose we should also include uBasic as we still include some testing scripts written in it. uBasic is very primitive compared to Lua, so Lua scripts are preferred.</p>
<p>Finally there is also the Canon Basic built into the firmware - we use this to do firmware dumping.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Garry.George.1_02.jpg" width='5472' height='3648' alt='Winchester Cathedral by Garry George'>
<figcaption>
Winchester Cathedral captured by a Canon PowerShot G7X, powered by a CHDK script, by Garry George, licensed under CC BY-NC 2.0. <a href='http://chdk.wikia.com/wiki/Landscape_Focus_Bracketing_:_perfect_near_to_far_focus_brackets' target='_blank'>The script</a> was written by Garry himself.
</figcaption>
</figure>

<h4 id="what-were-some-of-the-difficult-issues-in-making-chdk-easier-to-port-to-new-cameras-">What were some of the difficult issues in making CHDK easier to port to new cameras?<a href="#what-were-some-of-the-difficult-issues-in-making-chdk-easier-to-port-to-new-cameras-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: Ummm… who said it was easy?</p>
<p>But seriously, most of CHDK is based on guesses about how the original Canon hardware and firmware works. So much of the coding was done on a “try it and see” basis. </p>
<p>What makes it more difficult is changes to the firmware as Canon changes RTOSes or generations of DIGIC processors.</p>
<p><strong>philmoz</strong>: The Canon firmware remained pretty stable for a while. It continued to evolve but there weren’t any real upheavals until Digic 6. This made it easier to improve the tools, and take some of the guesswork out of ports.</p>
<p>We have a “<em>sig finder</em>” tool that analyses a firmware dump and tries to find the things needed for a port. When I first started, this was pretty primitive and a lot of things had to be done manually. I spent some time improving this tool, and I think that made porting a bit quicker until Digic 6.</p>
<p><em>waterwingz</em> also created a GUI tool for disassembling the firmware in a way that could be used in a port - I added a scripting language to this to automate the generation of some of the files needed for a port, which I think helped with new ports.</p>
<p>With Digic 6 the architecture changed a lot and things slowed down quite a bit. A lot of reverse engineering on the new cameras has been done by <em>srsa_4c</em>, <em>reyalp</em>, <em>ant</em>, and others so things are getting better again.</p>
<p><strong>reyalp</strong>: One of the things that makes it difficult is how many camera models there are. The official CHDK source supports over 150 distinct models (many with multiple firmware versions that each require distinct ports), spanning Canon releases from 2004 through 2015.</p>
<p>Essentially the same CHDK code runs on all of them, so if a new model does something different, you have to figure out how to accommodate it without breaking the existing cameras.</p>
<p>On top of that, the developers don’t own most of the cameras, so testing is difficult.</p>
<p>As <em>Philmoz</em> mentioned, with Digic 6, Canon moved to an ARMv7 architecture processor and a new display system with a TAKUMI GPU, which took a lot of work to support. <em>srsa_4c</em> did much of the initial reverse engineering work, while I took the concepts from <em>Philmoz</em>’s “<em>sig finder</em>” and implemented them in a new tool based on the open source Capstone disassembly library.</p>
<p><strong>nafraf</strong>: The scripting language developed by <em>Philmoz</em> helped a lot with porting cameras up to Digic 5+. Using the <em>code_gen</em> tool it was possible to port new models and improve the existing ports. On release 1.3, for example, <em>code_gen</em> was the key tool for adding exposures longer than 60 seconds on all cameras.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Andrew.Hazelden.CHDK.Peggys.Cove.jpg" alt='Peggys Cove by Andrew Hazelden'>
<figcaption>
This photo of the Peggy’s Cove lighthouse was taken using CHDK with a Canon PowerShot SD780IS camera mounted on a Multiplex Easystar model airplane, by <a href='https://web.archive.org/web/20120408002225/http://www.andrewhazelden.com/blog/' target='_blank'>Andrew Hazelden</a>, licensed under CC BY-NC 2.0. The script was written by Andrew himself and is better described <a href='https://web.archive.org/web/20120329192621/http://www.andrewhazelden.com/blog/2010/09/ubasic-countdown-intervalometer-script-for-canon-PowerShots-running-chdk/' target='_blank'>here</a>.
</figcaption>
</figure>

<h4 id="in-which-os-do-these-tools-run-">In which OS do these tools run?<a href="#in-which-os-do-these-tools-run-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: CHDK is built using the <em>gcc</em> compiler so I guess the tools run on anything that supports that compiler - Windows and Linux for sure.</p>
<p>I do all my work under Linux although I have a laptop somewhere that runs the Windows tools.</p>
<p>The autobuild server that rebuilds CHDK after each update and provides current downloads runs under Linux.</p>
<p>And there are quite a few other tools that people have created, some of which are Windows only (or using Wine under Linux) and some of which are Java based and will run on Windows, Linux, or MacOS.</p>
<p><strong>philmoz</strong>: I use MacOS for CHDK development.</p>
<p>I also have a Linux VM I use for testing batch builds of the entire set of supported cameras, to make sure big changes don’t break the autobuild server.</p>
<p><strong>reyalp</strong>: I use Windows on my primary development system, but all the core CHDK tools and build process have supported Linux for as long as I’ve been involved.</p>
<p>My normal CHDK development environment is MSYS shells and <em>gvim</em>.</p>
<p>I also use Linux in VMs and a Raspberry Pi for some things.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Garry.George.2.jpg" width='5472' height='3648' alt='Winchester Cathedral by Garry George'>
<figcaption>
Winchester Cathedral captured by a Canon PowerShot G7X, powered by a CHDK script, by Garry George, licensed under CC BY-NC 2.0. <a href='http://chdk.wikia.com/wiki/Landscape_Focus_Bracketing_:_perfect_near_to_far_focus_brackets' target='_blank'>The script</a> was written by Garry himself.
</figcaption>
</figure>

<h4 id="is-there-any-camera-emulator-that-allows-to-test-core-code-before-loading-it-into-the-camera-">Is there any camera emulator that allows to test core code before loading it into the camera?<a href="#is-there-any-camera-emulator-that-allows-to-test-core-code-before-loading-it-into-the-camera-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There are a couple of clever GUI emulators for testing CHDK uBASIC and Lua scripts.</p>
<p>But I don’t believe anyone has really succeeded in making a <a href='https://chdk.wikia.com/wiki/GPL_Qemu' target='_blank'>QEMU emulator</a> for core code development. </p>
<p>All testing is done on actual cameras.</p>
<p>And to date, I don’t believe anyone in the dev community has bricked a camera, which says something about the stability of the process!</p>
<p><strong>philmoz</strong>: Magic Lantern uses QEMU to run their code in an emulator.</p>
<p>In theory CHDK could do this as well, but to date no-one has invested the time to create the hardware simulation bits needed.</p>
<p>Debugging CHDK is old school - blinking LEDs, printing messages (if you have the display working), writing log files, and lots of trial and error.</p>
<p><strong>reyalp</strong>: Not emulation, but I use <em>chdkptp</em> a lot for interactive testing. Being able to dump bits of memory or call functions interactively from a PC console is very useful.</p>
<h4 id="what-s-the-easiest-way-for-someone-to-get-involved-with-chdk-">What’s the easiest way for someone to get involved with CHDK?<a href="#what-s-the-easiest-way-for-someone-to-get-involved-with-chdk-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: <a href='https://chdk.fandom.com/wiki/Downloads' target='_blank'>Download it</a> and use it on a PowerShot.</p>
<p>Learn what it does and how to run scripts.</p>
<p>Then write some scripts on your own, or modify some existing ones.</p>
<p>Finally, do a port for an unsupported camera - pretty much every CHDK dev started off porting and then got hooked on doing more.</p>
<p><strong>blackhole</strong>: The easiest way is to use CHDK for something creative. It’s nice to see when users show results that are the product of using CHDK. I think it’s the biggest reward for developers when they see that their work is well-used.</p>
<p><strong>reyalp</strong>: <a href='https://chdk.setepontos.com/' target='_blank'>The forum</a> is the best place to get involved with the community.</p>
<p>To get involved with development, it really depends on your interests. If there’s something you want to add, either dive into the code or ask for suggestions on where to start.</p>
<p>Ports of additional cameras are always welcome too, and doing one provides a good overview of how CHDK works.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/Peter.Laudanski.Heuernte.in.Holthausen_01.jpg" width='2048' height='1536' alt='Heuernte in Holthausen by Peter Laudanski'>
<figcaption>
Kite aerial photograph captured by a Canon PowerShot G7X, powered by a CHDK script, by <a href='https://www.flickr.com/photos/56388614@N05/albums' target='_blank'>Peter Laudanski</a>, licensed under CC BY-NC 2.0. The script was written by <em>waterwingz</em> and is better described <a href='http://chdk.wikia.com/wiki/KAP_UAV_Exposure_Control_Script' target='_blank'>here</a>.</figcaption>
</figure>

<h4 id="what-are-some-tasks-non-programmers-can-do-to-help-the-project-">What are some tasks non-programmers can do to help the project?<a href="#what-are-some-tasks-non-programmers-can-do-to-help-the-project-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There has been a lot of work on the <a href='https://chdk.fandom.com/wiki/CHDK' target='_blank'>CHDK wiki</a> over the years, but there is still a ton to do.</p>
<p>For example, one CHDK user finds time each month to simply correct spelling and the worst of the grammar mistakes on the more popular pages and the CHDK User Manual.</p>
<p><strong>reyalp</strong>: If you do something interesting with CHDK, share it in the forum. A lot of interesting projects start as riffs on something someone else explored years earlier.</p>
<p>Documentation always needs help, but for CHDK, a lot of it really requires careful experimentation or knowledge of the source to do well.</p>
<p><strong>nafraf</strong>: If you are a CHDK user and find a bug, or a missing function on the port of your camera, please report the bug to the forum and help to test new versions. </p>
<p>Developers don’t have access to all models, so testing and feedback from users are necessary to help the project.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/reyalp-moon-saturn-stack-1-median-c2.jpg" width='2400' height='2200' alt='Moon and Saturn, by reyalp'>
<figcaption>
Moon and Saturn, 30x1/24 seconds aligned stacked with gmic and gimp. CHDK script <a href='https://chdk.fandom.com/wiki/Lua/Scripts:_Fixed_Exposure_Intervalometer' target='_blank'>fixedint.lua</a> used to capture frames, by reyalp, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="how-healthy-is-the-production-of-user-scripts-is-it-easy-for-a-non-programmer-to-write-a-script-in-what-languages-">How healthy is the production of user scripts? Is it easy for a non-programmer to write a script? In what languages?<a href="#how-healthy-is-the-production-of-user-scripts-is-it-easy-for-a-non-programmer-to-write-a-script-in-what-languages-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: The huge improvements in mobile phone cameras have really impacted the market for all but the highest end or largest zoom P&amp;S cameras.</p>
<p>Having said that, a core of serious photographers still work on what interests them and depend on CHDK to help capture their artistic vision.</p>
<p>As for non-programmers and script writing: uBASIC is about as simple as a computer language gets, and there are lots of example scripts to study.</p>
<p>Lua provides a much richer programming environment, albeit with a bit of a learning curve.</p>
<p><strong>reyalp</strong>: Easy… depends on the user.</p>
<p>Because CHDK is a reverse engineered hack on top of an undocumented system, many behaviors are not well specified or understood.</p>
<p><em>waterwingz</em> improved things a lot by creating a comprehensive reference of CHDK script functions, but developing non-trivial scripts still requires significant effort and a willingness to experiment.</p>
<p>I’ve been using CHDK for 10+ years, and still find myself grepping the CHDK source and making test cases to figure out what functions actually do.</p>
<p>All that said, I don’t think learning to write modest CHDK scripts is particularly harder than starting out with JavaScript or batch files or that sort of thing.</p>
<p>We should note that CHDK uBASIC is based on Adam Dunkels’ code, not the UBASIC written by Yuji Kida for mathematical computing.</p>
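To give a flavor of what a CHDK script looks like, here is a minimal Lua intervalometer in the spirit of intervalometer scripts such as <em>fixedint.lua</em>. <code>shoot()</code> and <code>sleep()</code> are standard CHDK script calls, and the <code>@title</code>/<code>@param</code> header is CHDK’s convention for exposing parameters in the script menu, but treat this as an untested sketch rather than a ready-made script — it only runs on a camera with CHDK loaded:

```lua
--[[
@title Simple intervalometer
@param n Number of shots
@default n 10
@param s Interval (seconds)
@default s 5
]]

for i = 1, n do
    shoot()               -- take one picture with current settings
    if i < n then
        sleep(s * 1000)   -- CHDK's sleep() takes milliseconds
    end
end
```

Loaded from the SD card and started from the CHDK script menu, this takes <em>n</em> shots <em>s</em> seconds apart; production intervalometer scripts add exposure control, error handling, and timing compensation on top of this skeleton.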
<h4 id="how-many-developers-work-on-chdk-how-does-it-work-">How many developers work on CHDK? How does it work?<a href="#how-many-developers-work-on-chdk-how-does-it-work-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There have been hundreds of contributors to CHDK over the years.</p>
<p>Currently there is an active core of two or three people doing original work with low level firmware stuff, a couple of people working more on the user experience, several people generating custom scripts for unique photographic opportunities, and a core of maybe ten CHDK experts not doing much coding these days but continuing to provide support to the community.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/reyalp-m31-stack-3000_1.jpg" width='1500' height='1500' alt='Andromeda, by reyalp'>
<figcaption>
Andromeda (M31) captured by a Canon PowerShot G7X with the CHDK intervalometer script <a href='https://chdk.fandom.com/wiki/Lua/Scripts:_Fixed_Exposure_Intervalometer' target='_blank'>fixedint.lua</a>, by reyalp, licensed under CC BY-NC 2.0. It’s worth mentioning that this image is the result of three thousand five-second exposure frames, captured on different days, for a total exposure time of about 4.16 hours.
</figcaption>
</figure>

<h4 id="what-uses-are-people-making-of-chdk-is-there-any-that-should-be-highlighted-">What uses are people making of CHDK? Is there any that should be highlighted?<a href="#what-uses-are-people-making-of-chdk-is-there-any-that-should-be-highlighted-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There is a nice list on the main page of the <a href='https://chdk.wikia.com/wiki/CHDK' target='_blank'>CHDK Wiki</a> - unique things like motion triggering, scripting, RAW/DNG, bracketing, and full manual exposure control.</p>
<p>Originally, there was a lot of interest in just getting RAW files from an inexpensive P&amp;S camera.</p>
<p>More recently the focus has been on making good multi-camera rigs with full centralized control using CHDK’s PTP capability. Everything from book scanners and “bullet time” rigs to full 3D capture for building small replicas of people. And of course the ongoing interest in time lapse videos and kite and drone photography work.</p>
<p><strong>reyalp</strong>: I get a kick out of how many different things show up searching google scholar for <a href='https://scholar.google.com/scholar?q=chdk+hack' target='_blank'>“CHDK hack”</a></p>
<h4 id="currently-what-is-the-main-development-effort-underway-new-functionality-porting-chdk-to-new-camera-models-">Currently, what is the main development effort underway? New functionality? Porting CHDK to new camera models?<a href="#currently-what-is-the-main-development-effort-underway-new-functionality-porting-chdk-to-new-camera-models-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: A lot of the current development is focused on very detailed features that interest the core developers. Not really anything that will revolutionize the CHDK user experience right away unfortunately.</p>
<p>But there are also potentially some interesting <a href='https://chdk.setepontos.com/index.php?topic=13293.0' target='_blank'>new things</a> in the wings if the devs working on them can ever get them finished.</p>
<p><strong>reyalp</strong>: There aren’t really any major features undergoing significant development right now.</p>
<p>There are some ideas and experimental stuff being kicked around, like the GUI concept and some work on capturing raw outside the normal shooting process, but what eventually gets added depends on developer time and interest.</p>
<p>I’m trying to wrap up a few things to release CHDK 1.5 before we start major projects in the official development branch.</p>
<h4 id="so-there-s-a-new-chdk-gui-underway-and-it-seems-very-interesting-and-more-user-oriented-once-finished-how-will-it-be-released-to-the-many-different-camera-models-that-already-run-chdk-">So there’s a new CHDK GUI underway, and it seems very interesting and more user oriented. Once finished, how will it be released to the many different camera models that already run CHDK?<a href="#so-there-s-a-new-chdk-gui-underway-and-it-seems-very-interesting-and-more-user-oriented-once-finished-how-will-it-be-released-to-the-many-different-camera-models-that-already-run-chdk-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: Oh oh - you noticed that?  Did you also see the comment about the dev working on it not being too good at getting things finished? ;)</p>
<p>For what it’s worth, there has been quite a bit of thought about keeping it generic enough so that it will run on all CHDK-capable cameras. That’s mostly about screen resolution issues, but there will probably be other challenges. Touchscreen cameras like the PowerShot N come to mind.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/blackhole.Moon.jpg" width='639' height='426' alt='The Moon, by blackhole'>
<figcaption>
The Moon captured by a Canon PowerShot A590IS, powered by CHDK, aligned and stacked with Registax. An afocal method was used for capturing on the Newtonian 114/900 telescope. By <a href='http://astrofoto.pondi.hr/' target='_blank'>blackhole</a>, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="what-is-the-future-of-chdk-considering-the-evolution-of-technology-how-long-do-you-think-point-and-shoot-cameras-will-stay-on-the-market-given-the-rise-of-smartphones-do-you-think-the-latter-will-replace-the-former-">What is the future of CHDK, considering the evolution of technology? How long do you think point-and-shoot cameras will stay on the market, given the rise of smartphones? Do you think the latter will replace the former?<a href="#what-is-the-future-of-chdk-considering-the-evolution-of-technology-how-long-do-you-think-point-and-shoot-cameras-will-stay-on-the-market-given-the-rise-of-smartphones-do-you-think-the-latter-will-replace-the-former-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: As I mentioned earlier, smart phone cameras continue to improve and that has impacted the low end PowerShots that CHDK does so much for.</p>
<p>CHDK will continue as an interesting project as long as people enjoy using it and creating new things with it.</p>
<p><strong>blackhole</strong>: P&amp;S cameras with large zooms will likely survive the competition from smartphones. Smartphones will not be competitive with those cameras for a long time, so CHDK is likely to have a future in this area.</p>
<p><strong>philmoz</strong>: I think there will also be demand for the higher end P&amp;S cameras with larger sensors, although I don’t think the market will be huge.</p>
<p>The number of people with these cameras interested in CHDK is probably going to be pretty small.</p>
<p>Canon’s EOS-M mirrorless cameras can also run CHDK so there is some interest there.</p>
<p><strong>reyalp</strong>: Low end, mass market P&amp;S are clearly on the way out. I agree with <em>blackhole</em> and <em>philmoz</em> that higher end stuff will be around for a while to come, but the possibility of running CHDK on future cameras is always uncertain.</p>
<p>As I mentioned earlier, Canon DSLRs and P&amp;S cameras have been based on different codebases, which are different enough that it doesn’t make sense to run the same hack on both.</p>
<p>CHDK supports the EOS M3 and M10 because they are built on the P&amp;S codebase, while Magic Lantern does not support them.</p>
<p>There are signs Canon is moving to a unified codebase (likely motivated by the same market changes) in Digic 8 cameras, which may preclude CHDK as we currently know it.</p>
<p>However, with millions of CHDK capable P&amp;S in circulation, there will still be potential uses for a long time to come.</p>
<p>Separate from smartphones, the rise of things like the Raspberry Pi, dedicated UAV and action cameras, etc., has reduced the cases where a hacked P&amp;S is a clear win over other options.</p>
<p>In 2008, if you wanted a programmable, multi-megapixel camera with decent optics your choices were very limited and mostly expensive.</p>
<p>In 2019, you have a lot of options other than CHDK, but at the same time, a lot of these things can work well with CHDK too.</p>
<p>I think the collapse of P&amp;S has also affected the pool of potential CHDK contributors: in 2008, a developer with a casual interest in photography would have a P&amp;S, while today they would be more likely to have a smartphone. Someone who wants to tinker with camera software also has a lot more choices.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/keoeeit.Drop1.jpg" alt='Drop, by keoeeit'>
<figcaption>
Shutter/flash speed test. <a href='http://chdk.wikia.com/wiki/Samples:_High-Speed_Shutter_%26_Flash-Sync' target='_blank'>Results</a> indicate an estimated shutter speed of 1/10,000 and flash firing speed of 1/60,000. By keoeeit, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="could-chdk-benefit-from-the-current-boom-in-single-board-computers-and-single-board-microcontrollers-like-arduino-raspberry-pi-esp32-beaglebone-etc-could-those-boards-add-even-more-functionality-to-chdk-how-">Could CHDK benefit from the current boom in single board computers and single board microcontrollers, like Arduino, Raspberry Pi, Esp32, Beaglebone, etc? Could those boards add even more functionality to CHDK? How?<a href="#could-chdk-benefit-from-the-current-boom-in-single-board-computers-and-single-board-microcontrollers-like-arduino-raspberry-pi-esp32-beaglebone-etc-could-those-boards-add-even-more-functionality-to-chdk-how-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: Actually, there are quite a few successful projects out there using those little computers to control one or more Canon PowerShots running CHDK.</p>
<p>Basically, anything that supports the necessary USB functionality to implement the PTP protocol and CHDK’s extensions to it will work. Applications include book scanners, remote timelapse capture, photobooths, and multiple-camera 3D image scanning.</p>
<h4 id="here-is-a-href-http-arduino-projects4u-com-chdk-target-_blank-an-example-of-arduino-and-chdk-usage-a-with-a-nice-ptp-gui-">Here is <a href='http://arduino-projects4u.com/chdk/' target='_blank'>an example of Arduino and CHDK usage</a>, with a nice PTP GUI.<a href="#here-is-a-href-http-arduino-projects4u-com-chdk-target-_blank-an-example-of-arduino-and-chdk-usage-a-with-a-nice-ptp-gui-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>reyalp</strong>: The webcams on <a href='http://escursionisticivatesi.it/webcam/' target='_blank'>this site</a> are based on Raspberry Pis using <em>chdkptp</em> to control CHDK cameras (<em>chdkptp</em> is a tool which I maintain that allows controlling CHDK cameras over USB from Linux and Windows).</p>
<div>
    <div class='fluid-vid'>
        <iframe src="https://www.youtube-nocookie.com/embed/_cqGBN9bGw0" frameborder="0" allowfullscreen></iframe>
    </div>
</div>

<p>One of the first tests with a full rig (72 cameras) shooting at the same time. By <em>nafraf</em>, licensed under CC BY-NC 2.0.</p>
<div>
    <div class='fluid-vid'>
        <iframe src="https://www.youtube-nocookie.com/embed/2egiBmt321k" frameborder="0" allowfullscreen></iframe>
    </div>
</div>

<p>A simple test to show the detail of a segment of the rig, how the cameras were mounted and their response after sending the turn off command using <em>chdkptp</em>. By <em>nafraf</em>, licensed under CC BY-NC 2.0.</p>
<h4 id="do-you-developers-have-time-to-play-with-chdk-what-are-your-preferred-use-">Do you developers have time to play with CHDK? What are your preferred uses?<a href="#do-you-developers-have-time-to-play-with-chdk-what-are-your-preferred-use-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: There are a ton of features in CHDK and I’ve had fun playing with most of them.  But for me, it’s mostly about Lua scripting when I’m actually using CHDK.</p>
<p><strong>blackhole</strong>: Unfortunately my real life does not allow me to play with CHDK as much as I want.</p>
<p>My favorite use is when popularizing astronomy among children. It is a priceless experience when you see the glow in their eyes when they see the image of the planet they have taken themselves. For me, this is the highest value of CHDK.</p>
<p><strong>reyalp</strong>: I use CHDK raw for general shooting, and scripts to do timelapse.</p>
<p>I’ve used motion detection for lightning and fireworks.</p>
<p>I also do some lo-fi astrophotography using CHDK scripts to take lots of exposures to guide and stack in software.</p>
<p>But what I do most is take test shots of my desk while working on the code ;)</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/reyalp-desk.jpg" width='300' height='225' alt="reyalp's desk by reyalp">
<figcaption>
reyalp’s desk, by reyalp, licensed under CC BY-NC 2.0.
</figcaption>
</figure>

<h4 id="when-did-you-joined-chdk-as-a-developer-why-what-is-your-background-what-is-your-role-">When did you join CHDK as a developer? Why? What is your background? What is your role?<a href="#when-did-you-joined-chdk-as-a-developer-why-what-is-your-background-what-is-your-role-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: July 2010 according to the records on the CHDK forum.  But it seems like it was only nine years ago.</p>
<p>I got started because it combined two of my hobbies - computers and cameras.</p>
<p>Since then I’ve ported several cameras, contributed some original code, helped fix some bugs, coded many scripts, written a lot of documentation on the wiki, and helped a few newbies on the <a href='https://chdk.setepontos.com/index.php' target='_blank'>CHDK forum</a>.</p>
<p><strong>philmoz</strong>: I started with the G12 port in November 2010.</p>
<p>I’ve been a software developer for nearly 40 years and like <em>waterwingz</em>, photography is a hobby. CHDK looked like fun and I wanted something to keep me programming - my day job was more management than development.</p>
<p>I was pretty active until 2016, when real-life got in the way.</p>
<p>I now do mobile app development full time, so don’t spend much time on CHDK coding these days.</p>
<p><strong>blackhole</strong>: I joined in August 2010.</p>
<p>Prior to that I was just reading the forum as a guest and using CHDK on the old A530 and A590 cameras. At that time I was looking for a better solution for cheap-modified webcams, which were then popular in amateur astronomy. The logical solution was to switch to something cheap with a CCD sensor, so the decision fell on Canon cameras and CHDK.</p>
<p>I became interested in the programming, so I started to collect knowledge on the forum and in the end I made my first port.</p>
<p><strong>reyalp</strong>: Around 2008, I happened to get a Canon A540 and google “firmware hack” or something like that just for kicks. I had a background in C and assembly, and a somewhat neglected interest in photography going back to film days, so it seemed like a fun thing to play with.</p>
<p>For me, CHDK development was a nice change of pace, a throwback to the early PC days where if you want to draw something, you write directly to video memory instead of going through a bunch of APIs.</p>
<p>As people came and went from the project I somehow ended up being the chief cat herder.</p>
<p><strong>nafraf</strong>: I started in June 2012. My first port was A810. I was using CHDK for a multi camera project and it was difficult to find ports of recent cameras.</p>
<div>
    <div class='fluid-vid'>
        <iframe src="https://www.youtube-nocookie.com/embed/dsEw2cKN9KQ" frameborder="0" allowfullscreen></iframe>
    </div>
</div>

<p>With CHDK, the exposure time and ISO values can change in 1/96 EV steps. The first part of this video was made using a standard timer with exposure changes in 1/3 EV steps. The second part was made using the script <a href='https://chdk.setepontos.com/index.php?topic=12165.0' target='_blank'>isoinc.lua</a>, with no post-processing. By <a href='https://www.youtube.com/channel/UCrTH0tHy9OYTVDzWIvXEMlw' target='_blank'>c_joerg</a>, licensed under CC BY-NC 2.0.</p>
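<p>Those 1/96 EV steps come from CHDK’s internal APEX*96 exposure units, where a shutter time <code>t</code> in seconds maps to <code>tv96 = 96 × log2(1/t)</code>. The Lua sketch below illustrates the conversion; it is not part of the CHDK API (the real script interface exposes calls such as <code>set_tv96()</code> that work directly in these units):</p>

```lua
-- Convert a shutter time in seconds to CHDK's APEX*96 scale:
-- tv96 = 96 * log2(1/t). One unit is 1/96 EV; 32 units = 1/3 EV,
-- and 96 units = one full stop.
function tv96(t)
  return math.floor(96 * math.log(1 / t) / math.log(2) + 0.5)
end

-- e.g. tv96(1/1000) is 957; tv96(1/2000) is 1053 (one stop more).
```

Working in integer 1/96 EV units lets scripts like <em>isoinc.lua</em> ramp exposure far more smoothly than the 1/3 EV steps available from the standard camera controls.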
<h4 id="is-there-anything-that-you-would-like-to-add-">Is there anything that you would like to add?<a href="#is-there-anything-that-you-would-like-to-add-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p><strong>waterwingz</strong>: Getting involved in CHDK is a bit of a trap. Once you get in, your free time just disappears. But it can be a lot of fun!</p>
<p><strong>philmoz</strong>: The people who have worked on CHDK over the years are an amazingly talented, fun, and helpful group. I have learned a lot from this project and really appreciate the willingness to help, and assistance I’ve received.</p>
<p><strong>reyalp</strong>: I’d like to thank all the people who have contributed over the years, and Canon for turning a blind eye to it for so long.</p>
<p><strong>blackhole</strong>: CHDK is a very fun and creative project. I invite all photographers and programmers to join us and express their creativity through this project and share their experiences with us. In the end, I would like to thank the entire CHDK community for a pleasant companionship for the last ten years.</p>
<p><strong>nafraf</strong>: Thanks to all people involved with this project. I have learned a lot during these years.</p>
<div>
    <div class='fluid-vid'>
        <iframe src="https://www.youtube-nocookie.com/embed/z6PyjmPYtck" frameborder="0" allowfullscreen></iframe>
    </div>
</div>

<p>This video was created by changing the zoom level between 24mm and 1200mm (35mm equivalent). For each zoom level, two images were taken. The camera was a Canon SX50 with 200 zoom levels, running CHDK. A <a href='https://chdk.setepontos.com/index.php?topic=13403.0' target='_blank'>special script</a> was used. By <a href='https://www.youtube.com/channel/UCrTH0tHy9OYTVDzWIvXEMlw' target='_blank'>c_joerg</a>, licensed under CC BY-NC 2.0.</p>
<h2 id="thank-you-chdk-devs-">Thank You CHDK devs!<a href="#thank-you-chdk-devs-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I want to thank CHDK devs again for taking the time and being patient enough to chat with us, as well as sharing images of their CHDK use!</p>
<p>I also want to thank CHDK users Garry George, Peter Laudanski, Andrew Hazelden, <em>c_joerg</em>, and <em>keoeeit</em> for having kindly shared some images and answered questions about how they shot them!</p>
<p>Finally, I want to thank <em>Pixls</em> members <em><a href='https://discuss.pixls.us/u/paperdigits' target='_blank'>paperdigits</a></em> and <em><a href='https://discuss.pixls.us/u/afre' target='_blank'>afre</a></em> for their invaluable support, without which this interview wouldn’t have been possible.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-the-chdk-developers/chdk_logo.png" width="387" height="387">
</figure>

<p>The CHDK community gathers around <a href='https://chdk.setepontos.com/' target='_blank'>https://chdk.setepontos.com/</a> and all official CHDK documentation can be found on <a href='http://chdk.wikia.com/wiki/CHDK' target='_blank'>http://chdk.wikia.com/wiki/CHDK</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Libre Graphics Meeting 2019]]></title>
            <link>https://pixls.us/blog/2019/01/libre-graphics-meeting-2019/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2019/01/libre-graphics-meeting-2019/</guid>
            <pubDate>Sun, 06 Jan 2019 15:59:40 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2019/01/libre-graphics-meeting-2019/Saarbrucken-Panorama_01.jpg" /><br/>
                <h1>Libre Graphics Meeting 2019</h1> 
                <h2>Let's participate!</h2>  
                <p>It’s that time of year again: <a href="https://libregraphicsmeeting.org/2019/" title="Libre Graphics Meeting 2019">Libre Graphics Meeting 2019</a> is fast approaching!</p>
<p>This year the meeting will be May 29 to June 2 in <a href="https://libregraphicsmeeting.org/2019/travel/" title="LGM Travel">Saarbrücken, Germany</a>.
This is extra exciting because Saarbrücken is centrally located enough that we should have a nice representation from projects and community members.
Members of both <a href="https://www.rawtherapee.com" title="RawTherapee Website">RawTherapee</a> and <a href="https://www.darktable.org" title="darktable Website">darktable</a> live nearby and will be in attendance (along with others from those projects and many others).</p>
<!--more-->
<h2 id="participate-"><a href="#participate-" class="header-link-alt">Participate!</a></h2>
<p>I’m hoping to have a good representation this year, so first and foremost - <em>please</em>, <em>please</em> consider participating by giving a presentation, leading a workshop, or even a quick lightning talk!
The <a href="https://libregraphicsmeeting.org/2019/call-for-participation/" title="LGM 2019 Call for Participation">Call for Participation</a> page is here:</p>
<p><a href="https://libregraphicsmeeting.org/2019/call-for-participation/">https://libregraphicsmeeting.org/2019/call-for-participation/</a></p>
<p>I will make myself available to help in any way I can. If you want a hand with the presentation, design, graphics or whatever please feel free to ping me (also - remember that we try to archive all of our presentations and material <a href="https://github.com/pixlsus/Presentations">in our Github repo</a> so you can grab any of the assets from there as well)!
This is a great opportunity to spread the word about what we’re up to and the many, many awesome projects everyone has created, maintained, and contributed to for Free Software photography.</p>
<p><strong>The deadline for submittal is coming up on January 15<sup>th</sup>!</strong><br>If you think you’d like to present or host a workshop please submit as soon as possible.</p>
<h2 id="cheers-"><a href="#cheers-" class="header-link-alt">Cheers!</a></h2>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/7KtAgAMzaeg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope;" allowfullscreen></iframe>
</div>

<p>I am particularly excited about this meeting because a) I don’t get to attend every year so this is the first time I’m able to make it in a few years, and b) this is a great opportunity to really get a bunch of community members to come together!
Saarbrücken is on the high-speed rail network so it’s readily accessible from many places (now you have fewer excuses to not make it).</p>
<figure>
<img src="https://pixls.us/blog/2019/01/libre-graphics-meeting-2019/LGM London GIMP.jpg">
<figcaption>
GIMP, darktable, and others all getting together in LGM/London!
</figcaption>
</figure>

<p>I love hanging out with y’all.
It’s great to nerd out about photography and catch up.
Sometimes it really helps to be able to speak face-to-face and this is the perfect opportunity to also be exposed to all manner of other Free Software projects (or to expose ourselves to others?).</p>
<p>Besides, how else am I going to capture some fun photos of y’all?</p>
<figure>
<img src="https://pixls.us/blog/2019/01/libre-graphics-meeting-2019/niko.jpg" alt='Nikolaikirche in Leipzig'>
<figcaption>
Notice a tiny houz in the bottom right!
</figcaption>
</figure>


<h2 id="what-s-going-on"><a href="#what-s-going-on" class="header-link-alt">What’s Going On</a></h2>
<p>We have a few things planned for sure at the meeting and you’re going to be really sad if you miss them by not coming!</p>
<ol>
<li>PIXLS.US BoF (Birds of a Feather)<br> This is a special session set aside for a couple of hours for the entire community to get together and chat about what’s going on, what we’d like to do, and what’s coming up.</li>
<li>Photowalk<br> I’ve been doing photowalks every year that I’ve attended, because this is almost the ultimate way for me to spend time with folks (unless we can do a beer drinking photowalk all at the same time…).</li>
<li>PIXLS.US Presentation/Update<br> I’ll (We’ll?  Anyone from the community is more than welcome to help me present on this) present on the community and what we’ve done so far and what we’d like to accomplish moving forward.
 This is our primary way of reporting out to the wider community who we are and what we’re doing.</li>
<li>State of the LGM<br> This is a couple of slides that are included at the beginning of the program giving an overview of the state of the entire libre graphics ecosystem.</li>
</ol>
<p>I really, <em>really</em>, <strong>really</strong> want to be able to add to this list with other presentations (or lightning talks, etc) that the community will give!  I may even submit a couple of presentations to talk not just about the community but maybe about our technical work coordinating the forum and providing services for the projects (as well as invite other related projects to come join us).</p>
<p>We can make this an incredible and memorable meeting with a fantastic opportunity to meet friends in person and have a wonderful time!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Goodbye Google Analytics]]></title>
            <link>https://pixls.us/blog/2018/12/goodbye-google-analytics/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/12/goodbye-google-analytics/</guid>
            <pubDate>Mon, 31 Dec 2018 21:16:40 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/12/goodbye-google-analytics/laffland.jpg" /><br/>
                <h1>Goodbye Google Analytics</h1> 
                <h2>A little less tracking for the new year</h2>  
                <p>Over on my personal website I decided to <a href="https://patdavid.net/2018/05/goodbye-google-analytics/">stop using third party trackers and assets</a> to keep from exposing visitors to unintended tracking.
Third party assets expose a user to being tracked and analyzed by those third (or fourth, or more) parties and honestly this is something the web could use a little (lot) less of.
I loved having stats early on when we started this crazy idea for a community and as I mentioned on my blog post, it’s a Faustian bargain to get stats at the expense of allowing Google to track what all the users of the site are doing.
<strong>No thanks.</strong></p>
<!--more-->
<p>I figure it’s the eve of a new year so why not start it out right and reduce the tracking footprint of the site?</p>
<p>This all started by noticing that some new browser feature strips referer information from requests (thanks @darix) and we were using them to target specific areas of websites that we manage comments for.
It came to my attention when I was reading the release announcement for <a href="https://www.digikam.org/news/2018-12-30-6.0.0-beta3_release_announcement/">digiKam 6.0.0 beta 3</a>.</p>
<p>While fixing that problem, I found that once we fixed the referer requirement problem I was still seeing issues with <a href="https://www.eff.org/privacybadger" title="EFF Privacy Badger Website">Privacy Badger</a> blocking our embed code.
On <a href="https://github.com/EFForg/privacybadger/issues/2257" title="EFF Privacy Badger Issue Tracker">further inspection</a> it boiled down to using Google Analytics on our base domain (pixls.us) and having a cookie set by Google, which then got sent with embed requests from other websites (<a href="https://www.digikam.org" title="digiKam website">digiKam</a> and <a href="https://darktable.org/" title="darktable website">darktable</a>).
This triggered the heuristic blocking by Privacy Badger.</p>
<p>Honestly, we derive very little value from the analytics for the price (<em>privacy</em>) we pay to use it.
Better to simply remove it.</p>
<p>We <em>still</em> do analytics but we own the stack ourselves (thank you so much andabata!).
If you want to block our own analytics the domain is: <code>piwik.pixls.us</code>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[2018 PlayRaw Calendar]]></title>
            <link>https://pixls.us/blog/2018/11/2018-playraw-calendar/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/11/2018-playraw-calendar/</guid>
            <pubDate>Sun, 25 Nov 2018 15:10:06 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/monkey-business-dimitrios.jpg" /><br/>
                <h1>2018 PlayRaw Calendar</h1> 
                <h2>Chris creates a new calendar for the community</h2>  
                <p>Last year I got an amazing surprise in the mail.
It was an <em>awesome</em> calendar of a handpicked selection of results from the year’s <a href="https://discuss.pixls.us/tags/play_raw" title="Play Raw posts on Discuss">PlayRaw</a> images.</p>
<p>Chris (@chris) put together another fantastic calendar for this year (while juggling kids, too) and it’s too nice to not have a post about it!</p>
<figure>
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/playraw-0.jpg" alt='Play Raw Calendar 2019'>
<figcaption>
Yep, that’s the back side.<br><em>Monkey Business</em> by Dimitrios Psychogios (<a href='https://creativecommons.org/licenses/by-sa/4.0/' title='Creative Commons Attribution-ShareAlike'><span class='cc'>cba</span></a>)
</figcaption>
</figure>


<!--more-->
<p>It was a really awesome surprise to receive my calendar last year - and I wish I had planned a little better to be able to grab a photo of the calendar hanging in my office (it’s my work desk calendar - it never fails to remind me that there are more fun things in life than work - also that I need to up my processing game… ).</p>
<p>This year Chris has done it again by assembling a wonderfully curated collection of images and edits from the various Play Raws that were posted this year.
I’ve plagiarized <a href="https://discuss.pixls.us/t/playraw-calendar-2019/">his post on the forums</a> to put together this post and get some more publicity for his time and effort!</p>
<p>If you get a moment, please thank Chris for his work putting this together!</p>
<p>You can download the PDF: <a href="https://pixls-discuss.s3.dualstack.us-east-1.amazonaws.com/original/3X/0/4/04c0007ef0f0c315037c7bafb37947bb5d5a6553.pdf">2018 Play Raw Calendar</a></p>
<p>Here are the images he chose for the calendar and the edits he included:</p>
<table>
<thead>
<tr>
<th>month</th>
<th>image title</th>
<th>photographer</th>
<th>editor</th>
<th>license</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td><a href="https://discuss.pixls.us/t/play-raw-monkey-business/7145">Monkey Business</a></td>
<td>jinxos</td>
<td>andrayverysame</td>
<td>CC BY-SA</td>
</tr>
<tr>
<td>1</td>
<td><a href="https://discuss.pixls.us/t/play-raw-glaciers-birds-and-seals-at-jokulsarlon-iceland/9206">Glaciers, Birds, and Seals at Jökulsárlón/Iceland</a></td>
<td>BayerSe</td>
<td>McCap</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>2</td>
<td><a href="https://discuss.pixls.us/t/play-raw-shooting-into-the-sun/8713">Shooting Into the Sun</a></td>
<td>davidvj</td>
<td>Adlatus</td>
<td>CC BY-SA</td>
</tr>
<tr>
<td>3</td>
<td><a href="https://discuss.pixls.us/t/playraw-the-rail-bridge-north-queensferry/6243">The Rail Bridge, North Queensferry</a></td>
<td>Brian_Innes</td>
<td>Jean-Marc_Digne</td>
<td>CC BY-SA</td>
</tr>
<tr>
<td>4</td>
<td><a href="https://discuss.pixls.us/t/play-raw-sunset-sea/7103">Sunset sea</a></td>
<td>Thanatomanic</td>
<td>sls141</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>5</td>
<td><a href="https://discuss.pixls.us/t/play-raw-vulcan-stone-sunset/9618">Vulcan stone sunset</a></td>
<td>asn</td>
<td>kazah7</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>6</td>
<td><a href="https://discuss.pixls.us/t/playraw-venise-la-serenissime/8571">Venise la sérénissime</a></td>
<td>sguyader</td>
<td>Thomas_Do</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>7</td>
<td><a href="https://discuss.pixls.us/t/play-raw-dockland-side-view-at-night/8237">Dockland side view at night</a></td>
<td>gRuGo</td>
<td>CriticalConundrum</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>8</td>
<td><a href="https://discuss.pixls.us/t/playraw-eating-cicchetti-with-ghosts-in-venezia/5805">Eating cicchetti with ghosts in Venezia</a></td>
<td>sguyader</td>
<td>msd</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>9</td>
<td><a href="https://discuss.pixls.us/t/play-raw-maritime-museum/8969">maritime museum</a></td>
<td>wiegemalt</td>
<td>yteaot</td>
<td>CC BY-SA</td>
</tr>
<tr>
<td>10</td>
<td><a href="https://discuss.pixls.us/t/playraw-alfreds-vision/5574">Alfred’s Vision</a></td>
<td>jinxos</td>
<td>msd</td>
<td>CC BY-SA</td>
</tr>
<tr>
<td>11</td>
<td><a href="https://discuss.pixls.us/t/playraw-crescent-moon-through-silhouetted-fern-fronds/8052">Crescent Moon through silhouetted fern fronds</a></td>
<td>martin.scharnke</td>
<td>gRuGo</td>
<td>CC BY-NC-SA</td>
</tr>
<tr>
<td>12</td>
<td><a href="https://discuss.pixls.us/t/play-raw-everything-frozen/6855">Everything frozen</a></td>
<td>asn</td>
<td>McCap</td>
<td>CC BY-NC-SA</td>
</tr>
</tbody>
</table>
<p>A preview (also shamelessly lifted from Chris’s forum post):</p>
<p><img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite001.jpg" alt="small-playraw-Seite001" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite002.jpg" alt="small-playraw-Seite002" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite003.jpg" alt="small-playraw-Seite003" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite004.jpg" alt="small-playraw-Seite004" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite005.jpg" alt="small-playraw-Seite005" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite006.jpg" alt="small-playraw-Seite006" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite007.jpg" alt="small-playraw-Seite007" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite008.jpg" alt="small-playraw-Seite008" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite009.jpg" alt="small-playraw-Seite009" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite010.jpg" alt="small-playraw-Seite010" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite011.jpg" alt="small-playraw-Seite011" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite012.jpg" alt="small-playraw-Seite012" width="690" height="474">
<img src="https://pixls.us/blog/2018/11/2018-playraw-calendar/small-playraw-Seite013.jpg" alt="small-playraw-Seite013" width="690" height="474"></p>
<p>These <a href="https://discuss.pixls.us/tags/play_raw" title="Play Raw posts on Discuss">Play Raws</a> are a ton of fun and one of the great aspects of having such a generous community to share the images and allowing everyone to practice and play.
I am constantly humbled by the amazing work our community produces and <em>shares with everyone</em>.</p>
<p><strong>Thank you</strong> to everyone who shared images and participated in processing (and sharing how you achieved your results)!  I have really learned some neat things from others’ work and look forward to even more opportunities to play (pun intended).</p>
<p><em>Fun side note:</em> the Play Raws actually began on the old <a href="https://www.rawtherapee.com">RawTherapee</a> forums.  When they moved their official forums here with us, it was one of those awesome things I’m glad they brought over (the people were pretty great too… :)).</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Giving More Thanks]]></title>
            <link>https://pixls.us/blog/2018/11/giving-more-thanks/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/11/giving-more-thanks/</guid>
            <pubDate>Thu, 22 Nov 2018 16:20:28 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/11/giving-more-thanks/Rockwell-Thanksgiving-Simpsons.jpg" /><br/>
                <h1>Giving More Thanks</h1> 
                <h2>For an awesome community</h2>  
                <p>It is a <a href="https://pixls.us/blog/2017/11/giving-thanks/">yearly</a> <a href="https://pixls.us/blog/2016/11/giving-thanks/">tradition</a> for us to post something giving thanks around this holiday.
I think it’s because this community has become such a large part of our lives.
Also, I think it helps to remind ourselves once in a while of the good things that happen to us. So in that spirit…</p>
<!-- more -->
<h2 id="financial-supporters"><a href="#financial-supporters" class="header-link-alt">Financial Supporters</a></h2>
<p>We are lucky enough (for now) to not have huge costs, but they are costs nonetheless. We have been very fortunate that so many of you have stepped up to help pay them.</p>
<h3 id="the-goliath-of-givers"><a href="#the-goliath-of-givers" class="header-link-alt">The Goliath of Givers</a></h3>
<p>For the last several years, <a href="https://plus.google.com/+DimitriosPsychogios" title="Dimitrios Psychogios on Google+"><strong>Dimitrios Psychogios</strong></a> has graciously covered our server expenses (<em>and then some</em>). On behalf of the community, thank you so much! You keep the servers up and running.
Your generosity will cover infrastructure costs for the year and give us room to grow as the community does.</p>
<p>We also have some awesome folks who support us through monthly donations (which are nice because we can plan better if we need to). Together they cover the costs of data storage + transfer in/out of Amazon AWS S3 storage (basically the storage and transfer of all of the attachments and files in the forums).
So <strong>thank you</strong>, you cool friends, you make the cogs turn:</p>
<ul>
<li>Jonas Wagner</li>
<li>elGordo</li>
<li>Chris</li>
<li>Christian</li>
<li>Claes</li>
<li>Thias</li>
<li>Stephan Vidi</li>
<li>ukbanko</li>
<li>Bill Z</li>
<li>Damon Hudac</li>
<li>Luka Stojanovic (a multi-year contributor!)</li>
<li>Moises Mata</li>
<li>WoodShop Artisans</li>
<li>Barrie Minney (He’s a long time monthly contributor!)</li>
<li>Mica</li>
</ul>
<p>It is so amazing not to have to worry about finding the capital to support our growing community; we just expand things as necessary. It is super great.</p>
<p>If you’d like to join them in supporting the site financially, check out the <a href="https://pixls.us/support">support page</a>.</p>
<h2 id="growth"><a href="#growth" class="header-link-alt">Growth</a></h2>
<p>As of today, we have 3135 users, so we’ve continued to grow at a very good rate! Welcome to all the new users.</p>
<p>As you can see from our discuss stats, we’re approaching 500k page views per month:</p>
<p><img src="https://pixls.us/blog/2018/11/giving-more-thanks/monthly-stats.png" alt="PIXLS.US monthly stats"></p>
<p>And our yearly community health is very positive:</p>
<p><img src="https://pixls.us/blog/2018/11/giving-more-thanks/yearly-stats.png" alt="PIXLS.US yearly stats"></p>
<h2 id="gphoto"><a href="#gphoto" class="header-link-alt">gPhoto</a></h2>
<p>This year we added the <a href="http://gphoto.org/">gphoto</a> project to our list of supported applications! gPhoto is an awesome library for interfacing with your camera. It is used by darktable and entangle to allow you to shoot with your camera attached to your laptop or other device. We’re thrilled that they’ve joined us on the forums!</p>
<h2 id="natron"><a href="#natron" class="header-link-alt">Natron</a></h2>
<p><a href="https://natron.fr">Natron</a> is an application mostly used for 3D/video compositing. The main developer was looking to give the project more of a community focus, so of course we were happy to provide them their own spot in the forum for their users to communicate and collaborate.</p>
<h2 id="darix"><a href="#darix" class="header-link-alt">darix</a></h2>
<p>For another year, @darix continues to keep our stuff up and running! Do you ever notice outages? No?! Me neither, and that is due to his daily diligence. We can’t thank him enough for his dedication to our community.</p>
<h2 id="patdavid-or-pat-david"><a href="#patdavid-or-pat-david" class="header-link-alt">patdavid or Pat David</a></h2>
<p>The originator of it all, thank you for the initial push to create this community where we are not divided by which application we use. And for your continued good will towards everyone here, your welcoming spirit, and passion. We’d never have done it without you! And for all the great things to come!</p>
<h2 id="all-of-you"><a href="#all-of-you" class="header-link-alt">All of You</a></h2>
<p>The community is the sum of its parts + all the extra love that comes from all of you! Thank you so much for continuing to stick around, sharing your knowledge, and spreading the great community spirit. It keeps me motivated, creative, and challenged, and for that I am very thankful.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Create lens calibration data for lensfun]]></title>
            <link>https://pixls.us/articles/create-lens-calibration-data-for-lensfun/</link>
            <guid isPermaLink="true">https://pixls.us/articles/create-lens-calibration-data-for-lensfun/</guid>
            <pubDate>Thu, 15 Nov 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/lede_vignetting.jpg" /><br/>
                <h1>Create lens calibration data for lensfun</h1> 
                <h2>Adding support for your lens</h2>  
                <p>[Article updated on: 2019-12-09]</p>
<h2 id="introduction">Introduction<a href="#introduction" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>All photographic lenses have several types of errors. Three of them can be
corrected by software almost losslessly:
<a href="http://en.wikipedia.org/wiki/Distortion_&#40;optics&#41;">distortion</a>, <a href="http://en.wikipedia.org/wiki/Chromatic_aberration">transverse
chromatic aberration (TCA)</a>,
and <a href="http://en.wikipedia.org/wiki/Vignetting">vignetting</a>. The
<a href="http://lensfun.sourceforge.net/">Lensfun</a> library provides code to do these
corrections. Lensfun is not used by the photographer directly. Instead, it is
used by photo raw development software such as darktable or RawTherapee. For
example, if you import a RAW into darktable, darktable detects the lens model,
focal length, aperture and focal distance used for the picture, and it then
calls Lensfun to automatically correct the photograph.</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/distortion_example/01_distortion_before.jpg" data-swap-src="distortion_example/02_distortion_after.jpg" alt="Photo with lens distortion" title="Photo with lens distortion" width="760" height="507">
<figcaption>
<b>Figure 1:</b> 16mm lens showing distortion (<strong>click on the image to show the distortion corrected image</strong>)
</figcaption>
</figure>

<p>Lensfun uses a database to know all the parameters needed to do the lens
corrections. This database is filled by photographers like you, who took time
to calibrate their lenses and to submit their findings back to the Lensfun
project. If you’re lucky, your lens models are already included. If not, please
use this tutorial to do the calibration and contribute your results.</p>
<p>Let us assume your lens isn’t covered by Lensfun yet, or the corrections are
either not good enough or incomplete. The following sections will explain how
to take pictures for calibration. They will also show you how to create an entry
of your own. It is best to provide information for all three errors, but if you
only need distortion corrections, that is fine too.</p>
<h2 id="checking-if-your-lens-is-already-supported">Checking if your lens is already supported<a href="#checking-if-your-lens-is-already-supported" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Before you start to calibrate new lenses or report missing cameras please check
the <a href="https://wilson.bronger.org/lensfun_coverage.html">lens database</a> first!
The list is updated daily. If your lens is already supported, then everything is
fine and you just have to update your database.</p>
<p>If the lens is not supported or doesn’t provide all corrections, you can add
the missing data by following this tutorial.</p>
<h2 id="taking-pictures">Taking pictures<a href="#taking-pictures" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Before we start you need to take a lot of images for the three errors we are
able to correct. This section will explain how to take them and what you need
to pay attention to.</p>
<p>For all pictures you should use a tripod, turn off all image correction and
disable image stabilization in the camera and in the lens itself!  Also make
sure that all High Dynamic Range (HDR) or Dynamic Range Optimizer (DRO)
features are turned off. All those options could mess up your calibration.</p>
<h3 id="distortion">Distortion<a href="#distortion" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For distortion you need to take pictures of a building with several parallel
straight lines. You need at least two lines: one should be at the top of the
image (nearly touching the top of the frame) and the other at about a third
down from the first line. The following example demonstrates this.</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/distortion_example/01_distortion_before.jpg" alt="Photo with lens distortion" title="Photo with lens distortion" width="760" height="507">
<figcaption>
<b>Figure 2:</b> Parking house with straight lines
</figcaption>
</figure>

<!--
<figure>
<figcaption>
<b>Figure 3:</b> Another example taken for distortion corrections
</figcaption>
-->
<p>The lines must be <em>perfectly</em> straight and aligned. You can twist and rotate
the camera, but the lines must have no imperfections. A common mistake is using
tiles or bricks: to your eye they may be “straight”, but they will cause
calibration defects. The best buildings turn out to be parking garages (UK:
multi-storey car parks) or modern glass buildings like fruit-technology stores.</p>
<p>For a fixed focal length lens, you will only require one image. For a zoom lens
it is recommended to take 5 to N pictures, where N is the maximum focal length
minus the minimum focal length. You must take an image at the minimum focal
length and at the maximum focal length. You can move (step backward or forward)
between shots to keep the one-third rule above consistent.</p>
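<p>As a quick sketch (an illustration only, not part of the Lensfun tooling), picking evenly spaced focal lengths that always include both ends of the zoom range could look like this in Python:</p>

```python
# Hypothetical helper: pick n focal lengths for a zoom lens, always
# including the minimum and the maximum, evenly spaced and rounded to
# whole millimetres.
def focal_lengths(fmin, fmax, n=5):
    step = (fmax - fmin) / (n - 1)
    return [round(fmin + i * step) for i in range(n)]

print(focal_lengths(16, 35))  # [16, 21, 26, 30, 35]
```

<p>Five is a practical minimum; the 5-to-N rule above allows for more shots when the zoom range is long.</p>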
<p>You should shoot at your lens’s sharpest aperture - this is often f/8 to f/11.
Set up your camera on a tripod. Shoot at the lowest ISO (without extended
values). This will be 100 or 200. Disable any in-camera lens corrections. Every
vendor has a different name for this feature (Fujifilm calls it Lens Modulation
Optimizer, for example). Check your camera manual and menus.</p>
<h3 id="chromatic-aberrations-tca-">Chromatic aberrations (TCA)<a href="#chromatic-aberrations-tca-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For TCA images, look for a large object with sharp high-contrast edges
throughout the image. Preferably, the edges should be black–white, but anything
close to that is sufficient. Make sure that you have hard edges from the center
all the way to one of the edges of the frame. The best buildings for this have
dark windows with white or gray frames.</p>
<p>Here are some example pictures:</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/tca_example/01_tca.jpg" alt="Photo with grey framed windows" title="Photo with grey framed windows" width="760" height="507">
<figcaption>
<b>Figure 4:</b> Building with gray framed windows
</figcaption>
</figure>

<!--
<figure>
<figcaption>
<b>Figure 5:</b> Another example taken for distortion corrections
</figcaption>
-->
<p>You should take your pictures from at least 8 meters away. For zoom lenses,
take pictures at the same focal lengths as for distortion (5 to N). Make sure
to capture really sharp photos using at least f/8. The best approach is to use
aperture priority, f/8 and ISO 100 on a tripod to avoid any color noise.</p>
<p>You can use e.g. a streetview service to find the right building in your town
(big buildings, dark windows with white or grey frames).</p>
<h3 id="vignetting">Vignetting<a href="#vignetting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>To create pictures for vignetting you need a diffuser in front of the lens. This
may be translucent milk glass, or white plastic foil on glass. Anything works, as
long as it is opaque enough that nothing can be seen through it, yet
transparent enough that light can pass through it. It must not be thicker
than 3 mm and shouldn’t have a noticeable texture. It must be perfectly flush
with the lens front, and it mustn’t be bent. It must be illuminated
<em>homogeneously</em>.</p>
<p>I ordered a piece of <a href="https://www.amazon.de/Metall-Acrylglas-Milchglas-Lichtdurchlässigkeit-beidseitig/dp/B00W3KSO0I/">acryl glass, opal white (milky), smoothly polished, 78%
translucency, 3mm thick, 20 x 20 cm</a>,
which is about 8 Euro on Amazon.</p>
<p>However white plastic foil taped on a piece of ordinary glass for stability
might be enough, if the plastic doesn’t have any texture.</p>
<p>I normally wait for a cloudy day with no sun; then the sky is <em>homogeneously</em>
lit. Put the camera on a tripod and point it at the sky. Put the glass directly
on the lens (remove any filters). In some places where the light is uneven
you may need to shoot indoors. You should experiment to make sure your images
are evenly lit (except for the vignetting, obviously). </p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/vignetting_example/01_vignetting.jpg" alt="Photo showing a camera with milky glass" title="Photo showing a camera with milky glass" width="760" height="507">
<figcaption>
<b>Figure 6:</b> Camera setup to take pictures for vignetting correction
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/vignetting_example/02_vignetting.jpg" alt="Photo showing lens vignetting" title="Photo showing lens vignetting" width="760" height="507">
<figcaption>
<b>Figure 7:</b> Image showing vignetting of a wide angle lens at 16mm
</figcaption>
</figure>

<p>Make sure that <strong>no corrections</strong> are applied by the camera (some models do
this even for RAWs). Set the camera to <em>aperture priority</em> and the
lowest real ISO (this is normally 100 or 200, don’t use extended ISO values). </p>
<p>Switch to manual focus and focus to infinity. This is the most critical step!</p>
<p>For zoom lenses, you need to take pictures at five different focal lengths. Five
focal lengths are enough because the values in between are interpolated. For a
prime lens you only need to take pictures at its single focal length.</p>
<p>Take the pictures as RAW at the fastest aperture (e.g. f/2.8), at three smaller
apertures in steps of 1 EV, and also at the smallest aperture (e.g.
f/22.0). These are often marked on your lens’ aperture ring, or on your
electronic display.</p>
<p>If you have, for example, a 16-35mm lens with apertures f/2.8 - f/22, you need to
take pictures at 16mm, 20mm, 24mm, 28mm and 35mm focal length (remember, you
need the minimum and maximum zoom values). For each of those focal lengths you need
to take five pictures at f/2.8, f/4.0, f/5.6, f/8.0 and f/22.0. This makes 25
pictures in total.</p>
<p>For a 50mm prime lens with f/1.4 - f/16 you need to take 5 pictures at f/1.4, f/2.0,
f/2.8, f/4.0, and f/16.0.</p>
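<p>One EV corresponds to multiplying the f-number by √2. The two aperture series above can be reproduced with a small Python sketch (illustrative only; it snaps the computed values to the nearest standard full stop):</p>

```python
import math

# Standard full-stop f-numbers to snap to.
STANDARD_STOPS = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0, 32.0]

def nearest_stop(f):
    return min(STANDARD_STOPS, key=lambda s: abs(s - f))

def aperture_series(f_fast, f_min):
    # Fastest aperture plus three more stops at 1 EV spacing
    # (each stop multiplies the f-number by sqrt(2)), then the minimum.
    stops = [nearest_stop(f_fast * math.sqrt(2) ** i) for i in range(4)]
    return stops + [f_min]

print(aperture_series(2.8, 22.0))  # [2.8, 4.0, 5.6, 8.0, 22.0]
print(aperture_series(1.4, 16.0))  # [1.4, 2.0, 2.8, 4.0, 16.0]
```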
<h4 id="-exposing-the-picture-correctly"><strong>Exposing the picture correctly</strong><a href="#-exposing-the-picture-correctly" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>When taking the picture, the middle of the frame needs to be as bright as
possible, but not overexposed. This can easily mean +1.7 to +2.0 EV of exposure
compensation.</p>
<p>If your camera has a zebra setting, turn it on and set the zebra mode to ‘100’.
When you start to see the zebra, take the picture. The profile is created from
the RAW file, while the zebra is based on the developed picture, so the RAW isn’t
actually overexposed yet. The overexposure indicator in my camera’s playback
mode confirmed that everything was fine.</p>
<h4 id="vignetting-correction-for-the-professionals">Vignetting correction for the professionals<a href="#vignetting-correction-for-the-professionals" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The following steps are for getting really fine-grained vignetting corrections. The
gain in accuracy is very small! It probably only makes sense for prime
lenses used for portrait or macro photography. This is not required;
the above is absolutely enough.</p>
<p>Lensfun is able to correct vignetting depending on focal distance. Thus, you
can achieve a bit more accuracy by shooting at different focal distances.  This
means you will have to take pictures at 4 different focal distances.</p>
<p>First, focus on the near point (the near point is the closest distance that
can be brought into focus). The next focal distances are the near point
multiplied by 2 and by 6; finally, focus at infinity.</p>
<p>Example: for an 85mm prime lens with the near point at 0.8 m, you have to take
pictures at 0.8 m, 1.6 m, 4.8 m and infinity.</p>
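<p>The four focal distances follow directly from the near point; a tiny illustrative sketch:</p>

```python
import math

# Illustrative: vignetting shooting distances from the lens's near point:
# the near point itself, x2, x6, and infinity.
def vignetting_distances(near_point):
    return [round(near_point * k, 2) for k in (1, 2, 6)] + [math.inf]

print(vignetting_distances(0.8))  # [0.8, 1.6, 4.8, inf]
```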
<h2 id="create-calibration-data">Create calibration data<a href="#create-calibration-data" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are two ways to perform the calibration.</p>
<p>You can upload your data to the Lensfun project, and they’ll do the
calibration work for you. They’ll also review your images to make sure they were
taken correctly.</p>
<p>Or you can do it yourself with the lens calibration script from the lensfun
project.</p>
<p>The script needs the following dependencies to be installed on your system:</p>
<ul>
<li>python3</li>
<li>python3-exiv2 (<a href="http://py3exiv2.tuxfamily.org/">py3exiv2</a> &gt;= 0.2.1)</li>
<li>python3-numpy</li>
<li>python3-scipy</li>
<li>darktable-cli (<a href="https://darktable.org">darktable</a> &gt;= 2.4.0)</li>
<li>tca_correct (<a href="http://hugin.sourceforge.net">hugin</a> &gt;= 2018)</li>
<li>convert (<a href="https://www.imagemagick.org/script/index.php">ImageMagick</a>)</li>
</ul>
<p>You can download the lens calibration script
<a href="https://gitlab.com/cryptomilk/lens_calibrate">HERE</a> or get it as a package
for the major distributions
<a href="https://software.opensuse.org/download.html?project=graphics:darktable&amp;package=lens_calibrate">HERE</a>.</p>
<p>Once you have downloaded the tool, create a folder for your lens calibration
data, change into that directory and run:</p>
<pre><code>$ lens_calibrate.py init
The following directory structure has been created in the local directory

1. distortion - Put RAW file created for distortion in here
2. tca        - Put chromatic abbreviation RAW files in here
3. vignetting - Put RAW files to calculate vignetting in here
</code></pre><p>Follow the instructions and copy your raw files in the corresponding
directories.</p>
<h4 id="vignetting-correction-for-the-professionals">Vignetting correction for the professionals<a href="#vignetting-correction-for-the-professionals" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>For each focal distance you captured pictures at, you have to create a folder.</p>
<p>Let’s pick up the example from above. For an 85mm prime lens we took pictures at
0.8 m, 1.6 m, 4.8 m and infinity. For this lens you would have to create the
following folder structure in the vignetting directory:</p>
<pre><code>vignetting/0.8
vignetting/1.6
vignetting/4.8
vignetting/inf
</code></pre><p>The folder <code>inf</code> is for the focal distance at infinity.</p>
<h3 id="distortion">Distortion<a href="#distortion" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Once you have copied the files into place, it is time to generate the pictures (tif
files) for the distortion calculations. You can do this with the
‘distortion’ option:</p>
<pre><code>$ lens_calibrate.py distortion
Running distortion corrections ...
Converting distortion/_7M32376.ARW to distortion/exported/_7M32376.tif ... DONE
A template has been created for distortion corrections as lenses.conf.
</code></pre><p>Once the tif files have been created, you can start Hugin.</p>
<p>Torsten Bronger created a screencast to give an overview of the distortion
process in Hugin. He uses an old Hugin version in the video; the following
section of this tutorial explains how to do it with Hugin 2018. If you want, you
can watch the screencast first
<a href="https://vimeo.com/51999287/">here (Vimeo)</a>.</p>
<p>When you start Hugin for the first time, the window you get should look like
Figure 8.</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/01_startup.png" alt="Hugin start screen" title="Hugin start screen" width="760" height="507">
<figcaption>
<b>Figure 8:</b> Hugin start screen
</figcaption>
</figure>

<p>First select <em>Interface -&gt; Expert</em> on the menu bar to switch to <strong>Expert
mode</strong>. You will get a window which should look as in Figure 9.</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/02_expert_mode.png" alt="Hugin expert mode" title="Hugin expert mode" width="760" height="507">
<figcaption>
<b>Figure 9:</b> Hugin expert mode
</figcaption>
</figure>

<p>Once in expert mode, click on <em>Add images</em> (Figure 10) and load the first
tif from the <em>distortion/exported</em> folder.</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/03_load_image.png" alt="Hugin add image" title="Hugin add image" width="760" height="507">
<figcaption>
<b>Figure 10:</b> Adding images and setting the focal length and crop factor
</figcaption>
</figure>

<p>By default the lens type should be set to <em>Normal (rectilinear)</em> for
standard lenses. Make sure that the focal length is correct and set the <em>Focal
length multiplier</em>, which is the crop factor of your camera. For full frame
bodies this value should be <em>1</em>. If you have a crop camera, you need to set the
correct crop value, which you can find in its specifications. Next click on the
<em>Control Points</em> tab (Figure 11).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/04_control_points.png" alt="Hugin control points tab" title="Hugin control points tab" width="760" height="507">
<figcaption>
<b>Figure 11:</b> The control points tab
</figcaption>
</figure>

<p>This is the tab where you set the control points to tell the software which
straight lines we are interested in. In this tab, make sure
that <em>auto fine-tune</em> is disabled, <em>auto add</em> is enabled and <em>auto-estimate</em> is
disabled! Once that is the case, zoom the image to 200% (you can also do this by
pressing ‘2’ on the keyboard).</p>
<p>In the zoomed images you have to start at the top edges. In the left pane, go to
the top left corner; in the right pane, to the top right corner. The first
straight line, running from left to right, should be visible. Select the first
control point at the left edge of the picture in the left pane and at the right
edge in the right pane (Figure 12).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/05_control_points_line3.png" alt="Adding control points" title="Adding control points" width="760" height="507">
<figcaption>
<b>Figure 12:</b> Setting the first two control points for the line to add
</figcaption>
</figure>

<p><strong>IMPORTANT</strong>: Once you have the first control point selected in both images,
select <em>Add new Line</em> in the <em>mode</em> dropdown menu! This will add the two
control points as <em>line 3</em>! Now continue adding corresponding control points in
both pictures till you’re in the middle on both sides.</p>
<p><strong>Tip</strong>: The easiest and fastest way is to set the control points in the middle,
at the line where the two panes meet. This reduces the required mouse movements.</p>
<p>Now zoom out by pressing ‘0’ and check that everything has been added
correctly (Figure 13).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/06_control_points_line3_done.png" alt="Control points for line 3" title="Control points for line 3" width="760" height="507">
<figcaption>
<b>Figure 13:</b> Control points for line3
</figcaption>
</figure>

<p>While you are zoomed out, find a line which is about a third of the way into the
image from the top, and repeat adding a line. Zoom to 200% again, select the first
control points and again choose <em>Add new Line</em>, which will result in <em>line 4</em> (Figure 14)!</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/07_control_points_line4_done.png" alt="Control points for line 3 and 4" title="Control points for line 3 and 4" width="760" height="507">
<figcaption>
<b>Figure 14:</b> Control points for line 3 and line 4
</figcaption>
</figure>

<p>Zoom out by pressing ‘0’ and check that you have two lines, line 3 and line 4. Now move on to the <em>Stitcher</em> tab (Figure 15).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/08_stitcher_rectelinear.png" alt="Selecting the projection" title="Selecting the projection" width="760" height="507">
<figcaption>
<b>Figure 15:</b> The stitcher tab, select the correct projection here.
</figcaption>
</figure>

<p>In the <em>Stitcher</em> tab you need to select the correct <em>Projection</em> for your
lens. This is <strong>Rectilinear</strong> for standard lenses. Once done switch to the
<em>Photos</em> tab (Figure 16).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/09_photos_custom_optimizer.png" alt="Optimizer tab" title="Optimizer tab" width="760" height="507">
<figcaption>
<b>Figure 16:</b> Enable the Optimizer tab.
</figcaption>
</figure>

<p>At the bottom under <em>Optimize</em> select <strong>Custom parameters</strong> for <em>Geometric</em>.
This will add an <em>Optimizer</em> tab. Switch to it once it appears (Figure 17).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/10_optimizer_select_abc.png" alt="Optimizer: Select a b c" title="Optimizer: Select a b c" width="760" height="507">
<figcaption>
<b>Figure 17:</b> Optimizer tab: Select a b c for barrel distortion correction
</figcaption>
</figure>

<p>Select the ‘a’, ‘b’ and ‘c’ lens parameters and click on <em>Optimize now!</em>.
Accept the calculation with <em>Yes</em>. Now the values for ‘a’, ‘b’ and ‘c’ will
change (Figure 18).</p>
<figure>
<img src="https://pixls.us/articles/create-lens-calibration-data-for-lensfun/hugin/11_optimizer_done.png" alt="Optimizer: Calculated a b c" title="Optimizer: Calculated a b c" width="760" height="507">
<figcaption>
<b>Figure 18:</b> Calculated distortion correction ‘a’, ‘b’ and ‘c’.
</figcaption>
</figure>

<p>The calculated correction values for ‘a’, ‘b’ and ‘c’, which you can find in the
tab, need to be added to lenses.conf. Open the file and fill out the missing
options. Here is an example:</p>
<pre><code>[FE 85mm F1.4 GM]
maker = Sony
mount = Sony E
cropfactor = 1.0
aspect_ratio = 3:2
type = normal
</code></pre><ul>
<li><code>maker</code> is the lens manufacturer, e.g. <em>Sony</em></li>
<li><code>mount</code> is the mount system of the lens; check the lensfun database for the correct name</li>
<li><code>cropfactor</code> is 1.0 for full-frame cameras; for a crop-sensor camera, find out its correct crop factor</li>
<li><code>aspect_ratio</code> is the aspect ratio of the pictures, which is normally 3:2</li>
<li><code>type</code> is the type of the lens, e.g. ‘normal’ for standard rectilinear lenses. Other values are: <em>stereographic</em>, <em>equisolid</em>, <em>panoramic</em> or <em>fisheye</em>.</li>
</ul>
<p>If you have e.g. an 85mm lens, there should be an entry for that focal
length, initially set to 0.0, 0.0, 0.0. Replace these values in the
lenses.conf with the corrections calculated in the <em>Optimizer</em> tab
(Figure 18).</p>
<pre><code>[FE 85mm F1.4 GM]
maker = Sony
mount = Sony E
cropfactor = 1.0
aspect_ratio = 3:2
type = normal
distortion(85mm) = 0.002, 0.001, -0.009
</code></pre><h4 id="but-i-don-t-want-to-do-distortion-corrections-">But I don’t want to do distortion corrections!<a href="#but-i-don-t-want-to-do-distortion-corrections-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>No problem. If you want to skip this step, you can create the lenses.conf
manually. It should look like the following example:</p>
<pre><code>[lens model]
maker =
mount =
cropfactor = 1.0
aspect_ratio = 3:2
type = normal
</code></pre><p>The section name is the <em>lens model</em>. You can find it by running:</p>
<pre><code>exiv2 -g LensModel -pt &lt;raw image file&gt;
</code></pre><p>The other options are:</p>
<ul>
<li><code>maker</code> is the lens manufacturer, e.g. <em>Sony</em></li>
<li><code>mount</code> is the mount system of the lens; check the lensfun database for the correct name</li>
<li><code>cropfactor</code> is 1.0 for full-frame cameras; for a crop-sensor camera, find out its correct crop factor</li>
<li><code>aspect_ratio</code> is the aspect ratio of the pictures, which is normally 3:2</li>
<li><code>type</code> is the type of the lens, e.g. ‘normal’ for standard rectilinear lenses. Other values are: <em>stereographic</em>, <em>equisolid</em>, <em>panoramic</em> or <em>fisheye</em>.</li>
</ul>
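If you would rather script the skeleton file, the <code>key = value</code> layout above is plain INI syntax, so Python's standard <code>configparser</code> can write it. A minimal sketch; the section name and values simply mirror the example lens used earlier in this article:

```python
import configparser

# Build a minimal lenses.conf skeleton for one lens. Replace the section
# name and option values with your own lens's data.
conf = configparser.ConfigParser()
conf["FE 85mm F1.4 GM"] = {
    "maker": "Sony",
    "mount": "Sony E",
    "cropfactor": "1.0",
    "aspect_ratio": "3:2",
    "type": "normal",
}

# configparser writes the same "key = value" format shown above.
with open("lenses.conf", "w") as f:
    conf.write(f)
```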
<h3 id="tca">TCA<a href="#tca" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><em>You can skip this step if you don’t want to do TCA corrections.</em></p>
<p>This step is fully automatic; all you have to do is run the following
command and wait:</p>
<pre><code>$ lens_calibrate tca
Running TCA corrections for tca/exported/_7M32375.ppm ... DONE
</code></pre><p>However, it is possible to calculate more complex TCA corrections. For this you
need to run the step with an additional command line argument, like this:</p>
<pre><code>$ lens_calibrate --complex-tca tca
Running TCA corrections for tca/exported/_7M32375.ppm ... DONE
</code></pre><!-- TODO: Difference between normal and complex TCA? -->
<!-- TODO: Explain plots? -->
<h3 id="vignetting">Vignetting<a href="#vignetting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><em>You can skip this step if you don’t want to do vignetting corrections.</em></p>
<p>Calculating the vignetting corrections is also very simple. All you
have to do is run the following command and wait:</p>
<pre><code>$ lens_calibrate vignetting
</code></pre><!-- TODO: Explain plots? -->
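For context, and as an assumption about lensfun's internals rather than anything this script requires you to know: lensfun describes vignetting with the polynomial “pa” (Pablo D'Angelo) model, which the calibration step fits for you. A small sketch of what the fitted coefficients mean:

```python
def pa_vignetting_factor(r: float, k1: float, k2: float, k3: float) -> float:
    """Relative illumination at normalized radius r under the 'pa' model:
    1 + k1*r^2 + k2*r^4 + k3*r^6.

    Illustrative only; lens_calibrate estimates k1, k2, k3 from your shots.
    """
    r2 = r * r
    return 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
```

At the image center (r = 0) the factor is 1.0 by construction; negative coefficients darken the corners, which is the typical vignetting falloff.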
<h3 id="generating-the-xml">Generating the XML<a href="#generating-the-xml" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>To get corrections into lensfun you need a lenses.conf with the required
options filled out (maker, mount, cropfactor, aspect_ratio, type) and at
least one of the correction steps done. Once you have this, you can generate
the XML file that lensfun consumes with the following command:</p>
<pre><code>$ lens_calibrate generate_xml
Generating lensfun.xml
</code></pre><p>You can redo this step as many times as you want, and simply rerun it
whenever you add an additional correction.</p>
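The generated file follows lensfun's XML database schema. For the example lens used above, the output would look roughly like this (the structure is lensfun's; the values are illustrative):

```xml
<lensdatabase>
    <lens>
        <maker>Sony</maker>
        <model>FE 85mm F1.4 GM</model>
        <mount>Sony E</mount>
        <cropfactor>1.0</cropfactor>
        <calibration>
            <distortion model="ptlens" focal="85" a="0.002" b="0.001" c="-0.009" />
        </calibration>
    </lens>
</lensdatabase>
```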
<h2 id="using-the-lensfun-xml">Using the lensfun.xml<a href="#using-the-lensfun-xml" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You may want to fine-tune the lens model name in the generated lensfun.xml
file. Lensfun normalises names before any matching, so you have some freedom.
For example, upper/lowercase can be changed arbitrarily, and any single <code>f</code> is
ignored, so you may change <code>16-35mm 2.8</code> into <code>16-35mm f/2.8</code>. If a
tele converter was involved, you must add “converter” to the name so that Lensfun
does not try to derive allowed focal lengths from the lens name.
The ordering of parts in the lens name is completely unimportant for matching, as
are single punctuation characters. You may even add things (e.g. turn <code>16-35</code> into
<code>16-35mm</code>), but be conservative here. <strong>Never</strong> drop anything that exiv2 reports!</p>
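As a rough illustration of these matching rules, here is a sketch in Python of an order-insensitive, punctuation-insensitive name comparison. It is my own toy model of the behaviour described above, not lensfun's actual normalisation code:

```python
import re

def normalized_tokens(lens_name: str) -> frozenset:
    """Reduce a lens name to an order-insensitive token set:
    lowercase everything, treat punctuation as separators, and
    ignore a lone 'f' (so '2.8' and 'f/2.8' compare equal).
    """
    tokens = re.split(r"[^a-z0-9.]+", lens_name.lower())
    return frozenset(t for t in tokens if t and t != "f")
```

Under this scheme <code>normalized_tokens("16-35mm f/2.8")</code> equals <code>normalized_tokens("2.8 16-35MM")</code>, while dropping a token changes the match, which is why you should never remove anything that exiv2 reports.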
<p>If you want to test whether the calibration you created works, copy the
generated lensfun.xml file to the local lensfun config folder in your home
directory:</p>
<pre><code>cp lensfun.xml ~/.local/share/lensfun
</code></pre><p>Make sure your camera is recognized by lensfun; otherwise you need to add an
entry for it to the lensfun.xml file too.</p>
<h2 id="contributing-your-lensfun-xml">Contributing your lensfun.xml<a href="#contributing-your-lensfun-xml" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To contribute the calibration data for your lens to the lensfun project,
execute the script with the following command:</p>
<pre><code>$ lens_calibrate ship
Created lensfun_calibration.tar.xz
Open a bug at https://github.com/lensfun/lensfun/issues/ with the data.
</code></pre><p>This will create a tarball with all the required data. Now go to</p>
<ul>
<li><a href="https://github.com/lensfun/lensfun/issues/">https://github.com/lensfun/lensfun/issues/</a></li>
</ul>
<p>and open a bug using the following subject:</p>
<pre><code>Calibration data for &lt;lens model&gt;
</code></pre><p>And for the description just use:</p>
<pre><code>Please add the attached lens data to the lensfun database.
</code></pre><p>Attach the lensfun_calibration.tar.xz to the bug report.</p>
<h2 id="feedback">Feedback<a href="#feedback" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Feedback for this article is very welcome. If you’re a lensfun developer and
are reading this, please contact me: I would like to contribute the script to
lensfun and further improve the article, and I still have some unanswered
questions.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Support Andrea Ferrero on Patreon!]]></title>
            <link>https://pixls.us/blog/2018/09/support-andrea-ferrero-on-patreon/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/09/support-andrea-ferrero-on-patreon/</guid>
            <pubDate>Wed, 26 Sep 2018 18:22:32 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/09/support-andrea-ferrero-on-patreon/af-lede.jpg" /><br/>
                <h1>Support Andrea Ferrero on Patreon!</h1> 
                <h2>Andrea is developing Photo Flow, GIMP AppImage, Hugin AppImage, and more!</h2>  
                <p>Andrea Ferrero, or as we know him <a href="https://discuss.pixls.us/u/carmelo_drraw/summary">Carmelo_DrRaw</a>, has been contributing to the PIXLS.US community since April of 2015. A self described <em>developer and photography enthusiast</em>, Andrea is the developer of the <a href="https://github.com/aferrero2707/PhotoFlow">PhotoFlow</a> image editor, and is producing AppImages for:</p>
<ul>
<li>The <a href="https://www.gimp.org/">GIMP</a> image manipulation program - weekly <a href="https://github.com/aferrero2707/gimp-appimage/releases/tag/continuous">AppImage packages</a> from stable releases and development branches.</li>
<li>The <a href="https://rawtherapee.com/">RawTherapee</a> editor - nightly <a href="https://github.com/Beep6581/RawTherapee/releases/tag/nightly">AppImage</a> and&nbsp;<a href="https://github.com/aferrero2707/rt-win64/releases/tag/continuous">Windows</a> packages from stable and development branches</li>
<li><a href="http://qtpfsgui.sourceforge.net/">LuminanceHDR</a> - nightly&nbsp;<a href="https://github.com/aferrero2707/lhdr-appimage/releases/tag/continuous">AppImage</a> packages for Linux</li>
<li><a href="http://jcelaya.github.io/hdrmerge/">HDRMerge</a> -&nbsp;nightly <a href="https://github.com/jcelaya/hdrmerge/releases/tag/nightly">AppImage</a> packages for Linux</li>
<li>The <a href="http://hugin.sourceforge.net/">Hugin</a> panorama photo stitcher -&nbsp;<a href="https://gist.github.com/aferrero2707/d676fea46f3d91fcd4c7fb7b2c83a885">AppImages</a> for stable releases and development branches</li>
</ul>

<p>Andrea is the best sort of community member, contributing to six different projects (including his own)! He is always thoughtful in his responses, does his own support for PhotoFlow, and is kind and giving. He has finally started a <a href="https://www.patreon.com/andreaferrero/overview">Patreon page to support all of his hard work</a>. Support him now!</p>
<!--more-->
<p>He was also kind enough to answer a few questions for us:</p>
<p>PX: <strong>When did you get into photography? What’s your favorite subject matter?</strong></p>
<p>AF: I think I was about 15 when I got my first reflex, and I was immediately fascinated by macro-photography. This is still what I like to do the most, together with taking pictures of my kids. ;-)
By the way, you can visit my personal free web gallery on GitHub: <a href="http://aferrero2707.github.io/photorama/gallery/">http://aferrero2707.github.io/photorama/gallery/</a> (adapted from <a href="https://github.com/sunbliss/photorama">this project</a>).</p>
<p>It is still a work in progress, but you are welcome to fork it and adapt it to your needs if you find it useful!</p>
<p>PX: <strong>What brought you to using and developing Free/Open Source Software?</strong></p>
<p>AF: I started to get interested in programming when I was at the university, in the late 90’s. At that time I quickly realized that the easiest way to write and compile my code was to throw Linux into my hard drive. Things were not as easy as today but I eventually managed to get it running, and the adventure began.</p>
<p>A bit later I started a scientific career (nothing related to image processing or photography, so I won’t bother with more details about my daily job), and since then I have been a user of Linux-based computing clusters for almost 20 years at the time of writing… A large majority of the software tools I use at work are free and open sourced and this definitely has marked my way of thinking and developing.</p>
<p>PX: <strong>What are some new/exciting features you develop in Photo Flow?</strong></p>
<p>AF: Currently I am mostly focusing on HDR processing and high-quality Dynamic Range compression - what is also commonly called shadows/highlights compression.</p>
<p>More generally, there is still a lot of work to do on the performance side. The software is already usable and quite stable, but some of the image filters are still a bit too slow for real-time feedback, especially when combined together.</p>
<p>The image export module is also currently a work in progress. It is already possible to select either JPEG or TIFF (8-bit, 16-bit or 32-bit floating-point depth) as the output format, to resize the image and add some post-resize sharpening, and to select the output ICC profile.
What is still missing is a real-time preview of the final result, with the possibility to soft-proof the output profile. The same options need to be included in the batch processor as well.</p>
<p>On a longer term, and if there is some interest from the community, I am thinking about porting the code to Android in a simplified form that would be suitable for tablets and the like. The small memory footprint of the program could be an important advantage on such systems.</p>
<p>PX: <strong>What other applications would you like to make an AppImage for? Have you explored Snaps or Flatpaks?</strong></p>
<p>AF: I am currently developing and refining AppImage packages for GIMP, RawTherapee, LuminanceHDR and HDRMerge, in addition to PhotoFlow. All packages are automatically built and deployed through Travis CI, for better reproducibility and increased security. Hugin is the next application that I plan to package as an AppImage.</p>
<p>All the AppImage projects are freely available on GitHub. That’s also the best place for any feedback, bug report, or suggestion.</p>
<p>There is an ongoing <a href="https://github.com/aferrero2707/gimp-appimage/issues/9">discussion</a> with the GIMP developers about the possibility to provide the AppImage as an official download.</p>
<p>In addition to the AppImage packages, I am also working with the RawTherapee developers on cross-compiled Windows packages that are also automatically built on Travis CI. The goal is to help them provide up-to-date packages from the main development branches, so that more users can test them and provide feedback.</p>
<p>I’m also open to any suggestions for additional programs that could be packaged as AppImages, so do not hesitate to express your wishes!</p>
<p>Personally I am a big fan of the AppImage idea, mostly because, unlike Snap or Flatpak packages, it is not bound to any specific distribution or run-time environment. The packager has full control over the contents of the AppImage package, pretty much like macOS bundles.</p>
<p>Moreover, I find the community of developers around the AppImage format very active and open-minded. I am currently collaborating to improve the packaging of GTK applications. For those who are interested in the details, the discussion can be followed here: <a href="https://github.com/linuxdeploy/linuxdeploy/issues/2">https://github.com/linuxdeploy/linuxdeploy/issues/2</a></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC 2.3.6]]></title>
            <link>https://pixls.us/blog/2018/08/g-mic-2-3-6/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/08/g-mic-2-3-6/</guid>
            <pubDate>Wed, 29 Aug 2018 15:39:18 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/lede_happy-birthday.jpg" /><br/>
                <h1>G'MIC 2.3.6</h1> 
                <h2>10 Years of Open Source Image Processing!</h2>  
<p>The <a href="https://www.greyc.fr/?page_id=443&amp;lang=en">IMAGE</a> team of the <a href="https://www.greyc.fr/?page_id=27&amp;lang=en">GREYC</a> laboratory is happy to celebrate with you the 10th anniversary of <a href="http://gmic.eu"><em>G’MIC</em></a>, an open-source (<a href="http://www.cecill.info/">CeCILL</a>), generic and extensible framework for <a href="https://en.wikipedia.org/wiki/Digital_image_processing">image processing</a>.
GREYC is a public research laboratory on digital technology located in Caen, Normandy, France, under the supervision of three research institutions: the <a href="http://www.cnrs.fr">CNRS</a> (UMR 6072), the <a href="http://www.unicaen.fr/home-578581.kjsp?RH=1291198060074&amp;RF=UNIV-EN">University of Caen Normandy</a> and the <a href="http://www.ensicaen.fr/">ENSICAEN</a> engineering school.</p>
<!--more-->
<figure>
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_234.png" alt="G’MIC-Qt"/>
<figcaption>
G’MIC-Qt, the main user interface of the G’MIC project. 
</figcaption>
</figure>

<p>This celebration gives us the perfect opportunity to announce the release of a new version (<a href="https://gmic.eu/download.shtml"><strong>2.3.6</strong></a>) of this free software and to share with you a summary of the latest notable changes since our <a href="https://pixls.us/blog/2018/02/g-mic-2-2/">last G’MIC report</a>, published on <a href="https://pixls.us/blog/2018/02/g-mic-2-2/"><em>PIXLS.US</em> in February 2018</a>.</p>
<hr>
<p><strong>Related links:</strong></p>
<ul>
<li><a href="https://gmic.eu">The G’MIC project</a></li>
<li><a href="https://twitter.com/gmic_ip">Twitter feed</a></li>
<li><a href="https://linuxfr.org/news/gmic-un-nouvel-outil-libre-de-manipulation-dimages">Announcement of the first version of G’MIC on LinuxFr.org </a> [fr]</li>
<li><a href="https://pixls.us/blog/2018/02/g-mic-2-2/">Previous article about G’MIC on PIXLS.US</a></li>
</ul>
<hr>
<p>(<em>Click on the images of the report to display them in full resolution</em>)</p>
<h2 id="1-looking-back-at-10-years-of-development"><a href="#1-looking-back-at-10-years-of-development" class="header-link-alt">1. Looking back at 10 years of development</a></h2>
<p><em>G’MIC</em> is a multiplatform framework (GNU/Linux, macOS, Windows…) providing various user interfaces for manipulating <em>generic</em> image data, such as 2D or 3D hyperspectral images or image sequences with float values (thus including “normal” color images). More than <a href="http://gmic.eu/reference.shtml">1000 different operators</a> for image processing are included, a number that is extensible at will since users can add their own functions by using the embedded script language.</p>
<p>It was at the end of July 2008 that the first lines of <em>G’MIC</em> code were created (in <em>C++</em>).
At that time, I was the main developer involved in <a href="http://cimg.eu"><em>CImg</em></a>, a lightweight <em>open source</em> <em>C++</em> library for image processing, when I made the following observation:</p>
<ul>
<li>The initial goal of <em>CImg</em>, which was to propose a “minimal” library of functions to help <em>C++</em> developers to develop image processing algorithms, was broadly achieved; most of the algorithms I considered as <em>essential</em> in image processing were integrated. <em>CImg</em> was initially meant to stay lightweight, so I didn’t want to include new algorithms <em>ad vitam æternam</em>, which would be too heavy or too specific, thus betraying the initial concept of the library.</li>
<li>However, this would only cater to a rather small community of people with both <em>C++</em> knowledge <strong>and</strong> image processing knowledge! One of the natural evolutions of the project, creating <a href="https://en.wikipedia.org/wiki/Language_binding"><em>bindings</em></a> of <em>CImg</em> to other programming languages, didn’t appeal much to me given the lack of interest I had in writing the code. And these potential <em>bindings</em> still only concerned an audience with some development expertise.</li>
</ul>
<p>My ideas were starting to take shape: I needed to find a way to provide <em>CImg</em> processing features for <strong>non-programmers</strong>. Why not attempt to build a tool that could be used on the command line (like the famous <a href="https://www.imagemagick.org/script/convert.php"><em>convert</em></a> command from <a href="https://www.imagemagick.org"><em>Imagemagick</em></a>)? A first attempt in June 2008 (<em>inrcast</em>, presented on the French news site <a href="https://linuxfr.org/users/dtschump/journaux/inrcast-un-autre-outil-de-manipulation-dimages">LinuxFR</a>), while unsuccessful, allowed me to better understand what would be required for this type of tool to  easily process images from the command line.</p>
<p>In particular, it occurred to me that <strong>conciseness</strong> and <strong>coherence</strong> of the command syntax were the two most important things to build upon. These were the aspects that required the most effort in research and development (the actual image processing features were already implemented in <em>CImg</em>). In the end, the focus on conciseness and coherence took me much further than originally planned as G’MIC got an <a href="https://en.wikipedia.org/wiki/Interpreter_(computing)">interpreter</a> for <a href="https://gmic.eu/tutorial/basics.shtml">its own scripting language</a>, and then a <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation"><em>JIT</em> compiler</a> for the evaluation of mathematical expressions and image processing algorithms working at the pixel level.</p>
<p>With these ideas, by the end of July 2008, I was happy to announce the <a href="https://linuxfr.org/news/gmic-un-nouvel-outil-libre-de-manipulation-dimages">first draft of <em>G’MIC</em></a>. The project was officially up and running!</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/logo_gmic.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/logo_gmic.png" alt="G’MIC logo"/>
</a>
<figcaption>
Fig. 1.1: Logo of the G’MIC project, libre framework for image processing, and its cute mascot &ldquo;Gmicky&rdquo; (illustrated by <a href="http://www.davidrevoy.com/">David Revoy</a>).
</figcaption>
</figure>

<p>A few months later, in January 2009, enriched by my previous development experience on <a href="http://cimg.eu/greycstoration"><em>GREYCstoration</em></a> (a free tool for nonlinear image denoising and interpolation, from which a plug-in was made for <a href="http://www.gimp.org"><em>GIMP</em></a>), and in the hopes of reaching an even larger public, I published a <a href="https://linuxfr.org/news/traitement-dimages-quand-gmic-130-sinvite-dans-gimp"><em>G’MIC</em> <em>GTK</em> plug-in for <em>GIMP</em></a>.
This step proved to be a defining moment for the <em>G’MIC</em> project, giving it a significant boost in popularity as seen below (the project was hosted on <a href="https://sourceforge.net/projects/gmic/"><em>Sourceforge</em></a> at the time).</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/stats_plugin.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/stats_plugin.png" alt="Download statistics"/>
</a>
<figcaption>
Fig.1.2: Monthly downloads statistics of G’MIC, between July 2008 and May 2009 (release of the GIMP plug-in happened in January 2009).
</figcaption>
</figure>

<p>The sudden interest in the plugin from different users of <em>GIMP</em> (photographers, illustrators and other types of artists) was indeed a real launchpad for the project, with the rapid appearance of various contributions and external suggestions (code, forum management, web pages, tutorial writing, videos, etc.). The often idealized community effect of free software finally began to take off! Users and developers began to take a closer look at the operation of the original <em>command-line interface</em> and its associated scripting language (which admittedly did not interest many people until that moment!). From there, many of them <a href="https://github.com/dtschump/gmic-community">took the plunge</a> and began to implement new image processing filters in the <em>G’MIC</em> language, continuously integrating them into the <em>GIMP</em> plugin. Today, these contributions represent almost half of the filters available in the plugin.</p>
<p>Meanwhile, the important and repeated contributions of <a href="https://foureys.users.greyc.fr/Fr/index.php"><em>Sébastien Fourey</em></a>, colleague of the <em>GREYC IMAGE</em> team (and experienced C++ developer) significantly improved the user experience of <em>G’MIC</em>. <em>Sébastien</em> is indeed at the heart of the main graphical interface development of the project, namely:</p>
<ul>
<li>The <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a> web service (which was later re-organised by <em>GREYC’s</em> Development Department).</li>
<li>The free software <a href="https://github.com/c-koi/zart"><em>ZArt</em></a>, a graphical interface based on the <a href="https://www.qt.io/"><em>Qt</em></a> library for applying <em>G’MIC</em> filters to video sequences (from files or digital camera streams).</li>
<li>And above all, at the end of 2016, Sébastien tackled a complete rewrite of the <em>G’MIC</em> plugin for <em>GIMP</em> in a more <strong>generic</strong> form called <a href="https://github.com/c-koi/gmic-qt"><em>G’MIC-Qt</em></a>. This component, also based on the <em>Qt</em> library (as the name suggests), is a single plugin that works equally well with both <a href="http://www.gimp.org"><em>GIMP</em></a> and <a href="http://krita.org"><em>Krita</em></a>, two of the leading free applications for photo retouching/editing and digital painting. <em>G’MIC-Qt</em> has now completely supplanted the original <em>GTK</em> plugin thanks to its many features: built-in filter search engine, better preview, superior interactivity, etc. Today it is the most successful interface of the <em>G’MIC</em> project and we hope to be able to offer it in the future for other host applications (contact us if you are interested in this subject!).</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gui_seb.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gui_seb.png" alt="Interfaces graphiques de G’MIC"/>
</a>
<figcaption>
Fig.1.3: Different graphical interfaces of the G’MIC project, developed by Sébastien Fourey: G’MIC-Qt, G’MIC Online and ZArt.
</figcaption>
</figure>

<p>The purpose of this article is not to go into too much detail about the history of the project. Suffice it to say that we have not really had time to become bored in the last ten years!</p>
<p>Today, <em>Sébastien</em> and I are the two primary maintainers of the <em>G’MIC</em> project (<em>Sébastien</em> mainly for the interface aspects, myself for the development and improvement of filters and the core development), in addition to our main professional activity (research and teaching/supervision).</p>
<p>Let’s face it, managing a free project like <em>G’MIC</em> takes a considerable amount of time, despite its modest size (~120k lines of code). But the original goal has been achieved: thousands of non-programming users have the opportunity to freely and easily use our image processing algorithms in many different areas: <a href="https://en.wikipedia.org/wiki/Image_editing">image editing</a>, <a href="https://en.wikipedia.org/wiki/Photo_manipulation">photo manipulation</a>, illustration and <a href="https://en.wikipedia.org/wiki/Digital_painting">digital painting</a>, <a href="https://en.wikipedia.org/wiki/Video_editing_software">video processing</a>, scientific illustration, <a href="https://en.wikipedia.org/wiki/Procedural_generation">procedural generation</a>, <a href="https://en.wikipedia.org/wiki/Glitch_art">glitch art</a>…</p>
<p>The milestone of <em>3.5 million total downloads</em> was exceeded last year, with a current average of about 400 daily downloads from the official website (figures have been steadily declining in recent years as <em>G’MIC</em> is becoming more commonly downloaded and installed via alternative external sources).</p>
<p>It is sometimes difficult to keep a steady pace of development and the motivation that has to go with it, but we persisted, thinking back to the happy users who from time to time share their enthusiasm for the project!</p>
<p>Obviously we can’t name all the individual contributors to <em>G’MIC</em> whom we would like to thank, and with whom we’ve enjoyed exchanging during these ten years, but our heart is with them! Let’s also thank the <em>GREYC</em> laboratory and <a href="http://www.cnrs.fr/ins2i/"><em>INS2I</em> institute of <em>CNRS</em></a> for their strong support for this free project. A big thank you also to all the community of <em>PIXLS.US</em> who did a great job supporting the project (hosting the forum and  publishing our <a href="https://pixls.us/blog/">articles on <em>G’MIC</em></a>).</p>
<p>But let’s stop reminiscing and get down to business: new features since our last article about the release of version 2.2!</p>
<h2 id="2-automatic-illumination-of-flat-colored-drawings"><a href="#2-automatic-illumination-of-flat-colored-drawings" class="header-link-alt">2. Automatic illumination of flat-colored drawings</a></h2>
<p><em>G’MIC</em> recently gained a quite impressive new filter named « <strong>Illuminate 2D shape</strong> », the objective of which is to automatically add lit zones and clean shadows to flat-colored 2D drawings, in order to give a 3D appearance.</p>
<p>First, the user provides an object to illuminate, in the form of an image on a transparent background (typically a drawing of a character or animal). By analyzing the shape and content of the image, G’MIC then tries to deduce a concordant 3D elevation map (“bumpmap”). The map of elevations obtained is obviously not exact, since a 2D drawing colored in solid areas does not contain explicit information about an associated 3D structure! From the estimated 3D elevations it is easy to deduce a map of normals (“normalmap”) which is used in turn to generate an illumination layer associated with the drawing (following a <a href="https://en.wikipedia.org/wiki/Phong_shading">Phong shading model</a>).</p>
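To make the last step concrete, here is a minimal Python sketch of the diffuse (Lambertian) term of such a Phong-style model for a single pixel. It is an illustration of the idea with made-up constants, not G’MIC's implementation:

```python
import math

def diffuse_shade(normal, light_dir, ambient=0.2):
    """Diffuse (Lambertian) term of a Phong-style shading model.

    normal:    unit surface normal (nx, ny, nz) taken from the normalmap.
    light_dir: direction toward the light source (any length; normalized here).
    Returns an illumination value between ambient and 1.0.
    """
    norm = math.sqrt(sum(c * c for c in light_dir))
    lx, ly, lz = (c / norm for c in light_dir)
    # Clamp N·L at zero: surfaces facing away from the light get ambient only.
    n_dot_l = max(0.0, normal[0] * lx + normal[1] * ly + normal[2] * lz)
    return ambient + (1.0 - ambient) * n_dot_l
```

Applying this per pixel over the estimated normalmap yields the illumination layer; moving <code>light_dir</code> is what produces the animated relighting shown further below.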
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_illuminate2d.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_illuminate2d.png" alt="Illuminate 2D shape"/>
</a>
<figcaption>
Fig. 2.1: G’MIC’s “<strong>Illuminate 2D shape</strong>“ filter in action, demonstrating automatic shading of a beetle drawing (shaded result on the right).
</figcaption>
</figure>

<p>This new filter is very flexible and allows the user to have a fairly fine control over the lighting parameters (position and light source rendering type) and estimation of the 3D elevation. In addition the filter gives the artist the opportunity to rework the generated illumination layer, or even directly modify the elevation maps and estimated 3D normals. The figure below illustrates the process as a whole; using the solid colored beetle image (<em>top left</em>), the filter fully automatically estimates an associated 3D normal map (<em>top right</em>). This allows it to generate renditions based on the drawing (<em>bottom row</em>) with two different rendering styles: smooth and quantized.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/bug_all.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/bug_all.png" alt="Normalmap estimation"/>
</a>
<figcaption>
Fig. 2.2: The process pipeline of the G’MIC “<strong>Illuminate 2D shape</strong>“ filter involves the estimation of a 3D normal map to generate the automatic illumination of a drawing.
</figcaption>
</figure>

<p>Despite the difficulty inherent in the problem of converting a 2D image into 3D elevation information, the algorithm used is surprisingly effective in a good many cases. The estimation of the 3D elevation map obtained is sufficiently consistent to automatically generate plausible 2D drawing illuminations, as illustrated by the two examples below - obtained in just a few clicks!</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_snake.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_snake.png" alt="Shading example 1"/>
</a>
<a href="http://gmic.eu/gmic234/fullsize/gmic_tiger.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_tiger.png" alt="Shading example 2"/>
</a>
<figcaption>
Fig. 2.3: Two examples of completely automatic shading of 2D drawings, generated by G’MIC
</figcaption>
</figure>

<p>It occurs, of course, that the estimated 3D elevation map does not always match what one might want. Fear not, the filter allows the user to provide “guides” in the form of an additional layer composed of colored lines, giving more precise information to the algorithm about the structure of the drawing to be analyzed. The figure below illustrates the usefulness of these guides for illuminating a drawing of a hand (<em>top left</em>); the automatic illumination (<em>top right</em>) does not account for information in the lines of the hand. Including these few lines in an additional layer of “guides” (<em>in red, bottom left</em>) helps the algorithm to illuminate the drawing more satisfactorily.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_hand4.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_hand4.png" alt="Using additional guides"/>
</a>
<figcaption>
Fig. 2.4: Using a layer of “guides” to improve the automatic illumination rendering generated by G’MIC.
</figcaption>
</figure>

<p>If we analyze more precisely the differences obtained between estimated 3D elevation maps with and without guides (illustrated below as symmetrical 3D objects), there is no comparison: we go from a very round boxing glove to a much more detailed 3D hand estimation!</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_hand3d_anim_all.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_hand3d_anim_all.gif" alt="Estimated 3D elevations with and without guides"/>
</a>
<figcaption>
Fig. 2.5: Estimated 3D elevations for the preceding drawing of a hand, with and without the use of “guides”.
</figcaption>
</figure>

<p>Finally, note that this filter also has an interactive preview mode, allowing the user to move the light source (with the mouse) and have a preview of the drawing illuminated in real time. By modifying the position parameters of the light source, it is thus possible to obtain the type of animations below in a very short time, which gives a fairly accurate idea of the 3D structure estimated by the algorithm from the original drawing.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_hand.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_hand.gif" alt="light animation"/>
</a>
<figcaption>
Fig. 2.6: Modification of the position of the light source and associated illumination renderings, calculated automatically by G’MIC.
</figcaption>
</figure>

<p>A video showing the various ways this filter lets you edit the illumination is <a href="https://www.youtube.com/watch?v=G1wYSJTsVtI">visible here</a>. We hope this new feature of G’MIC will help artists accelerate the illumination and shading stage of their future drawings!</p>
<h2 id="3-stereographic-projection"><a href="#3-stereographic-projection" class="header-link-alt">3. Stereographic projection</a></h2>
<p>In a completely different genre, we have also added a filter implementing <a href="https://en.wikipedia.org/wiki/Stereographic_projection">stereographic projection</a>, suitably named “<strong>Stereographic projection</strong>“. This type of cartographic projection makes it possible to map image data defined on a plane onto a sphere. It is also the usual projection used to generate “mini-planet” images from equirectangular panoramas, like the one illustrated in the figure below.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_stereographic0.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_stereographic0.png" alt="equirectangular panorama"/>
</a>
<figcaption>
Fig. 3.1: Example of equirectangular panorama (created by <a href="https://www.flickr.com/photos/gadl">Alexandre Duret-Lutz</a>).
</figcaption>
</figure>
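To see how a “mini-planet” is built, it helps to recall the underlying math: each pixel of the output plane is mapped back onto the sphere, and the resulting latitude/longitude pair indexes the equirectangular panorama. The sketch below shows one common formulation of that inverse mapping; it is purely illustrative and may not match the exact conventions G’MIC uses.

```python
import math

def inverse_stereographic(x, y, radius=1.0):
    """Map a point (x, y) of the projection plane (origin at the
    projection centre) back to (latitude, longitude) on the sphere.
    Each output pixel of a 'mini-planet' uses this to fetch a source
    pixel from the equirectangular panorama."""
    r = math.hypot(x, y)
    lon = math.atan2(y, x)
    lat = math.pi / 2 - 2.0 * math.atan2(r, 2.0 * radius)
    return lat, lon

# The projection centre maps back to the pole (latitude 90 degrees),
print(math.degrees(inverse_stereographic(0.0, 0.0)[0]))  # 90.0
# and points at distance 2*radius map to the equator (latitude 0).
print(math.degrees(inverse_stereographic(2.0, 0.0)[0]))  # 0.0
```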

<p>If we launch the <em>G’MIC</em> plugin with this panorama and select the filter “<strong>Stereographic projection</strong>“, we get:</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_stereographic.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_stereographic.png" alt="Filter 'Stereographic projection'"/>
</a>
<figcaption>
Fig. 3.2: The “<strong>Stereographic projection</strong>“ filter of G’MIC in action using the plugin for GIMP or Krita.
</figcaption>
</figure>

<p>The filter allows precise adjustments of the projection center, the rotation angle, and the radius of the sphere, all interactively displayed directly on the preview window (we will come back to this later). In a few clicks, and after applying the filter, we get the desired “mini-planet”:</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_stereographic3.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_stereographic3.png" alt="Mini-planet"/>
</a>
<figcaption>
Fig. 3.3: “Mini-planet” obtained after stereographic projection.
</figcaption>
</figure>

<p>It is also intriguing to note that simply by reversing the vertical axis of the image, we transform a “mini-planet” into a “maxi-tunnel”!</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_tunnel.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_tunnel.png" alt="Max-tunnel"/>
</a>
<figcaption>
Fig. 3.4: “Maxi-tunnel” obtained by inversion of the vertical axis then stereographic projection.
</figcaption>
</figure>

<p>Again, we made <a href="https://www.youtube.com/watch?v=5BYV1lwuF3w">this short video</a> which shows this filter used in practice. Note that <em>G’MIC</em> already had a similar filter (called “<strong>Sphere</strong>“), which could be used for the creation of “mini-planets”, but with a type of projection less suitable than the stereographic projection now available.</p>
<h2 id="4-even-more-possibilities-for-color-manipulation"><a href="#4-even-more-possibilities-for-color-manipulation" class="header-link-alt">4. Even more possibilities for color manipulation</a></h2>
<p>Manipulating the colors of images is a recurring occupation among photographers and illustrators, and <em>G’MIC</em> already had several dozen filters for this particular activity - grouped in a dedicated category (the originally named “<strong>Colors</strong>“ category!). This category is still growing, with two new filters having recently appeared:</p>
<ul>
<li>The “<strong>CLUT from after-before layers</strong>“ filter tries to model the color transformation performed between two images. For example, suppose we have the following pair of images:</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/wc_trophy01.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/wc_trophy01.png" alt="Image pair"/>
</a>
<figcaption>
Fig. 4.1: Pair of images where an unknown colorimetric transformation has been applied to the top image to obtain the bottom one.
</figcaption>
</figure>

<p><strong>Problem</strong>: we do not remember at all how we went from the original image to the modified one, but we would like to apply the same process to another image. Well, worry no more, call <em>G’MIC</em> to the rescue! The filter in question will seek to model, as faithfully as possible, the color modification in the form of a <a href="http://www.quelsolaar.com/technology/clut.html"><em>HaldCLUT</em></a>, which happens to be a classic way to represent any colorimetric transformation.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_clut_from_ab.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_clut_from_ab.png" alt="Filter 'CLUT from after-before layers'"/>
</a>
<figcaption>
Fig. 4.2: The filter models the color transformation between two images as a HaldCLUT.
</figcaption>
</figure>

<p>The <em>HaldCLUT</em> generated by the filter can be saved and re-applied on other images, with the desired property that the application of the <em>HaldCLUT</em> on the original image produces the target model image originally used to learn the transformation.
From there, we are able to apply an equivalent color change to any other image:</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/pink_car_all.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/pink_car_all.png" alt="HaldCLUT applied on another image"/>
</a>
<figcaption>
Fig. 4.3: The estimated color transformation in the form of HaldCLUT is re-applied to another image.
</figcaption>
</figure>

<p>This filter makes it possible in the end to create <em>HaldCLUT</em> “by example”, and could therefore interest many photographers (in particular those who distribute compilations of <em>HaldCLUT</em> files, <a href="https://rawpedia.rawtherapee.com/Film_Simulation">freely</a> or otherwise!).</p>
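To see why a CLUT can encode an arbitrary colorimetric transformation, view it as a 3D lookup table: every RGB value indexes a cell of a color cube that stores the transformed color (the HaldCLUT format simply packs such a cube into a 2D image). The toy Python sketch below uses a tiny cube and nearest-entry lookup; real implementations use much larger cubes and interpolate between entries, and this is not G’MIC’s code.

```python
def apply_clut(rgb, clut, size):
    """Transform an RGB triplet through a size x size x size color cube
    (nearest-entry lookup; real implementations interpolate)."""
    r, g, b = (min(size - 1, round(c / 255 * (size - 1))) for c in rgb)
    return clut[(r * size + g) * size + b]

# A toy 2x2x2 identity CLUT: every lattice entry stores its own color.
size = 2
identity = [(r * 255, g * 255, b * 255)
            for r in range(size) for g in range(size) for b in range(size)]
print(apply_clut((250, 3, 252), identity, size))  # (255, 0, 255)
```

"Learning" a CLUT from an after/before layer pair then amounts to filling each cell of the cube from the color correspondences observed between the two images.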
<ul>
<li>A second color manipulation filter, named “<strong>Mixer [PCA]</strong>“ was also recently integrated into <em>G’MIC</em>. It acts as a classic <a href="https://docs.gimp.org/en/plug-in-colors-channel-mixer.html">color channel mixer</a>, but rather than working in a predefined color space (like sRGB, HSV, Lab…), it acts on the “natural” color space of the input image, obtained by <a href="https://en.wikipedia.org/wiki/Principal_component_analysis">principal component analysis</a> (PCA) of its <em>RGB</em> colors. Thus each image will be associated with a different color space. For example, if we take the “lion” image below and look at the distribution of its colors in the <em>RGB</em> cube (<em>right image</em>), we see that the main axis of color variation is defined by a straight line from dark orange to light beige (axis symbolized by the <em>red arrow</em> in the figure).</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_mix_pca2.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_mix_pca2.png" alt="PCA of RGB colors"/>
</a>
<figcaption>
Fig. 4.4: Distribution of colors from the “lion” image in the RGB cube, and associated main axes (colorized in red, green and blue).
</figcaption>
</figure>

<p>The secondary axis of variation (<em>green arrow</em>) goes from blue to orange, and the tertiary axis (<em>blue arrow</em>) from green to pink. It is these axes of variation (rather than the <em>RGB</em> axes) that will define the color basis used in this channel mix filter.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_mix_pca.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_mix_pca.png" alt="Filter 'Mixer [PCA]'"/>
</a>
<figcaption>
Fig. 4.5: The “<strong>Mixer [PCA]</strong>“ filter is a channel mixer acting on the axes of “natural” color variations of the image.
</figcaption>
</figure>

<p>It would be wrong to suggest that it is always better to consider the color basis obtained by <em>PCA</em> for the mixing of channels, and this new filter is obviously not intended to be the “ultimate” mixer that would replace all others. It simply exists as an alternative to the usual tools for mixing color channels, an alternative whose results proved to be quite interesting in tests of several images used during the development of this filter. It does no harm to try in any case…</p>
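For the curious, the kind of principal axis shown in Fig. 4.4 can be estimated with a simple power iteration on the covariance matrix of the RGB samples. This is only an illustrative sketch with made-up sample colors; G’MIC presumably uses a full eigendecomposition rather than this minimal approach.

```python
def principal_axis(colors, iters=100):
    """Dominant axis of color variation: power iteration on the 3x3
    covariance matrix of a list of (r, g, b) samples."""
    n = len(colors)
    mean = [sum(c[i] for c in colors) / n for i in range(3)]
    cov = [[sum((c[i] - mean[i]) * (c[j] - mean[j]) for c in colors) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        # Repeatedly multiply by the covariance matrix and renormalize:
        # v converges to the eigenvector with the largest eigenvalue.
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# These made-up samples vary mostly along red, so the dominant axis
# comes out close to (1, 0, 0).
samples = [(0, 10, 10), (100, 12, 9), (200, 11, 11), (255, 9, 10)]
print([round(abs(a), 2) for a in principal_axis(samples)])
```

Projecting the colors onto this axis (and the two axes orthogonal to it) gives the "natural" channels that the mixer then lets you rebalance.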
<h2 id="5-filter-mishmash"><a href="#5-filter-mishmash" class="header-link-alt">5. Filter mishmash</a></h2>
<p>This section is about a few other filters improved or included lately in <em>G’MIC</em> which deserve to be talked about, without dwelling too much on them.</p>
<ul>
<li><p>The &ldquo;<strong>Local processing</strong>&rdquo; filter applies a color normalization or equalization process on local image neighborhoods (with possible overlap). This is an additional filter for bringing out details in under- or over-exposed photographs, but it may create strong, unpleasant halo artefacts with non-optimal parameters.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_local_processing.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_local_processing.png" alt="Filter 'Local processing'"/>
</a>
<figcaption>
Fig. 5.1: The new filter &ldquo;<strong>Local processing</strong>&rdquo; enhances details and contrast in under- or over-exposed photographs.
</figcaption>
</figure>
</li>
<li><p>If you think that the number of layer blending modes available in <em>GIMP</em> or <em>Krita</em> is not enough, and dream about defining your own blending mode formula, then the recent improvement of the <em>G’MIC</em> filter « <strong>Blend [standard]</strong> » will please you! This filter now gets a new option « <em>Custom formula</em> » allowing the user to specify their own <a href="http://www.pegtop.net/delphi/articles/blendmodes/">mathematical formula</a> when blending two layers together. All of your blending wishes become possible!</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_blend_custom.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_blend_custom.png" alt="Filter 'Blend (standard)''"/>
</a>
<figcaption>
Fig. 5.2: The “<strong>Blend [standard]</strong>“ filter now allows definition of mathematical formulas for layer merging.
</figcaption>
</figure>
</li>
<li><p>Also note the complete re-implementation of the nice “<strong>Sketch</strong>“ filter, which had existed for several years but could be a little slow on large images. The new implementation is much faster, taking advantage of multi-core processing when possible.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_sketch.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_sketch.png" alt="Filter 'Sketch'"/>
</a>
<figcaption>
Fig. 5.3: The “<strong>Sketch</strong>“ filter has been re-implemented and now exploits all available compute cores.
</figcaption>
</figure>
</li>
<li><p>A large amount of work has also gone into the re-implementation of the “<strong>Mandelbrot - Julia sets</strong>“ filter, since the navigation interface has been entirely redesigned, making exploration of the <a href="https://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot set</a> much more comfortable (as illustrated by this <a href="https://youtu.be/wZv3BQF00gA">video</a>). New options for choosing colors have also appeared.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_mandelbrot.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_mandelbrot.png" alt="Filtre Mandelbrot - Julia sets"/>
</a>
<figcaption>
Fig. 5.4: The “<strong>Mandelbrot - Julia sets</strong>“ filter and its new navigation interface in the complex space.
</figcaption>
</figure>
</li>
<li><p>In addition, the “<strong>Polygonize [Delaunay]</strong>“ filter that generates polygonized renderings of color images has a new rendering mode, using linearly interpolated colors in the <a href="https://en.wikipedia.org/wiki/Delaunay_triangulation">Delaunay triangles</a> produced.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/delaunay_all.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/delaunay_all.png" alt="Filtre 'Polygonize (Delaunay)'"/>
</a>
<figcaption>
Fig. 5.5: The different rendering modes of the “<strong>Polygonize [Delaunay]</strong>“ filter.
</figcaption>
</figure>

</li>
</ul>
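To make the « <em>Custom formula</em> » option of the “<strong>Blend [standard]</strong>“ filter above more concrete: a custom blend mode is just an expression evaluated for every pair of superimposed pixels. Here is a minimal Python analogue of that idea; G’MIC evaluates the formula with its own math parser, and the “screen” formula used below is a standard textbook example, not taken from the filter.

```python
def blend(top, bottom, formula):
    """Blend two single-channel layers (values in [0, 1]) by applying
    a user-supplied two-argument formula to each pixel pair."""
    return [[formula(t, b) for t, b in zip(rt, rb)]
            for rt, rb in zip(top, bottom)]

# The classic 'screen' mode written as a custom formula: 1 - (1-a)(1-b).
screen = lambda a, b: 1 - (1 - a) * (1 - b)
print(blend([[0.5, 0.0]], [[0.5, 1.0]], screen))  # [[0.75, 1.0]]
```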
<h2 id="6-other-important-highlights"><a href="#6-other-important-highlights" class="header-link-alt">6. Other important highlights</a></h2>
<h3 id="6-1-improvements-of-the-plug-in"><a href="#6-1-improvements-of-the-plug-in" class="header-link-alt">6.1. Improvements of the plug-in</a></h3>
<p>Of course, the new features in <em>G’MIC</em> are not limited to just image processing filters! For instance, a lot of work has been done on the graphical interface of the plug-in <em>G’MIC-Qt</em> for <em>GIMP</em> and <em>Krita</em>:</p>
<ul>
<li>Plug-in filters can now define a new type of parameter, <code>point()</code>, which displays as a small colored circle over the preview window that the user can drag with the mouse. This gives the preview widget a completely new type of user interaction, which is no small thing! A lot of filters already use this feature, making them more intuitive and pleasant to use (see <a href="https://www.youtube.com/watch?v=iQ0ZEmsDErY">this video</a> for some examples). The animation below shows, for instance, how these new interactive points have been used in the filter « <strong>Stereographic projection</strong> » described in a previous section.</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_point_anim.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_point_anim.gif" alt="Interactive preview window"/>
</a>
<figcaption>
Fig. 6.1: The preview window of the G’MIC-Qt plug-in gets new user interaction abilities.
</figcaption>
</figure>

<ul>
<li>In addition, introducing these interactive points has made it possible to improve the split preview modes, available in many filters to display the « <em>before / after</em> » views side by side while setting the filter parameters in the plug-in. It is now possible to move the « <em>before / after</em> » separator, as illustrated by the animation below. Two new splitting modes (« <em>Checkered</em> » and « <em>Inverse checkered</em> ») have also been added.</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_preview_anim.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_preview_anim.gif" alt="Division de prévisualisation interactive"/>
</a>
<figcaption>
Fig. 6.2: The division modes of the preview now have a moveable “before / after” boundary.
</figcaption>
</figure>

<p>A lot of other improvements have been made to the plug-in: support for the most recent version of <em>GIMP</em> (<strong>2.10</strong>) and for <em>Qt 5.11</em>, improved handling of the error messages displayed over the preview widget, a cleaner interface design, and other small changes under the hood which are not necessarily visible but slightly improve the user experience (e.g. an image cache mechanism for the preview widget). In short, that’s pretty good!</p>
<h3 id="6-2-improvements-in-the-software-core"><a href="#6-2-improvements-in-the-software-core" class="header-link-alt">6.2. Improvements in the software core</a></h3>
<p>Some new refinements of the <em>G’MIC</em> computational core have been done recently:</p>
<ul>
<li><p>The &ldquo;standard library&rdquo; of the <em>G’MIC</em> script language was given new commands for computing the inverse hyperbolic functions (<code>acosh</code>, <code>asinh</code> and <code>atanh</code>), as well as a command <code>tsp</code> (<em><strong>t</strong>ravelling <strong>s</strong>alesman <strong>p</strong>roblem</em>) which estimates an acceptable solution to the well-known <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem">travelling salesman problem</a>, for a point cloud of any size and dimension.</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/tsp_lena.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/tsp_lena.png" alt="Travelling salesman problem in 2D"/>
</a>
<figcaption>
Fig. 6.3: Estimating the shortest route between hundreds of 2D points, with the G’MIC command <code>tsp</code>.
</figcaption>
</figure>

<figure>
<a href="http://gmic.eu/gmic234/fullsize/tsp3d.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/tsp3d.gif" alt="Travelling salesman problem in 3D"/>
</a>
<figcaption>
Fig. 6.4: Estimating the shortest route between several colors in the RGB cube (thus in 3D), with the G’MIC command <code>tsp</code>.
</figcaption>
</figure>
</li>
<li><p>The demonstration window, which appears when <code>gmic</code> is run without any arguments from the command line, has also been redesigned from scratch.</p>
</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_demo.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_demo.gif" alt="Demonstration window"/>
</a>
<figcaption>
Fig. 6.5: The new demonstration window of <code>gmic</code>, the command line interface of G’MIC.
</figcaption>
</figure>
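As for the <code>tsp</code> command shown in Figs. 6.3 and 6.4: the travelling salesman problem can only be solved approximately for large point clouds. The sketch below shows the simplest heuristic of this kind, a greedy nearest-neighbour tour, purely for illustration; G’MIC’s <code>tsp</code> certainly uses a more elaborate strategy to reach better tours.

```python
import math

def tsp_route(points):
    """Greedy nearest-neighbour tour: repeatedly hop to the closest
    unvisited point. A crude approximation of the travelling
    salesman problem."""
    todo = list(points[1:])
    route = [points[0]]
    while todo:
        last = route[-1]
        nxt = min(todo, key=lambda p: math.dist(last, p))
        todo.remove(nxt)
        route.append(nxt)
    return route

pts = [(0, 0), (5, 5), (0, 1), (5, 4)]
print(tsp_route(pts))  # [(0, 0), (0, 1), (5, 4), (5, 5)]
```

The same logic applies unchanged in any dimension, which is how a tour can be traced through colors in the 3D RGB cube as well as through 2D points.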

<ul>
<li>The embedded <em>JIT</em> compiler used for the evaluation of mathematical expressions has not been left out and was given new functions to draw polygons (function <code>polygon()</code>) and ellipses (function <code>ellipse()</code>) in images. These mathematical expressions can in fact define small programs (with local variables, user-defined functions and control flow). One can for instance easily generate synthetic images from the command line, as shown by the two examples below.</li>
</ul>
<h4 id="example-1"><a href="#example-1" class="header-link-alt">Example 1</a></h4>
<pre><code class="lang-sh">$ gmic 400,400,1,3 eval &quot;for (k = 0, k&lt;300, ++k, polygon(3,u([vector10(0)],[w,h,w,h,w,h,0.5,255,255,255])))&quot;
</code></pre>
<p><strong>Result</strong>:</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_polygon.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_polygon.png" alt="Function 'polygon()''"/>
</a>
<figcaption>
Fig. 6.6: Using the new function <code>polygon()</code> from the G’MIC JIT compiler, to render a synthetic image made of random triangles.
</figcaption>
</figure>

<h4 id="example-2"><a href="#example-2" class="header-link-alt">Example 2</a></h4>
<pre><code class="lang-sh">$ gmic 400,400,1,3 eval &quot;for (k=0, k&lt;20, ++k, ellipse(w/2,h/2,w/2,w/8,k*360/20,0.1,255))&quot;
</code></pre>
<p><strong>Result</strong>:</p>
<figure>
<a href="http://gmic.eu/gmic234/fullsize/gmic_ellipse.png">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_ellipse.png" alt="Function 'ellipse()''"/>
</a>
<figcaption>
Fig. 6.7: Using the new function <code>ellipse()</code> from the G’MIC JIT compiler, to render a synthetic flower image.
</figcaption>
</figure>

<ul>
<li>Note also that <a href="https://en.wikipedia.org/wiki/NaN"><code>NaN</code> values</a> are now better managed in the core computations, meaning <em>G’MIC</em> maintains coherent behavior even when it has been compiled with the optimization flag <code>-ffast-math</code>. Thus, <em>G’MIC</em> can now be compiled flawlessly with the maximum optimization level <code>-Ofast</code> supported by the compiler <code>g++</code>, whereas we were restricted to <code>-O3</code> before. The improvement in computation speed is clearly visible for some of the filters!</li>
</ul>
<h3 id="6-3-distribution-channels"><a href="#6-3-distribution-channels" class="header-link-alt">6.3. Distribution channels</a></h3>
<p>A lot of changes have also been made to the distribution channels used by the project:</p>
<ul>
<li><p>First of all, the project web pages (which now use secure <code>https</code> connections by default) have a new <a href="http://gmic.eu/gallery">image gallery</a>. This gallery shows both filtered image results from <em>G’MIC</em> and the way to reproduce them (from the command line). Note that these gallery pages are automatically generated by a dedicated <em>G’MIC</em> script, which ensures the displayed command syntax is correct.</p>
<figure>
<a href="https://gmic.eu/gallery">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/gmic_gallery.png" alt="galerie d'image"/>
</a>
<figcaption>
Fig. 6.8: The new image gallery on the G’MIC web site.
</figcaption>
</figure>

</li>
</ul>
<p>This gallery is split into several sections, depending on the type of processing done (<em>Artistic, Black &amp; White, Deformations, Filtering, etc.</em>). The last section <a href="https://gmic.eu/gallery/codesamples.shtml">« <strong>Code sample</strong> »</a> is my personal favorite, as it exhibits small animations (shown as looping animated <em>GIFs</em>) which have been completely generated from scratch by short scripts, written in the <em>G’MIC</em> language. Quite a surprising use of <em>G’MIC</em> that shows its potential for <a href="https://en.wikipedia.org/wiki/Generative_art">generative art</a>.</p>
<figure>
<a href="https://gmic.eu/gallery/codesamples_full_3.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/codesamples_thumb_3.gif" alt="Code sample1"/>
</a>
<a href="https://gmic.eu/gallery/codesamples_full_4.gif">
<img src="https://pixls.us/blog/2018/08/g-mic-2-3-6/codesamples_thumb_4.gif" alt="Code sample2"/>
</a>
<figcaption>
Fig. 6.9: Two small GIF animations generated by G’MIC scripts, visible in the new image gallery.
</figcaption>
</figure>

<ul>
<li>We have also moved the main <em>git</em> source repository of the project to <a href="https://framagit.org/dtschump/gmic">Framagit</a>, while keeping a synchronized mirror on <em>Github</em> at the same place as before (many developers already have an account on <em>Github</em>, which makes it easier for them to fork the project and file bug reports).</li>
</ul>
<h2 id="7-conclusions-and-perspectives"><a href="#7-conclusions-and-perspectives" class="header-link-alt">7. Conclusions and Perspectives</a></h2>
<p>Voilà! Our tour of news (and the last six months of work) on the G’MIC project comes to an end.</p>
<p>We are happy to be celebrating 10 years with the creation and evolution of this Free Software project, and to be able to share with everyone all of these advanced image processing techniques. We hope to continue doing so for many years to come!</p>
<p>Note that next year, we will also be celebrating the <em>20th anniversary</em> of <a href="http://cimg.eu"><em>CImg</em></a>, the <em>C++</em> image processing library (started in November 1999) on which the <em>G’MIC</em> project is based, proof that interest in free software is enduring.</p>
<p>As we wait for the next release of <em>G’MIC</em>, don’t hesitate to test the current version. Freely and creatively play with and manipulate your images to your heart’s content!</p>
<p><strong>Thank you, Translators:</strong> (ChameleonScales, Pat David)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[From Russia with Love]]></title>
            <link>https://pixls.us/articles/from-russia-with-love/</link>
            <guid isPermaLink="true">https://pixls.us/articles/from-russia-with-love/</guid>
            <pubDate>Mon, 23 Jul 2018 22:03:11 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/from-russia-with-love/PA150205.jpg" /><br/>
                <h1>From Russia with Love</h1> 
                <h2>An Interview with Photographer Ilya Varivchenko</h2>  
                <p><a href="http://www.varivchenko.com/" title="Ilya Varivchenko&#39;s Website">Ilya Varivchenko</a> is a fashion and portrait photographer from Ivanovo, Russian Federation.  He’s a UNIX administrator with a long-time passion for photography that has now become a second part-time job for him.  Working on location and in his studio, he’s been producing <a href="http://varivchenko.com/blog/" title="Ilya Varivchenko&#39;s Blog">a wonderful body of work</a> specializing in portraiture, model tests, and more.</p>
<p>He’s a member of the community here (@viv), and he was kind enough to spare some time and answer a few questions (plus it gives me a good excuse to showcase some of his great work!).</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/P8170097.jpg" width='760' height='705' alt='by Ilya Varivchenko'>
</figure>


<h3 id="much-of-your-work-feels-very-classical-in-posing-and-light-particularly-your-studio-portraits-what-would-you-say-are-your-biggest-influences-">Much of your work feels very classical in posing and light, particularly your studio portraits.  What would you say are your biggest influences?<a href="#much-of-your-work-feels-very-classical-in-posing-and-light-particularly-your-studio-portraits-what-would-you-say-are-your-biggest-influences-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I am influenced by several classical painters and great modern photographers.   Some of them are: <a href="https://en.wikipedia.org/wiki/Patrick_Demarchelier">Patrick Demarchelier</a>, <a href="https://en.wikipedia.org/wiki/Steven_Meisel">Steven Meisel</a> and  <a href="https://en.wikipedia.org/wiki/Peter_Lindbergh">Peter Lindbergh</a>.
The general mood is defined by what I see around me. Russia is a very neglected but beautiful country, and the women around are an inexhaustible source of inspiration.</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/P8120159-1.jpg" width='760' height='1013' alt='by Ilya Varivchenko'>
</figure>


<h3 id="how-would-you-describe-your-own-style-overall-">How would you describe your own style overall?<a href="#how-would-you-describe-your-own-style-overall-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>My style is certainly classic portraiture in a modern rendition.</p>
<h3 id="what-motivates-you-when-deciding-who-how-you-shoot-">What motivates you when deciding who/how you shoot?<a href="#what-motivates-you-when-deciding-who-how-you-shoot-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I usually plan shooting in advance. The range of models is rather narrow and it’s not so easy to get there. However, I am constantly looking for new faces.  I choose the style and direction of a particular shooting based on my vision of the model and the current mood.</p>
<h3 id="why-portraits-what-about-portraiture-draws-you-to-it-">Why portraits?  What about portraiture draws you to it?<a href="#why-portraits-what-about-portraiture-draws-you-to-it-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I shoot portraits because people interest me. For me, photography is an instrument of knowing people and a means of communication.</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/PB120214.jpg" width='760' height='818' alt='by Ilya Varivchenko'>
</figure>


<h3 id="if-you-had-to-pick-your-own-favorite-3-photographs-of-your-work-which-ones-would-you-choose-and-why-">If you had to pick your own favorite 3 photographs of your work, which ones would you choose and why?<a href="#if-you-had-to-pick-your-own-favorite-3-photographs-of-your-work-which-ones-would-you-choose-and-why-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It’s difficult to choose only three photographs, but maybe these:</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/991755.jpg" width='760' height='615' alt='by Ilya Varivchenko'>
<figcaption>
This photo was chosen by Olympus as a logo for their series of photo events in Russia 2017.
</figcaption>
</figure>

<figure>
<img src='http://35photo.ru/photos_series/893/893928.jpg' width='760' height='602' alt='by Ilya Varivchenko'>
<figcaption>
This is one of my most reproduced photos. ;)
</figcaption>
</figure>

<figure>
<img src='http://35photo.ru/photos_main/129/647634.jpg' width='760' height='572' alt='by Ilya Varivchenko'>
<figcaption>
This photo has a perfect mood in my opinion.
</figcaption>
</figure>


<h3 id="if-you-had-to-pick-3-favorite-images-from-someone-else-which-ones-would-you-choose-and-why-">If you had to pick 3 favorite images from someone else, which ones would you choose and why?<a href="#if-you-had-to-pick-3-favorite-images-from-someone-else-which-ones-would-you-choose-and-why-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It is very difficult to choose only three photos. The choice in any case will be incomplete, but here are the first ones that come to mind:</p>
<ol>
<li>The portrait of Heather Stewart-Whyte by <a href="http://friedemannhauss.eu/">Friedemann Hauss</a>:  </li>
</ol>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/heather-stewart-whyte.jpg" width='454' height='650' alt='by Ilya Varivchenko'>
</figure>


<ol start="2">
<li>The portrait of Monica Bellucci by <a href="http://www.chicobialas.com/">Chico Bialas</a>:</li>
</ol>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/monica-bellucci.jpg" width='564' height='834' alt='by Ilya Varivchenko'>
</figure>

<ol start="3">
<li>The portrait of Nicole Kidman by Patrick Demarchelier:</li>
</ol>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/nicole-kidman.jpg" width='497' height='720' alt='by Ilya Varivchenko'>
</figure>




<h3 id="how-do-you-find-your-models-usually-">How do you find your models usually?<a href="#how-do-you-find-your-models-usually-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Via social media, which is the best means of searching for models; but if I meet a girl I really like in the street, I can try to talk to her straight away.
In fact, the problem is not finding a model, but how to turn down a request without offending a prospective model who is of no interest to me.</p>
<h3 id="do-you-pre-visualize-and-plan-your-shoots-ahead-of-time-usually-or-is-there-a-more-organic-interaction-with-the-model-and-the-space-you-re-shooting-in-">Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?<a href="#do-you-pre-visualize-and-plan-your-shoots-ahead-of-time-usually-or-is-there-a-more-organic-interaction-with-the-model-and-the-space-you-re-shooting-in-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It’s always good to have a plan. It is also very good to have a spare plan.</p>
<p>Usually I discuss some common points with the model and stylist before shooting. But these plans are more connected with the mood  and the general idea of the session. So when the magic of shooting begins, usually all the plans fly to hell. ;)</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/P7290039.jpg" width='760' height='1089' alt='by Ilya Varivchenko'>
</figure>



<h3 id="do-you-have-a-shooting-assistant-with-you-or-is-normally-just-you-and-the-model-">Do you have a shooting assistant with you, or is it normally just you and the model?<a href="#do-you-have-a-shooting-assistant-with-you-or-is-normally-just-you-and-the-model-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The preparatory stage of shooting often requires participation of many people: a makeup artist, a hair stylist, etc., but shooting itself goes better when only two persons are involved. This is a fairly intimate process. Just like sex. :)</p>
<p>On the other hand, if we do a fashion shoot on order, then the presence of the customer representatives is a must.</p>
<h3 id="many-shots-have-a-strong-editorial-fashion-feel-to-them-are-those-works-for-magazine-editorial-use-or-were-they-personal-works-you-were-planning-to-be-that-way-">Many shots have a strong editorial fashion feel to them: are those works for magazine/editorial use - or were they personal works you were planning to be that way?<a href="#many-shots-have-a-strong-editorial-fashion-feel-to-them-are-those-works-for-magazine-editorial-use-or-were-they-personal-works-you-were-planning-to-be-that-way-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I take pictures for local magazines and advertising agencies sometimes. Maybe it somehow influenced my other work.</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/DSCF2590.jpg" width='760' height='555' alt='by Ilya Varivchenko'>
<img src="https://pixls.us/articles/from-russia-with-love/P2100235.jpg" width='760' height='1011' alt='by Ilya Varivchenko'>
<img src="https://pixls.us/articles/from-russia-with-love/xP2220139.jpg" width='760' height='958' alt='by Ilya Varivchenko'>
</figure>


<h3 id="what-do-you-do-with-the-photos-you-shoot-">What do you do with the photos you shoot?<a href="#what-do-you-do-with-the-photos-you-shoot-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Most of my works are for personal use.
However, I often print them in large format, and I’ve also had two solo exhibitions. Prints of my work are sold and can always be ordered. I also publish in photo magazines sometimes, but these magazines are Russian ones, so they are hardly known to you.</p>
<p>By the way: I periodically take part in the events held by <a href="https://www.olympus.com.ru/">Olympus Russia</a>, where I demonstrate my workflow. </p>
<p>This video shows that I use RawTherapee as a raw converter. :)</p>
<div>
    <div class='fluid-vid'>
        <iframe src="https://www.youtube-nocookie.com/embed/t3QUVFkO0lU" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
    </div>
</div>

<h3 id="you-re-shooting-on-olympus-gear-quite-a-bit-are-you-officially-affiliated-with-olympus-in-some-way-">You’re shooting on Olympus gear quite a bit, are you officially affiliated with Olympus in some way?<a href="#you-re-shooting-on-olympus-gear-quite-a-bit-are-you-officially-affiliated-with-olympus-in-some-way-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>On occasion I hold workshops as part of Olympus’s marketing activities. Sometimes Olympus provides me with their products for testing, and I am expected to follow up with a review.</p>
<h3 id="is-your-choice-to-use-free-software-for-pragmatic-reasons-or-more-idealistic-">Is your choice to use Free Software for pragmatic reasons, or more idealistic?<a href="#is-your-choice-to-use-free-software-for-pragmatic-reasons-or-more-idealistic-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The choice was dictated by purely practical considerations. I found a tool whose results I am almost completely satisfied with: detail rendering, for example, is outstanding, color grading is comfortable to work with, black and white conversion is excellent, and much more.</p>
<p>The fact that the product is free and (which is more important to me) I have an opportunity to communicate with its developers is a huge plus!</p>
<p>For example, with the release of the Fuji X-T20, when a new DCP profile needed to be added to the converter, I simply contacted the developers, shot the test target, and got what I wanted.</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/DSCF7742.jpg" width='760' height='643' alt='by Ilya Varivchenko'>
</figure>


<h3 id="would-you-describe-your-workflow-a-bit-which-projects-do-you-use-regularly-">Would you describe your workflow a bit? Which projects do you use regularly?<a href="#would-you-describe-your-workflow-a-bit-which-projects-do-you-use-regularly-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>My workflow is quite simple:</p>
<ol>
<li><p>Shooting.
I try to shoot in a way which will not require heavy postprocessing at all. It is much easier to set up light properly than to fix it in Photoshop later.</p>
</li>
<li><p>Raw development with <a href="http://rawtherapee.com/" title="RawTherapee website">RawTherapee</a>.
My goal is to develop the image in a way which makes it as close to final as possible.
Sometimes this is the end of my workflow. ;)</p>
</li>
<li><p>Color correction (if necessary) with 3DLutCreator.
In rare cases, it is more convenient to make complex color correction with the help of LUTs.</p>
</li>
<li><p>Retouching with Adobe Photoshop. 
Nothing special. Removal of skin and hair defects, etc. Dodge and burn technique with a Wacom Intuos Pro.</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/PA030154.jpg" width='760' height='569' alt='by Ilya Varivchenko'>
</figure>

<h3 id="speaking-of-gear-what-are-you-shooting-with-currently-">Speaking of gear, what are you shooting with currently?<a href="#speaking-of-gear-what-are-you-shooting-with-currently-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I have two systems now: Micro Four Thirds system from Olympus and X Series from Fujifilm.
Typical setups are:</p>
<p>Studio: Olympus PEN-F + Panasonic G 42.5/1.7
Plein air: Olympus PEN-F + M.Zuiko 75/1.8 or FujiFilm X-T20 + Fujinon 35/1.4</p>
<h3 id="many-of-your-images-appear-make-great-use-of-natural-light-for-your-studio-lighting-setup-what-type-of-lighting-gear-are-you-using-">Many of your images appear to make great use of natural light. For your studio lighting setup, what type of lighting gear are you using?<a href="#many-of-your-images-appear-make-great-use-of-natural-light-for-your-studio-lighting-setup-what-type-of-lighting-gear-are-you-using-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>My studio equipment is a mix of Aurora Codis and Bowens studio lights, plus a lot of modifiers, from a large 2-meter parabolic octobox to narrow 40x150 strip boxes and so on.</p>
<figure>
<img src="https://pixls.us/articles/from-russia-with-love/PA010019.jpg" width='760' height='1062' alt='by Ilya Varivchenko'>
<img src="https://pixls.us/articles/from-russia-with-love/P1040535.jpg" width='760' height='1012' alt='by Ilya Varivchenko'>
</figure>


<h3 id="is-there-something-outside-your-comfort-zone-you-wish-you-could-try-shoot-more-of-">Is there something outside your comfort zone you wish you could try/shoot more of?<a href="#is-there-something-outside-your-comfort-zone-you-wish-you-could-try-shoot-more-of-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It is definitely landscape photography. And macro photography also attracts me: ants and snails are great models, in fact. :)</p>
<h3 id="what-is-one-piece-of-advice-you-would-offer-to-another-photographer-">What is one piece of advice you would offer to another photographer?<a href="#what-is-one-piece-of-advice-you-would-offer-to-another-photographer-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Find in yourself what you want to share with others. Beauty is in the eye of the beholder. No beautiful models will help if you are empty inside.</p>
<div>
    <div class='fluid-vid'>
        <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/f98-mSyyCRM" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
    </div>
</div>

<figure>
    <img src="https://pixls.us/articles/from-russia-with-love/P4300037.jpg" width='760' height='572' alt='by Ilya Varivchenko'>
</figure>

<hr>
<p>I want to thank Ilya for taking the time to chat with me!
Take some time to have a look through <a href="http://varivchenko.com/blog/">his blog and work</a> (it’s chock full of wonderful work)!</p>
<p><small>All images copyright Ilya Varivchenko and used with permission.</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Welcoming the gPhoto Project to the PIXLS.US community!]]></title>
            <link>https://pixls.us/blog/2018/07/welcoming-the-gphoto-project-to-the-pixls-us-community/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/07/welcoming-the-gphoto-project-to-the-pixls-us-community/</guid>
            <pubDate>Wed, 11 Jul 2018 18:22:32 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/07/welcoming-the-gphoto-project-to-the-pixls-us-community/carvac_cables.jpg" /><br/>
                <h1>Welcoming the gPhoto Project to the PIXLS.US community!</h1> 
                <h2>Helping the community one project at a time</h2>  
<p>A major goal of the PIXLS.US effort is to do whatever we can to help developers unburden themselves from administering their projects. We do this, in part, by providing forum hosting, participating in support, providing web design, and doing community outreach. With that in mind, we are excited to welcome the <a href="http://gphoto.org/">gPhoto Project</a> to our <a href="https://discuss.pixls.us/c/software/gphoto">discuss forum</a>!
</p>
<p><img src="https://pixls.us/blog/2018/07/welcoming-the-gphoto-project-to-the-pixls-us-community/entangle-interface.png" alt="The Entangle interface, which makes use of libgphoto">
<em>The Entangle interface, which makes use of <code>libgphoto</code>.</em></p>
<p>You may not have heard of gPhoto, but there is a high chance that you’ve used the project’s software. At the heart of the project is <code>libgphoto2</code>, a portable library that gives applications access to <a href="http://www.gphoto.org/proj/libgphoto2/support.php">hundreds of digital cameras</a>. On top of the foundational library is <code>gphoto2</code>, a command line interface to your camera that supports almost everything the library can do. The library is used in a bunch of awesome photography applications, such as <a href="https://digikam.org">digiKam</a>, <a href="https://darktable.org">darktable</a>, <a href="https://entangle-photo.org/">entangle</a>, and <a href="https://gimp.org">GIMP</a>. There is even a <a href="http://www.gphoto.org/proj/gphotofs/">FUSE module</a>, so you can mount your camera storage as a normal filesystem.</p>
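<p>To give a feel for the <code>gphoto2</code> command line interface mentioned above, here is a minimal sketch of a tethered session. The options shown are standard <code>gphoto2</code> ones, but the filename pattern is only an example, and the script deliberately exits quietly when the tool or a camera is not available:</p>

```shell
#!/bin/sh
# Hypothetical tethered-capture sketch using the gphoto2 CLI.
# Exits quietly when gphoto2 (or a connected camera) is absent.
command -v gphoto2 >/dev/null 2>&1 || { echo "gphoto2 not installed"; exit 0; }

gphoto2 --auto-detect                 # list connected, supported cameras

# Capture only if a camera actually answered the probe.
if gphoto2 --summary >/dev/null 2>&1; then
    gphoto2 --capture-image-and-download \
            --filename "capture-%Y%m%d-%H%M%S.%C"
fi
```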
<p>gPhoto was recruited to the PIXLS.US community when <a href="https://discuss.pixls.us/u/darix/summary">@darix</a> was sitting next to gPhoto developer Marcus. Marcus was using darix’s Fuji camera to test integration into <code>libgphoto</code>, and then the magic happened! Not only will some Fuji models be supported, but our community is growing larger. This is also a reminder that one person can make a huge difference. Thanks darix!</p>
<p>Welcome, gPhoto, and thank you for the years and years of development! </p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[(NSFW) What Stefan Sees]]></title>
            <link>https://pixls.us/articles/nsfw-what-stefan-sees/</link>
            <guid isPermaLink="true">https://pixls.us/articles/nsfw-what-stefan-sees/</guid>
            <pubDate>Fri, 04 May 2018 20:11:42 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/nsfw-what-stefan-sees/JP_SAS_4429.jpg" /><br/>
                <h1>(NSFW) What Stefan Sees</h1> 
                <h2>An Interview with Photographer Stefan Schmitz</h2>  
                <p><a href="https://whatstefansees.com/" title="what stefan sees - sensual &amp; nude photography, Hauts de France">Stefan Schmitz</a> is a photographer living in Northern France and specializing in sensual and nude portraits.
I stumbled upon his work during one of my searches for photographers using Free Software on <a href="https://www.flickr.com" title="Flickr">Flickr</a>, and as someone who loves shooting portraits his work was an instant draw for me.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/FS_SAS_6724.jpg" width='1020' height='684' alt='Franzi Skamet by Stefan  Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/25163948263/">Franzi Skamet</a> by Stefan Schmitz
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/KG_SAS_0277.jpg" width='1020' alt='Khiara Gray by Stefan  Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/29264777344/">Khiara Gray</a> by Stefan Schmitz
</figcaption>
</figure>


<p>He’s a member of the forums here (@beachbum) and was gracious enough recently to spare some time chatting with me.  Here is our conversation (edited for clarity)…</p>
<h3 id="are-you-shooting-professionally-">Are you shooting professionally?<a href="#are-you-shooting-professionally-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Nope, I’m not a professional photographer, and I think I’m quite happy about that. I have been photographing my surroundings for &plusmn;40 years now, and I have a basic idea about camera-handling and light. Being a pro is about paying invoices by shooting photos, and I fear that the pressure at the end of some months or quarters can easily take the fun out of photography. I’m an engineer, and photography is my second love behind wife and kids.</p>
<p>Every now and then some of my pictures are requested and published by some sort of magazine, press or web-service, and I appreciate the attention and exposure, but there is no (or very little) money in the kind of photography I specialize in, so … everything’s OK the way it is.</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/KG_SAS_0338.jpg" width='760' height='1134' alt='Khiara Gray by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/28484512994/">Khiara Gray</a> by Stefan Schmitz
</figcaption>
</figure>


<h3 id="what-would-you-say-are-your-biggest-influences-">What would you say are your biggest influences?<a href="#what-would-you-say-are-your-biggest-influences-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Starting with photographers: <a href="https://en.wikipedia.org/wiki/Andreas_Feininger" title="Andreas Feininger on Wikipedia">Andreas Feininger</a>, <a href="https://en.wikipedia.org/wiki/Peter_Lindbergh" title="Peter Lindbergh on Wikipedia">Peter Lindbergh</a> and <a href="https://en.wikipedia.org/wiki/Alfred_Stieglitz" title="Alfred Stieglitz on Wikipedia">Alfred Stieglitz</a>. Check out the portrait of <a href="https://de.wikipedia.org/wiki/Georgia_O%E2%80%99Keeffe" title="Georgia O&#39;Keeffe on Wikipedia">Georgia O’Keeffe</a> by Alfred Stieglitz: it’s 100 years old and it’s all there. Pose, light, intensity, personality - nobody has invented anything [like it] afterwards. We all just try to get close. I feel the same when I look at images taken by Peter Lindbergh, but my eternal #1 is Andreas Feininger. </p>
<figure>
<img src="https://pixls-discuss.s3.amazonaws.com/original/2X/6/60735c944175a51790319262278edc6b8acf2224.jpg" width="546" height="671">
<figcaption>
Georgia O’Keeffe by Alfred Stieglitz
</figcaption>
</figure>


<p>I got the photo-virus from my father and I learned nearly everything from daddy’s well-worn copy of <em>The Complete Photographer</em> <sup><a href="https://www.amazon.com/Complete-Photographer-Andreas-Feininger/dp/0131622145/ref=as_li_ss_tl?s=books&keywords=the+complete+photographer&ie=UTF8&qid=1525725015&sr=1-4&ref_=nav_ya_signin&_encoding=UTF8&linkCode=ll1&tag=httpblogpatda-20&linkId=7b777d7283db5795f369187731d437c5" title="Amazon affiliate link">[amzn]</a></sup> (Feininger) from 1965. Every single photo in that book is a masterpiece, even the strictly “instructional” ones. You measure every photo-book in the world against this one and they all finish second. Get your copy!</p>
<h3 id="how-would-you-describe-your-own-style-overall-">How would you describe your own style overall?<a href="#how-would-you-describe-your-own-style-overall-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I shoot portraits of women and most of the time they don’t wear clothes. The <em>portrait</em>-part is very important for me: the model must connect with the viewer and ideally the communication goes beyond skin-deep. I want to see (and show) more than just the surface, and when that happens, I just press the shutter-button and try to get out of the way of the model’s performance.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/JP_SAS_7470.jpg" alt='Jennifer Polska by Stefan Schmitz' width='1020' height='683'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/38138319342/">Jennifer Polska</a> by Stefan Schmitz
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/FS_SAS_6693.jpg" alt='Franzi Skamet by Stefan Schmitz' width='1020' height='683'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/25163948263/">Franzi Skamet</a> by Stefan Schmitz
</figcaption>
</figure>



<h3 id="what-motivates-you-when-deciding-what-how-who-to-shoot-">What motivates you when deciding what/how/who to shoot?<a href="#what-motivates-you-when-deciding-what-how-who-to-shoot-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I like women, so I take photos of women. If I were interested in beetles, I’d buy a macro lens and shoot beetles. All kidding aside, I think it’s a natural thing to do. I am married to a beautiful woman, an ex-model, and when she got fed-up with my eternal “can we do one more shoot” requests, we discussed things and she allowed me to go ahead and shoot models. Her support is very important to me, but her taste is very different from mine.
I really never asked myself “why” I shoot sensual portraits and nudes. It just feels like “I want to do that” and I feel comfy with it. Does there have to be a reason?</p>
<p>The location is very important for me. Nothing is more boring than blinding a person with a flashlight in front of a gray wallpaper. A room, a window-sill, a landmark - there’s a lot of inspiration out there, and I often think “this is where I want to shoot”. Sometimes my wife tells me of some place she has been to or seen, and I check that out.</p>
<h3 id="if-you-had-to-pick-your-own-favorite-3-images-of-your-work-which-ones-would-you-choose-and-why-">If you had to pick your own favorite 3 images of your work, which ones would you choose and why?<a href="#if-you-had-to-pick-your-own-favorite-3-images-of-your-work-which-ones-would-you-choose-and-why-" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/JP_SAS_7274.jpg" width='1020' height='683' alt='Jennifer Polska by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/38138319342/">Jennifer Polska</a> by Stefan Schmitz
</figcaption>
</figure>

<p>Jennifer is a very professional and inspiring model. We’ve worked together quite a number of times and while you may think that this shot was inspired by The Who’s “Pinball Wizard”, I’d answer “right band, wrong song”.  It’s The Who, alright, but the song’s “A quick one while he’s away”.
I chose this photo because it’s all about Jennifer’s pose and facial expression. It’s sensual, even sexy, but looking at Jennifer’s face you forget about the naked skin and all. There’s beauty, there’s depth … that’s what I’m after.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Alice_SAS_6541.jpg" width='1020' height='1523' alt='Alice by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/35542014503">Alice</a> by Stefan Schmitz
</figcaption>
</figure>

<p>This shot of Alice is an example of the importance of natural light. There are photographers out there who can arrange light in a similar way, but I doubt that Alice would express this natural serenity in a studio setup with cables and stands and electric transformers humming.
She’s at ease, the light is perfect - I just try to be invisible because I don’t want to ruin the moment.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/KG_SAS_0318.jpg" alt='Khiara Gray by Stefan  Schmitz' width='1020' height='684'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/29264777344/">Khiara Gray</a> by Stefan Schmitz
</figcaption>
</figure>

<p>Try to escape Khiara’s eyes. Go, do it. It’s all there, the pose, the room, the ribbon-chair and the little icon, but those eyes make the picture. I did NOT whiten the eyeballs nor did I dodge the iris, and of course it’s all natural/available light.  </p>
<h3 id="if-you-had-to-pick-3-favorite-images-from-someone-else-which-ones-would-you-choose-and-why-">If you had to pick 3 favorite images from someone else, which ones would you choose and why?<a href="#if-you-had-to-pick-3-favorite-images-from-someone-else-which-ones-would-you-choose-and-why-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I already named Stieglitz’ Georgia O’Keeffe as an inspiration further up - next to that there’s Helmut Newton’s <em>Big Nude III, Henrietta</em> and Kim Basinger’s striptease in 9 <sup>1</sup>&frasl;<sub>2</sub> weeks (white silk nighty and  all).  Each one a masterpiece, each one very influential for me. Imagine the truth and depth of Georgia with the force and pride of Henrietta and the erotic playfulness of Kim Basinger. That photo would rule the world.</p>
<figure>
<img src='https://pixls-discuss.s3.amazonaws.com/original/2X/3/3aeac6cf999ad6efb06476da9f34457af86fb134.jpg' width='318' height='470'>
<figcaption>
<em>Big Nude III, Henrietta</em>, <a href="https://en.wikipedia.org/wiki/Helmut_Newton">Helmut Newton</a>
</figcaption>
</figure>



<h3 id="is-there-something-outside-of-your-comfort-zone-you-wish-you-could-try-shoot-more-of-">Is there something outside of your comfort zone you wish you could try/shoot more of?<a href="#is-there-something-outside-of-your-comfort-zone-you-wish-you-could-try-shoot-more-of-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I would like to work more with women above the age of 35, but it’s hard to find them. In general they stop modeling nude when the kids arrive.</p>
<p>Shooting more often outdoors would be cool, too, but that’s not easy here in northern France - there is no guarantee of good weather, and it’s frustrating when you organize a shoot two weeks in advance just to call it off at the very last minute due to bad weather.</p>
<p>Last but not least there’s a special competition among photographers; it’s totally unofficial and called “the white shirt contest”. Shoot a woman in a white shirt and make everybody “feel” the texture of that shirt. I give it a try on every shoot and very few pictures come out the way I wish. Go for it - it’s way harder than I thought!</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Alice_SAS_6352.jpg" width='1020' height='684' alt='Alice by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/35953665920/">Alice</a> by Stefan Schmitz
</figcaption>
</figure>



<h3 id="how-do-you-find-your-models-usually-">How do you find your models usually?<a href="#how-do-you-find-your-models-usually-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There are websites where models and photographers can present their work and get in contact. The biggest one worldwide is modelmayhem.com, and I highly recommend becoming a member. Another good place is tumblr.com, but you have to go through a lot of dirt before you find some true gems. I have made contact via both sites and I recommend them.</p>
<p>You will need some pictures in your portfolio in order to show that you are - in fact - a photographer with a basic idea of portrait work. If you shoot portraits (I mean real portraits, not some snapshots of granny and the kids under the Christmas tree), you probably have enough photos on your disk to make the point. But if you don’t and you want to start shooting (nude) portraits, spend some money on a workshop. I did that twice and it really helped me in several ways: communication with the model, how to start a session, do’s and don’ts - and at the end of the day you will drive home with a handful of pictures for your portfolio.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Hannah_SAS_1216.jpg" width='1020' height='682' alt='Hannah by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/30576600621/">Hannah</a> by Stefan Schmitz
</figcaption>
</figure>


<h3 id="speaking-of-gear-what-are-you-shooting-with-currently-or-what-is-your-favorite-setup-">Speaking of gear, what are you shooting with currently (or what is your favorite setup)?<a href="#speaking-of-gear-what-are-you-shooting-with-currently-or-what-is-your-favorite-setup-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Gear is overrated. I have been with Nikon since 1979, and today I own and use two bodies: a 1975 Nikon F2 Photomic (bought used in ’82), loaded with Kodak Tri-X, and a Nikon D610 DSLR. 90% of my pictures are shot with a 50mm standard lens. Next on the list is the 35mm - you will need that in small rooms when the 50mm is already a bit too long and you want to keep some distance. I happen to own an 85mm, but the locations I book and shoot rarely offer enough space to make use of that lens.</p>
<p>There are these cheap, circular 1m silver reflectors on Amazon. They cost about 15 €/$ and you get a crappy stand for the same price. That stuff is pure gold - I use the reflector a lot and I highly recommend learning how to work with it. It’s my little secret weapon when I shoot against the light (see Alice above).</p>
<p>A camera with a reasonably fast standard lens, a second battery and a silver reflector is all I need. The rest is luxury for me, but I am pretty much a one-trick-pony. Other photographers will benefit more from a bigger kit.</p>
<h3 id="most-of-your-images-appear-to-be-making-great-use-of-natural-light-do-you-use-other-lighting-gear-speedlights-monoblocks-modifiers-etc-">Most of your images appear to be making great use of natural light. Do you use other lighting gear (speedlights, monoblocks, modifiers, etc)?<a href="#most-of-your-images-appear-to-be-making-great-use-of-natural-light-do-you-use-other-lighting-gear-speedlights-monoblocks-modifiers-etc-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Right - available light is where it’s at. I very rarely shoot with a flash kit today because it distracts me from the work with the model. I’m a loner on the set, no assistants or friends who come and help, so everything must be totally simple and foolproof.</p>
<p>That said, I own an alarming number of speedlights, umbrellas, triggers and softboxes, but I don’t need that gear very often. I try to visit the locations before I shoot. I check the directions and plan for a realistic timeframe, so today I will neither find myself in a totally dark dungeon nor in a sun-filled room with contrasts à gogo. Windows to the west - shoot in the morning; windows facing south-east - shoot in the (late) afternoon.</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Karolina_SAS_1976.jpg" width='1020' height='683' alt='Karolina Lewschenko by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/31516298240/">Karolina Lewschenko</a> by Stefan Schmitz
</figcaption>
</figure>

<p>Here’s a shot of Karolina Lewschenko. We took this photo in a hotel room at the end of October and the available (window) light got too weak, so I used an Aurora Firefly 65 cm softbox with a Metz speedlight and set up some classic Rembrandt light. I packed that gear because I knew that our timeframe wasn’t guaranteed to work out perfectly. Better safe than sorry.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/FS_SAS_6415.jpg" width='1020' height='1522' alt='Franzi Skamet by Stefan  Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/25163948263/">Franzi Skamet</a> by Stefan Schmitz
</figcaption>
</figure>



<h3 id="do-you-pre-visualize-and-plan-your-shoots-ahead-of-time-usually-or-is-there-a-more-organic-interaction-with-the-model-and-the-space-you-re-shooting-in-">Do you pre-visualize and plan your shoots ahead of time usually, or is there a more organic interaction with the model and the space you’re shooting in?<a href="#do-you-pre-visualize-and-plan-your-shoots-ahead-of-time-usually-or-is-there-a-more-organic-interaction-with-the-model-and-the-space-you-re-shooting-in-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Yes, I do. When I visit a place, a possible location, I have some ideas of where to shoot, what furniture to push around and what pose to try. I can pretty much see the final picture (or my idea of it) before I book the model. Having said that, you know that no battle plan has ever survived the first shot fired…</p>
<p>When the model arrives, we take some time to walk around the locations and discuss possible sets. We will then start to shoot fully clothed in order to get used to one another and see how the light will be on the final shots. It’s very important for me to get feedback from the model. She might say that a pose is difficult for her or hurts after a few seconds, that she’s not comfy with something, or that she would like to try a totally different thing here. I always pay a lot of attention to those ideas and - from experience - the shots based on the model’s ideas are in general among the best of the day.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Karolina_SAS_1730.jpg" width='1020' alt='Karolina Lewschenko by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/30900009814/">Karolina Lewschenko</a> by Stefan Schmitz
</figcaption>
</figure>

<p>I mean we’re not here because I shoot bugs or furniture, you don’t give me the opportunity to express myself here because you are a fan of crickets; all the attention is linked to the beautiful women on my photos and how they connect with the beholder. I am just the one who captures the moments, it’s the models who fill those moments with intensity and beauty. It would be very stupid of me not to cooperate with a model who knows how to present herself and who comes up with her own ideas.</p>
<p>Always listen to the model, always communicate, never go quiet.</p>
<p>The discussion with the model also includes what degree of nudity we consider. So the second round of photos starts with the “open shirt” or topless shots before the model undresses completely. If we take photos in lingerie, we do that last (after the nudes) because lingerie often leaves traces on the skin and we don’t want that to show.</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/FS_SAS_6091.jpg" width='686' height='1024' alt='Franzi Skamet by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/27984856976/">Franzi Skamet</a> by Stefan Schmitz
</figcaption>
</figure>

<p>It is important to know what to do and in what order. You don’t want to have a nude model standing in front of you, asking “what’s next?” and you answer “I dunno - maybe (!) try this or that again”. If you lose your directions for a moment, just say so or say “please get your bathrobe and let’s have a look at the last pictures together”. If you are “not sure”, the model might be “not comfy”, and that’s something we want to avoid.</p>
<h3 id="would-you-describe-your-workflow-a-bit-which-projects-do-you-use-regularly-">Would you describe your workflow a bit? Which projects do you use regularly?<a href="#would-you-describe-your-workflow-a-bit-which-projects-do-you-use-regularly-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A typical session is 90 to 120 minutes and I will end up with about 500 exposures on the SD card and maybe a roll of exposed Kodak Tri-X. The film goes to a lab and I will get the negatives and scans back within 15 to 30 days.</p>
<p>There are two SD cards, one with RAW files that I import with <a href="https://wiki.gnome.org/Apps/Gthumb">gThumb</a> to /photos/year/month/day. The other card holds fine-quality JPGs, and those go to /pictures/year/name_of_model. My camera is already set to monochrome, so I get every picture I shoot in b/w on the camera screen, and the JPG files are also monochrome.</p>
<p>Next step is a pre-selection in <a href="http://geeqie.org/">Geeqie</a>. That’s one great picture viewer and I delete all the missed shots (bad framing, out of focus etc.) and note/mark all the promising/good shots here. This is normally the end of day one.</p>
<p>Switching from <a href="https://rawstudio.org/">RAWstudio</a> to <a href="https://darktable.org/">darktable</a> has been a giant step for me. dt is just a great program and I still learn about new functions and modules every day. The file comes in, is converted to monochrome, and afterwards color saturation and lights (red and yellow) are manipulated. This way I can treat the skin (brighter or darker) without influencing the general brightness of the picture. Highlights and lowlights may be pushed a bit to the left, I add the signature and a frame 0.5% wide, and lens correction is set automatically. That’s the whole deal. On very rare occasions I add some vignette or drop the brightness gradually from top to bottom, but again: it doesn’t happen all that often. I never cut, crop or re-frame a shot. WYSIWYG. Cropping something out, turning the picture in order to get perfectly vertical lines or the like - it all feels like cheating. I have no client to please, no deadline to meet, I can take a second longer and frame my photo when I look through the viewfinder.</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/FS_SAS_6408.jpg" width='686' height='1024' alt='Franzi Skamet by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/25464237123/">Franzi Skamet</a> by Stefan Schmitz
</figcaption>
</figure>

<p>The photos will then be treated in the <a href="https://www.gimp.org">GIMP</a>. Some dodge and burn (especially when there are problematic, very high or low contrasts), maybe stamp an electric plug away, and in the end I resize them down to 2560 on the long side (big enough for A3 prints) and (sometimes) apply the sharpening tool with a value of 20 or 25. Done. I can’t save a crappy shot in post-prod and I won’t try. Out of the 500 or so frames, 10 to 15 will be processed like that, and it feels like nothing has changed over the last 40 years. The golden rule was “one good shot per roll of film” and I happen to be there, too. Spot-on!</p>
<p>I load those 15 pictures up on my <a href="https://www.flickr.com/photos/stefanschmitz/">Flickr account</a> and about once or twice a week I place a shot in the many Flickr groups. Also once a week (or every ten days) I post a photo on my <a href="http://whatstefansees.tumblr.com/">Tumblr account</a>. Today I have about 5k followers and my photos are seen between 500,000 and one million times a month, depending on the time of year and weather. There’s less traffic on warm summer days and more during cold and rainy winter nights.</p>
<p>It takes me some time before I add a shot to <a href="https://whatstefansees.com/" title="what stefan sees - sensual &amp; nude photography, Hauts de France">my own website</a>. In comparison I show few photos there, every one for a reason, and I point people to that address, so I hope I only show the best.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Aya_SAS_9200.jpg" alt='Aya Kashi by Stefan Schmitz' width='1020' height='1523'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/28006743752/">Aya Kashi</a> by Stefan Schmitz
</figcaption>
</figure>


<h3 id="is-your-choice-to-use-free-software-for-pragmatic-reasons-or-more-idealistic-">Is your choice to use Free Software for pragmatic reasons, or more idealistic?<a href="#is-your-choice-to-use-free-software-for-pragmatic-reasons-or-more-idealistic-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I owned an Apple II in 1983 and a Digital MicroVAX in 1990 or so. My way to FOSS started out pragmatic and it became a conviction later on. In the late 90s and early 2000s I had my own small business and worked with MS Office on a Win NT machine. Photos were processed with a Nikon film-scanner through the proprietary software into an illegal copy of Adobe PS4. It was OK, stable and I didn’t fear anything, but I wasn’t really happy either. One day I swung over to StarOffice/OpenOffice.org for financial reasons and I also got rid of that unlicensed PS and installed the GIMP (I don’t know what version, but I upgraded some time later to 1.2, that’s for sure). I had internet access and an email address since 1994, but in the late 90s big programs still came on CDs attached to computer magazines. Downloading the GIMP was out of the question.</p>
<p>Gaming was never my thing and when I installed Win XP, all hell broke loose - keeping a computer safe, virus-free and running wasn’t easy before the first service pack, and MS reacted way too slowly in my opinion - so I tried Debian (10 CD kit) on my notebook, got it running, found the GIMP and OOo - and that was it. It took a bit of trial and error and I had to buy a number of WLAN sticks because very few were supported and so on, but in the end I got the machines running.</p>
<p>Later on I got hold of an Ubuntu 7.10 CD, tried that and never looked back. The few changes on my system were from Gnome to the XFCE desktop and from Thunderbird to a browser-based mail client. Xubuntu is a no-brainer, it runs stable and fast. Every December I contribute €100 to FOSS. In general that’s €50 and €40 to two projects and a tenner to <a href="https://www.wikipedia.com">Wikipedia</a>. I’d give an extra tenner to any project that helps to convert old StarOffice files (.sdw and so on) to today’s standards (odt…), but nobody seems interested.</p>
<h3 id="what-is-one-piece-of-advice-you-would-offer-to-another-photographer-">What is one piece of advice you would offer to another photographer?<a href="#what-is-one-piece-of-advice-you-would-offer-to-another-photographer-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Don’t take any advice from me, I’m still learning myself. Or wait: be kind and a gentleman with the models. They all - each and every one of them - have had bad experiences with photographers who forgot that the models are nude for the camera, not for the man behind it. They all have been in a room with a photographer who breathes a bit too hard and doesn’t get his gear working … don’t be that arsewipe!</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Irina_SAS_2911.jpg" alt='Irina by Stefan Schmitz' width='760' height='1134'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/17790276476/">Irina</a> by Stefan Schmitz
</figcaption>
</figure>

<p>Arrange for a place where the model can undress in privacy - she didn’t come for a strip-show and you shouldn’t try to make it one. Have some bottles of water at hand and talk about your plans, poses and sets with the model. Few people can read minds, so communication works best when you say what you have in mind and the model says how she thinks this can be realized. The more you talk, the better you communicate, the better the pictures. No good photo has ever been shot during a quiet session, believe me.</p>
<p>In general the model will check your portfolio/website and expect to do more or less the same kind of work with you. If you want to do something different, say so when booking the model. If your website shows a lot of nude portraits, models will expect to do that kind of photos. They may be a bit upset if you ask them out of nowhere to wear a latex suit because it’s fetish-Friday in your world. The more open and honest you are from the beginning, the better the shooting will go down.</p>
<figure>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Irina_SAS_3059_bw.jpg" width='722' height='1080' alt='Irina by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/17196207163/">Irina</a> by Stefan Schmitz
</figcaption>
</figure>

<p>Don’t overdo the gear-thingy. 90% of my photos are taken with the 50mm standard lens. Period. Sometimes I have to switch to 35mm because the room is a bit too small and the distance too close for the one four-fifty, so everything I bring to an indoor shooting is the camera, a 50, a 35, an el-cheap-o 100cm reflector from amazon (+/- 15 €/$) and an even cheaper stand for the reflector. Gear is not important, communication is.</p>
<p>Want to spend 300 €/$ on new gear? Spend it on a workshop. Learn how to communicate, get inspiration and fill your portfolio with a first set of pictures, so the next model you email can see that you already have some experience in the field of (nude) portraits. That’s more important than a new flashlight in your bag.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/nsfw-what-stefan-sees/Isabelle_SAS_8498.jpg" width='723' height='1080' alt='Arwen Kimara by Stefan Schmitz'>
<figcaption>
<a href="https://www.flickr.com/photos/stefanschmitz/39424595835/">Arwen Kimara</a> by Stefan Schmitz
</figcaption>
</figure>


<h2 id="thank-you-stefan-">Thank You Stefan!<a href="#thank-you-stefan-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I want to thank Stefan again for taking the time and being patient enough to chat with me!</p>
<figure>
<img src="https://pixls-discuss.s3.amazonaws.com/original/2X/f/f5ae99ed3f346cbb533718177e06433ac08a1960.jpg" width="690" height="462">
</figure>

<p>Stefan is currently living in Northern France.  Before that he lived and worked in Miami, FL, and Northern Germany where he is from, went to school, and met his wife.  His main website is at <a href="https://whatstefansees.com/">https://whatstefansees.com/</a>, and he can be found on <a href="https://www.flickr.com/photos/stefanschmitz/">Flickr</a>, <a href="https://www.facebook.com/Stefan.Schmitz.Photo">Facebook</a>, <a href="https://twitter.com/whatstefansees">Twitter</a>, <a href="https://www.instagram.com/stafan.a.schmitz/">Instagram</a>, and <a href="http://whatstefansees.tumblr.com/">Tumblr</a>.</p>
<p><small>Unless otherwise noted, all of the images are copyright Stefan Schmitz (all rights reserved) and are used with permission.</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Profiling a camera with darktable-chart]]></title>
            <link>https://pixls.us/articles/profiling-a-camera-with-darktable-chart/</link>
            <guid isPermaLink="true">https://pixls.us/articles/profiling-a-camera-with-darktable-chart/</guid>
            <pubDate>Thu, 26 Apr 2018 00:00:07 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/darktable_colorpicker.png" /><br/>
                <h1>Profiling a camera with darktable-chart</h1> 
                <h2>Figure out the development process of your camera</h2>  
                <p>[Article updated on: 2019-06-18]</p>
<h2 id="what-is-a-camera-profile-">What is a camera profile?<a href="#what-is-a-camera-profile-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>A camera profile is often a combination of a color lookup table (LUT) and a tone
curve which is applied to a RAW file to get a developed image. It translates
the colors that a camera captures into the colors they should look like. If you
shoot in RAW and JPEG at the same time, the JPEG file is already a developed
picture. Your camera can do color corrections to the data it gets from the
sensor when developing a picture. In other words, if a certain camera tends to
turn blue into turquoise, the manufacturer’s internal profile will correct for
the color shift and convert those turquoise values back to their proper hue.</p>
<p>The camera manufacturer creates a tone curve for the camera, understands
what color drifts the camera tends to capture, and can correct for them. RAW
files also normally look very dull, and the profile allows them to look more
pleasing with just one click.  We can mimic what the camera does using a tone
curve and a color LUT. We want to do this because the base curves provided by
darktable are generalized for a manufacturer’s sensor behavior, while individually
profiling your camera can provide better color results.</p>
<h2 id="why-do-we-want-a-color-profile-">Why do we want a color profile?<a href="#why-do-we-want-a-color-profile-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The camera captures light as linear RGB values. RAW development software needs
to transform those into <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space">CIE XYZ tristimulus
values</a> for mathematical
calculations. The color transformation is often done under the assumption that
the conversion from camera RGB to CIE XYZ is a linear 3x3 mapping. Unfortunately
it is not, because image formation is spectral and the camera sensor responds
to light with its own spectral sensitivity curves. In darktable the conversion
is done the following way: the camera RGB values are transformed using the
color matrix (coming either from the Adobe DNG Converter or dcraw) to arrive at
approximately profiled XYZ values. darktable provides a color lookup table in
<a href="https://en.wikipedia.org/wiki/Lab_color_space"><em>Lab</em> color space</a> to fix
inaccuracies or implement styles which are semi-camera-independent. A very cool
feature is that a user can edit this color LUT. The color LUT can be created by
darktable-chart, as this article will show, so that you don’t have to create it
yourself.</p>
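<p>The conversion chain described here (linear camera RGB, through a 3x3 matrix, to approximate XYZ, later corrected by a Lab LUT) can be sketched in a few lines of Python. This is a conceptual illustration only: the matrix values below are made up for the example and are not those of any real camera.</p>

```python
# Minimal sketch of the conversion described above: linear camera RGB is
# mapped to CIE XYZ with a 3x3 color matrix. The matrix values here are
# purely illustrative; darktable takes the real matrix for your camera
# from the Adobe DNG Converter or dcraw.
CAMERA_TO_XYZ = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]

def camera_rgb_to_xyz(rgb, matrix=CAMERA_TO_XYZ):
    """Apply a 3x3 color matrix to a linear camera RGB triple."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

# The linear fit is only approximate; the residual errors are what the
# editable Lab-space LUT corrects afterwards.
xyz = camera_rgb_to_xyz((0.5, 0.5, 0.5))
```

<p>Because the true mapping is not linear, any single matrix leaves residual color errors, and that is exactly the gap the darktable-chart LUT fills.</p>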
<p>What we want is the same knowledge about colors in our raw development
software as the manufacturer put into the camera. There are two ways to
achieve this. Either we fit to a JPEG generated by the camera, which can also
apply creative styles (such as film emulations or filters), or we profile against
<em>real color</em> reproduction. For <em>real color</em>, a color
target ships with a file providing the color values for each patch it has.</p>
<p>In summary, we can create a profile that emulates the manufacturer’s color
processing inside the body, or we can create a profile that renders <em>real color</em>
as accurately as possible.</p>
<p>The process for both is nearly identical, and we will note when it diverges in
the instructions.</p>
<h2 id="creating-pictures-for-color-profiling">Creating pictures for color profiling<a href="#creating-pictures-for-color-profiling" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To create the required pictures for camera profiling we need a <a href="https://en.wikipedia.org/wiki/ColorChecker">color chart
(aka Color Checker)</a> or an <a href="https://en.wikipedia.org/wiki/Color_chart#IT8_charts">IT8
chart</a> as our target. The
difference between a color chart and IT8 chart is the number of patches and
often the price. As the IT8 chart has more patches the result will be much
better.  Optimal would be if the color target comes with a grey card for
creating a custom White Balance. I can recommend the <a href="https://www.xrite.com/categories/calibration-profiling/colorchecker-passport-photo">X-Rite ColorChecker
Passport Photo</a>.
It is small, lightweight, all plastic, a good quality tool and also has a gray
card. An alternative is the <a href="http://www.datacolor.com/photography-design/product-overview/spyder-checkr-family/">Spyder
Checkr</a>.
If you want a better profiling result, you can buy a good <a
href="http://targets.coloraid.de/" target="_blank">IT8 chart from Coloraid</a>
(you want C1) or invest for example in the <a href="https://www.xrite.com/categories/calibration-profiling/colorchecker-digital-sg">ColorChecker Digital
SG</a>.
(Please share your experience if you buy a Coloraid C1!). I recommend getting
a gray card as this makes profiling easier.</p>
<p>Note: ArgyllCMS offers <em>CIE</em> and <em>CHT</em> files for different color charts. If you
already have one or are going to buy one, check if ArgyllCMS offers support for
it first! You can always add support for your color chart to ArgyllCMS, but the
process is much more complex. This will be very important later!
You can find these files (generally) in:</p>
<pre><code>/usr/share/color/argyll/ref/
</code></pre><p>The path might differ depending on the distribution you’re using. Your package
management tool should provide a way to list all files of a package so it should
be easy to find.</p>
<pre><code>find /usr/share -name &quot;*.cht&quot;
</code></pre><p>is a possible alternative to track down where the files are located.</p>
<p>We are creating a color profile for direct sunlight conditions (D50) which can
be used as a general purpose profile. For this we need some special conditions.</p>
<p>The Color Checker needs to be photographed in direct sunlight at 5000K (K =
Kelvin), which helps to reduce any metamerism of colors on the target and
ensures a good match to the data file that tells the profiling software what
the colors on the target should look like. A major concern is glare,
but we can reduce it with some tricks.</p>
<p>One of the things we can do to reduce glare, is to build a simple shooting box.
For this we need a cardboard box and at least three black T-Shirts. The box
should be open on the top and on the front like in the following picture
(Figure 1).</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/01_cardboard_box.jpg" alt="A cardboard box" width="760" height="507"/>
<figcaption>
<b>Figure 1:</b> Cardboard box suitable for color profiling
</figcaption>
</figure>

<p>Normally you just need to cut one side open. However, it is better if you use
one big piece of cardboard and build the box yourself. This way you can make the
box so it widens at the front, see Figure 1. Then coat the inside of the box with
black T-Shirts like this:</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/02_profiling_box.jpg" alt="A cardboard box coated with black t-shirts" width="507" height="760"/>
<figcaption><b>Figure 2:</b> A simple box for color profiling
</figcaption></figure>

<p>To further reduce glare we just need the right location to shoot the picture.
We want to shoot the target when the sun provides a temperature of 5000K (D50).
We get that in the morning hours when the sun is at about 45° in the sky. This
varies with where on Earth you are located and with the season of the year.</p>
<p>I took my shots in central Europe in mid October at 09:45.</p>
<p>To measure 5000K I used a gray card for white balancing. When I shot the gray
card, my camera displayed the temperature of the resulting white balance.</p>
<p>Try to shoot on a day with minimal clouds so the sun isn’t changing intensity
while you shoot. The higher the temperature, the more water is in the
atmosphere, which means the quality of the images for profiling might be
reduced. Temperatures below 20°C are better than above.</p>
<p>In some countries it may not be possible to accurately produce these images
with sunlight. This could be due to air pollution (or lack thereof), temperature,
humidity, latitude, and atmospheric conditions. For example, in Australia, one
might be unable to use direct sunlight to create this profile, and would have
to use a set of color balanced bulbs with the same box setup to create this.</p>
<h3 id="shooting-outdoor">Shooting outdoor<a href="#shooting-outdoor" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you want to shoot outdoors, look for an empty tarred parking lot or a lonely
road. The parking lot should be pretty big, like from a mall, without any cars
or trees! You should be far away from walls, trees or anything which could
possibly reflect. Put the box on the ground or a small chair and shoot with the
sun above your right or left shoulder behind you. You can use a black fabric
(bed sheets) if the ground reflects.</p>
<h3 id="shooting-indoor-with-artificial-light">Shooting indoor with artificial light<a href="#shooting-indoor-with-artificial-light" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Avoid all windows and stained glass. Create the box as mentioned, and arrange
it in a V shape with your tripod. At the top left of the V is the camera, at
the bottom is the color target, and at the top right is the light source. The
light source should be bright and even across the room and your setup. Position
yourself underneath it to avoid all shadows.</p>
<h2 id="how-to-shoot-the-target-">How to shoot the target?<a href="#how-to-shoot-the-target-" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="outdoor-preparations">Outdoor preparations<a href="#outdoor-preparations" class="header-link"><i class="fa fa-link"></i></a></h3>
<ol>
<li>Start white balancing your camera outside your house, office, etc. with a
gray card every 30 to 60 minutes in the morning and write down the time
and temperature. This way you will find out when the sun provides the right
temperature (5000K) to take pictures of your target.</li>
</ol>
<h3 id="taking-the-pictures">Taking the pictures<a href="#taking-the-pictures" class="header-link"><i class="fa fa-link"></i></a></h3>
<h4 id="preparations-at-home">Preparations at home<a href="#preparations-at-home" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If you’re shooting outdoors, do the following preparations at home. You will not
have much time for taking the pictures of your target. You only have a window
of about 10 minutes. An assistant in the field can be useful.</p>
<ol>
<li><p>You should use a prime lens for taking the pictures. If possible a 50 mm or
85 mm lens (or anything in between; numbers are for full frame). The less
glass the light has to travel through, the better it is for profiling. These
two focal lengths are a good compromise between the number of glass elements,
field of view, and vignetting!  With a tele lens we would be
too far away, and with a wide-angle lens we would need to be too near, to have
just the black box in the picture.</p>
</li>
<li><p>Set your metering mode to matrix metering (evaluative metering or multi
metering - this is often a symbol with 4 boxes and a circle in the center)
and use an aperture of f/8.0 (+/- 1/3 EV).
[If you have a spot metering mode which isn’t fixed on the center, then you
can point it to the neutral gray patch of the color checker, that’s the one
we want to have exposed correctly.]</p>
</li>
<li><p>Make sure that Dynamic Range Optimization (DRO) and Auto HDR (High Dynamic
Range) or anything like that are turned off!</p>
</li>
<li><p>Set the camera to capture “RAW &amp; JPEG” and disable lens corrections
(vignetting corrections) for JPEG files if possible. This is important
for JPEG and real color fitting. You can leave corrections for color
failures turned on.</p>
</li>
<li><p>Set your camera’s color profile to AdobeRGB.</p>
</li>
<li><p>Set the ISO to the lowest possible value. Some cameras have an extended
ISO range, don’t use any of those values. For example my camera offers ISO
50, ISO 64 and ISO 80. Those are extended ISO values. The lowest ISO not in
the extended range for my camera is ISO 100. Check your camera manual!</p>
</li>
<li><p>Wear dark clothes; the best is a black hoodie with long sleeves :-)</p>
</li>
</ol>
<h4 id="in-the-field">In the field<a href="#in-the-field" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Be there in advance so you have time to prepare everything.</p>
<ol>
<li>Set up your shooting box and mount your camera on a tripod. The best is to
have the camera looking down on the color chart like in the following
picture:</li>
</ol>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/03_b_profiling_setup_outdoor.jpg" alt="A camera pointing into the profiling box" width="450" height="800"/>
<figcaption><b>Figure 3a:</b> Camera setup for creating pictures of the Color Checker
</figcaption></figure>

<ol start="2">
<li><p>Make sure the color chart is parallel to the plane of the camera sensor
so all patches of the chart are in focus. The color chart should be in the
middle of the image, using about 1/3 of the frame, so that vignetting is
not an issue.</p>
</li>
<li><p>Shoot the target, zoom to 100% and check for glare; reposition if
necessary! In Figure 3b you can see a patch with extreme glare. In Figure 3c
you can see a patch with a bit of glare. You should try to get no glare at
all. Make sure the sun is shining at an angle on the Color Checker. Change
the angle of the target in the box until you get no glare!</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/03_c_extreme_glare.jpg" alt="The target with extreme glare caused by a wrong angle of the sun shining on the target" width="600" height="400"/>
<figcaption><b>Figure 3b:</b> The target with extreme glare caused by a wrong angle of the sun shining on the target
</figcaption></figure>

<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/03_d_bit_of_glare.jpg" alt="The target with some glare caused by a wrong angle of the sun shining on the target" width="600" height="400"/>
<figcaption><b>Figure 3c:</b> The target with some glare caused by a wrong angle of the sun shining on the target
</figcaption></figure>



<ol start="3">
<li><p>If your camera has a custom white balance feature and you have a gray card,
create custom white balance profiles until you get 5000K (D50) and use the
result (see Figure 3). Put the gray card in your black box in the sunlight or
artificial light at the same position as the Color Checker.  If you don’t
have a gray card, you have to use Auto White Balance (AWB) and find another
way to measure when you get 5000K from the sun.</p>
<p>Once you get 5000K from the sun, you have about 10 minutes to take the
pictures of your target!</p>
</li>
</ol>
<p>Now you want to begin taking images. Normally we want to have a camera profile
just for the lowest ISO value.</p>
<p>Note: I created profiles for ISO 100 to ISO 640, because my camera has a gain
switch at ISO 640. I learned about that by inspecting the charts which have
been measured by <a href="https://dpreview.com">DPReview</a>.</p>
<p>You need to take 5 pictures of your target. This is so that if an image is over-
or underexposed, you have another image a step above or below that is
exposed correctly: one photo each at -0.3 EV, 0 EV, 0.3 EV, 0.7 EV and 1.0 EV.
On some cameras (e.g. Fuji) ISO 100 is an extended value, so use ISO 200. Normally
extended ISO values are captured at the lowest physical ISO, overexposed, and then
the exposure is reduced with image processing. Use the lowest-ISO profile for them.</p>
<p>Hint: Some cameras have a “Continuous Bracketing” feature. You can set this to
0.3 EV and 5 images. The camera will then automatically capture 5 images in 0.3
EV steps (-0.3 EV, 0.0 EV, 0.3 EV, 0.7 EV, 1.0 EV) for you.</p>
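<p>The five bracketing values above are simply third-of-a-stop increments as cameras display them, rounded to one decimal place, which is why the series runs 0.3, 0.7, 1.0 rather than 0.3, 0.6, 0.9. A quick sketch of the arithmetic:</p>

```python
# Exposure bracketing in third-stop increments, from -1/3 EV to +3/3 EV.
# Cameras display these rounded to one decimal, giving the series used above.
ev_steps = [round(i / 3, 1) for i in range(-1, 4)]

# Each EV step corresponds to a factor of 2**ev in exposure,
# so +1.0 EV doubles the light reaching the sensor.
exposure_factors = [2 ** ev for ev in ev_steps]
```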
<p>Once you have done all the required shots, it is time to download the RAW and
JPEG files to your computer.</p>
<h2 id="verifying-correct-images-in-darktable">Verifying correct images in darktable<a href="#verifying-correct-images-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For verifying the images we need to know the L-value from the <a href="https://en.wikipedia.org/wiki/Lab_color_space"><em>Lab</em> color
space</a> of the neutral gray field
in the gray ramp of our color target. For the ColorChecker Passport we can look
it up in the color information (CIE) file
(<a href="ColorCheckerPassport.cie">ColorCheckerPassport.cie</a>) shipping with
<a href="http://www.argyllcms.com/">ArgyllCMS</a>, which should be located at:</p>
<pre><code>/usr/share/color/argyll/ref/ColorCheckerPassport.cie
</code></pre><p>The ColorChecker Passport actually has two gray ramps. The neutral gray field
is the field on the bottom right of the color target ramp and is called D1 (see
Figure 4).  For the ColorChecker SG it is the patch E5 and for Wolf Faust’s IT8
target the one on the left of the gray ramp (GS0). It should be described in the
specification of your target.</p>
<p>If we check the CIE file, we will find out that
the neutral gray field D1 has an L-value of: <em>L=96.260066</em>. Let’s round it to
<em>L=96</em>. For other color targets you can find the L-value in the description or
specification of your target, often it is <em>L=92</em> (e.g. Wolf Faust’s IT8 GS0).
Better check the CIE or CGATS file!</p>
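<p>Instead of reading the CIE file by eye, you can pull a patch’s values out programmatically. The sketch below parses a simplified CGATS-style data section; the sample text is illustrative (only the D1 L-value is taken from this article, the a/b columns are placeholders), and the exact column layout of your target’s file may differ, so check its BEGIN_DATA_FORMAT header first.</p>

```python
# Extract a patch's Lab values from a (simplified) CGATS-style data section,
# as found in ArgyllCMS .cie reference files. The sample below is illustrative:
# only the D1 L-value (96.260066) is real; the other numbers are placeholders.
SAMPLE = """\
BEGIN_DATA
A1 37.54 14.37 14.92
D1 96.260066 -0.489 2.26
END_DATA
"""

def patch_lab(cie_text, patch_id):
    """Return (L, a, b) for the named patch from the DATA section."""
    in_data = False
    for line in cie_text.splitlines():
        stripped = line.strip()
        if stripped == "BEGIN_DATA":
            in_data = True
            continue
        if stripped == "END_DATA":
            break
        if in_data:
            fields = stripped.split()
            if fields and fields[0] == patch_id:
                return tuple(float(v) for v in fields[1:4])
    raise KeyError(patch_id)
```

<p>Running <code>patch_lab(SAMPLE, "D1")</code> on the real file’s text would confirm the L=96 target value used in the verification step.</p>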
<p>You then open the RAW file in darktable and disable the <a href="https://www.darktable.org/usermanual/en/modules.html#base_curve">base
curve</a> and all
other modules which might be applied automatically! You can leave the
Orientation module turned on. Select the standard input matrix in the
<a href="https://www.darktable.org/usermanual/en/color_group.html#input_color_profile">input color profile</a>
module and disable gamut clipping. Make sure “camera white balance” in the
<a href="https://www.darktable.org/usermanual/en/modules.html#whitebalance">white balance</a>
module is selected. If lens corrections are automatically applied to your JPEG
files, you need to enable
<a href="https://www.darktable.org/usermanual/en/correction_group.html#lens_correction">lens corrections</a>
for your RAW files too! Only apply what has been applied to the JPEG file too.</p>
<p>For my configuration I was left with the following modules enabled:</p>
<pre><code>Output Color Profile
Input Color Profile
Lens Correction (Optional)
Crop &amp; Rotate (Optional)
Demosaic
White Balance
Raw Black/White Point
</code></pre><p><strong>Apply the changes to all RAW files you have created!</strong></p>
<p>You could consider making a “profiling” style and applying it en-masse.</p>
<p>You can also crop the image but you need to apply exactly the same crop to the
RAW and JPEG file! (This is why you use a tripod!)</p>
<p>Now we need to use the <a href="https://www.darktable.org/usermanual/en/global_color_picker.html">global color picker
module</a> in
darkroom to find out the value of the neutral gray field on the color target.</p>
<ul>
<li>Open the first RAW file in darkroom and expand the global color picker module
on the left.</li>
<li>Select <em>area</em>, <em>mean</em> and <em>Lab</em> in the color picker and use the eye-dropper
to select the neutral gray field of your target. On the Color Checker it’s
on the bottom right. Here is an example:</li>
</ul>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/04_darktable_colorpicker.png" alt="darktable global color picker" width="516" />
<figcaption><b>Figure 4:</b> Determining the color of the neutral white patch
</figcaption></figure>

<ul>
<li><p>If the value displayed in the color picker module matches the L-value of the
patch or is close (a -2/+0 tolerance, meaning L=94 to L=96 is acceptable), give the RAW
file and the corresponding JPEG file 5
stars. In the picture above it is the first value of <em>(96.491, -0.431,
3.020)</em>, meaning <em>L=96.491</em>, which is what you&rsquo;re looking for on
this color target. You might be looking for e.g. <em>L=92</em> if you are using a
different Color Checker. See above for how to find out the L-value for your
target.</p>
</li>
<li><p>For real color profiling this is <em>very</em> important to get right. Additionally
you want to check that the JPEG registers an L-value between 96 and 98 (a 0/+2
tolerance). You do not want overexposure here (L=100 is pure white)! If your
images are overexposed, your profile will actually darken the images (which
is not what you want).</p>
</li>
<li><p>For profile extraction this is less important, as darktable-chart extracts
the differences between the RAW and the JPEG and assumes the camera&rsquo;s
exposure level was correct. This means that if your camera &ldquo;thinks&rdquo; a good exposure
is L=98 for the JPEG, and the RAW reads as L=85, your profile needs to
reproduce that difference to achieve the same effect.</p>
</li>
</ul>
<h2 id="exporting-images-for-darktable-chart">Exporting images for darktable-chart<a href="#exporting-images-for-darktable-chart" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For exporting we need to select <em>Lab</em> as output color profile. This color space
is not visible in the combo box by default. You can enable it by starting
darktable with the following command line argument:</p>
<pre><code>darktable --conf allow_lab_output=true
</code></pre><p>Or you can enable it permanently by setting allow_lab_output to TRUE in darktablerc. Make
sure that you have closed darktable before making this change, then reopen it
(darktable writes to this file and may overwrite your change if you edit it while
darktable is running).</p>
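<p>If you prefer to make the change from the shell, here is a minimal sketch. It assumes the default config location shown below and that darktable is closed:</p>

```shell
# Sketch: enable Lab output permanently from the shell.
# Assumes the default config path; close darktable first -- it rewrites
# this file on exit and could undo the change.
CONF="$HOME/.config/darktable/darktablerc"
mkdir -p "$(dirname "$CONF")"
if grep -q '^allow_lab_output=' "$CONF" 2>/dev/null; then
  sed -i 's/^allow_lab_output=.*/allow_lab_output=TRUE/' "$CONF"
else
  echo 'allow_lab_output=TRUE' >> "$CONF"
fi
```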
<pre><code>~/.config/darktable/darktablerc
allow_lab_output=TRUE
</code></pre><p>As the output format select “PFM (float)” and for the export path you can use:</p>
<pre><code>$(FILE_FOLDER)/PFM/$(MODEL)_ISO$(EXIF_ISO)_$(FILE_EXTENSION)
</code></pre><p>Remember to select the <em>Lab</em> output color profile here as well.</p>
<p>You need to export all the RAW and JPEG files, not just the RAWs.</p>
<p><strong>Select all 5 star RAW and JPEG files and export them.</strong></p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/05_darktable_export.png" alt="darktable export dialog" width="516" height="631">
<figcaption><b>Figure 5:</b> Exporting the images for profiling
</figcaption></figure>
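<p>To illustrate, with hypothetical values for a Sony body, the export path pattern shown above would expand like this:</p>

```shell
# Hypothetical values, just to show how the export pattern expands
MODEL=ILCE-7M3; EXIF_ISO=100; FILE_EXTENSION=arw; FILE_FOLDER=/photos/profiling
echo "$FILE_FOLDER/PFM/${MODEL}_ISO${EXIF_ISO}_${FILE_EXTENSION}"
# -> /photos/profiling/PFM/ILCE-7M3_ISO100_arw
```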

<h2 id="profiling-with-darktable-chart">Profiling with darktable-chart<a href="#profiling-with-darktable-chart" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Before we can start you need the chart file for your color target. The chart
file describes the layout of the color checker. For example, it tells the
profiling software where the gray ramp is located or which field contains
which color. For the &ldquo;X-Rite Colorchecker Passport Photo&rdquo; there is a
<a href="ColorCheckerPassport.cht">ColorCheckerPassport.cht</a> file provided by
ArgyllCMS. You can find it here:</p>
<pre><code>/usr/share/color/argyll/ref/ColorCheckerPassport.cht
</code></pre><p>Now it is time to start darktable-chart. The initial screen will look like
this:</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/06_darktable-chart_startup.png" alt="darktable-chart startup" width="516" height="393">
<figcaption><b>Figure 6:</b> The darktable-chart screen after startup
</figcaption></figure>

<h3 id="source-image">Source Image<a href="#source-image" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In the source image tab, select your PFM exported RAW file as <em>image</em> and for
<em>chart</em> your Color Checker chart file. Then fit the displayed grid on your
image.</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/07_darktable-chart_source_image.png" alt="darktable-chart source image" width="516" height="393">
<figcaption><b>Figure 7:</b> Selecting the source image in darktable-chart
</figcaption></figure>

<p>Make sure that the inner rectangles of the grid are completely inside their
color fields, see Figure 8. If the grid is too big, you can use the size slider in the
top right corner to adjust it. Better too small than too large.</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/08_darktable-chart_source_image_select.png" alt="darktable-chart source image with grid" width="516" height="393">
<figcaption><b>Figure 8:</b> Placing the chart grid on the source image
</figcaption></figure>

<h3 id="reference-values">Reference values<a href="#reference-values" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is the only step where the process diverges for <em>real color</em> vs camera
profile creation.</p>
<p>If you are creating a color profile to match the manufacturer&rsquo;s in-camera color
processing, you will want to select <em>color chart image</em> and, as the
<em>reference image</em>, select the PFM-exported JPEG file which corresponds to the
RAW file in the source image tab. Once opened, you need to resize the grid again
to match the Color Checker in your image. Adjust the size with the slider if
necessary.</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/09_darktable-chart_reference_values.png" alt="darktable-chart selecting reference values" width="516" height="393">
<figcaption><b>Figure 9:</b> Selecting the reference value for profiling in darktable-chart
</figcaption></figure>

<p>If you are creating a color profile for <em>real color</em>, select the mode as
<em>cie/it8 file</em> and load the corresponding CIE file for your color target. If
you have issues with this, run darktable-chart from the CLI and check the output.
I found that my chart would not open with:</p>
<pre><code>error with the IT8 file, can&#39;t find the SAMPLE_ID column
</code></pre><p>It’s worth checking the ‘Lab (reference)’ values at the bottom of the display
to ensure they match what you expect and were correctly loaded. I saw some
cool (but incorrect) results when they did not load!</p>
<h3 id="process">Process<a href="#process" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In this tab you&rsquo;re asked to select the <em>patches with the gray ramp</em>. For the
&lsquo;X-Rite Color Checker Passport&rsquo; these are the &lsquo;NEU1 .. NEU8&rsquo; fields. Newer
versions of darktable detect the gray ramp automatically! The input field
<em>number of final patches</em> defines how many editable color patches the resulting
style will use within the color look up table module. More patches give a
better result but slow down the process. I think 28 is a good compromise, but
you might want to use the maximum of 49.</p>
<p>Once you have done this, click on &lsquo;process&rsquo; to start the calculation. The
quality of the result, in terms of average delta E and maximum delta E, is
displayed below the button. These values show how closely the resulting style,
applied to the source image, will match the reference values &ndash; the
lower the better.</p>
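<p>For intuition, the delta E numbers can be read as distances in Lab space. Here is a minimal sketch of the simplest variant (CIE76, a plain Euclidean distance; darktable-chart may use a newer delta E formula):</p>

```shell
# CIE76 delta E: Euclidean distance between two Lab triplets.
# (Newer delta E variants weight the terms differently.)
deltaE() {  # usage: deltaE L1 a1 b1 L2 a2 b2
  awk -v L1="$1" -v a1="$2" -v b1="$3" -v L2="$4" -v a2="$5" -v b2="$6" \
    'BEGIN { printf "%.3f\n", sqrt((L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2) }'
}
deltaE 96.491 -0.431 3.020 96.0 0.0 3.0
```

<p>A delta E of about 1 is barely perceptible; average values below roughly 2 to 3 generally indicate a good fit.</p>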
<p>You must click &lsquo;process&rsquo; each time you change the source image or reference chart
to generate the new profile. If &lsquo;process&rsquo; is greyed out, simply
toggling the gray ramp setting will reactivate it.</p>
<p>After running ‘process’, click on ‘export’ to save the darktable style.</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/10_darktable-chart_process.png" alt="darktable-chart export" width="516" height="393">
<figcaption>
<b>Figure 10:</b> Processing the image in darktable-chart
</figcaption>
</figure>

<p>In the export window you should already get a good name for the style. Add a
leading zero to ISO values smaller than 1000 to get correct sorting in the styles
module, for example: <em>ILCE-7M3_ISO0100_JPG.dtstyle</em>. The JPG in the name
indicates that we fitted against a JPEG file. If you fitted against a CIE file,
remove the CIE filename from the style name. If you applied a creative style
(for example, a film emulation or in-camera filter) to the JPEG, consider
adding it at the end of the file name and style name.</p>
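<p>If you script your exports, a tiny helper (the function name is hypothetical, mine only) can enforce the zero-padded naming scheme:</p>

```shell
# Hypothetical helper: compose a dtstyle file name with the ISO zero-padded
# to four digits so styles sort correctly in the styles module.
style_name() {  # usage: style_name MODEL ISO VARIANT
  printf '%s_ISO%04d_%s.dtstyle\n' "$1" "$2" "$3"
}
style_name ILCE-7M3 100 JPG
# -> ILCE-7M3_ISO0100_JPG.dtstyle
```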
<h2 id="importing-your-dtstyle-in-darktable">Importing your dtstyle in darktable<a href="#importing-your-dtstyle-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To use your newly created style, you need to import it in the <a href="https://www.darktable.org/usermanual/en/styles.html">style
module</a> in the lighttable.
In the lighttable, open the module on the right and click on &lsquo;import&rsquo;. Select
the dtstyle file you created to add it. Once imported, you can select a raw file
and then double-click on the style in the &lsquo;style module&rsquo; to apply it.</p>
<p>Open the image in darkroom and you will notice that the <a href="https://www.darktable.org/usermanual/en/modules.html#base_curve">base
curve</a> has
been disabled and a few modules have been enabled. The additional modules activated
are normally: <a href="https://www.darktable.org/usermanual/en/color_group.html#input_color_profile">input color
profile</a>,
<a href="https://www.darktable.org/usermanual/en/color_group.html#color_look_up_table">color lookup
table</a>
and <a href="https://www.darktable.org/usermanual/en/tone_group.html#tone_curve">tone
curve</a>.</p>
<h2 id="verifying-your-profile">Verifying your profile<a href="#verifying-your-profile" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To verify the style you created, you can apply it to one of the RAW files
you took for profiling, then use the global color picker to compare the
colors in the processed RAW with those in the corresponding JPEG file.</p>
<p>I also shoot a few normal pictures with nice colors, like flowers, in RAW and
JPEG and then compare the results. Sometimes some colors can be off, which can
indicate that your profiling pictures are not the best. This can be caused by
clouds, glare, or the wrong time of day. Redo the shots until
you get a result you&rsquo;re satisfied with.</p>
<p>Sadly this is a trial and error process, so you may have to create a number
of profiles before you find the results you want. It&rsquo;s a good idea to
read this article again to see if you missed any important steps.</p>
<h2 id="how-does-the-result-look-like-">What does the result look like?<a href="#how-does-the-result-look-like-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In the following screenshot (Figure 11) you can see the tone curve calculated by
darktable-chart and darktable&rsquo;s Sony base curve. The tone curve is based on the color LUT; it will
look flat if you apply it without the LUT.</p>
<figure>
<img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/11_darktable_curves.png" alt="darktable base curve vs. tone curve" width="516" height="393">
<figcaption>
<b>Figure 11:</b> Comparison of the default base curve with the new generated tone curve
</figcaption>
</figure>

<p>Here is a comparison between the base curve for Sony on the left and the
dtstyle (color LUT + tone curve) created with darktable-chart:</p>
<figure>
<a href="12_darktable_style_compare.png"><img src="https://pixls.us/articles/profiling-a-camera-with-darktable-chart/12_darktable_style_compare.png" alt="darktable comparison" width="516" height="393"></a>
<figcaption>
<b>Figure 12:</b> Side by side comparison on an image (left the standard base curve, right the calculated dtstyle)
</figcaption>
</figure>

<h2 id="other-ideas">Other ideas<a href="#other-ideas" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This process also works for extracting in-body black and white profiles, as
well as creative color profiles. I see a significant improvement in black
and white profiles from this process over the use of some of the black and white
modules in darktable.</p>
<p>You may find that the lowest-ISO profile provides pretty good results for
higher ISO values as well. This will save you a lot of time profiling and lets
you blanket-apply a single profile to all your images quickly. This is highly
dependent on your camera, however, so experiment with it.</p>
<p>These profiles <em>should</em> work in all light conditions, provided your white
balance is correct. Since you now have a color target, you should always take
one photo of it so you can correct the white balance later.</p>
<h2 id="discussion">Discussion<a href="#discussion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As always, the ways to get better colors are open for discussion, and the process
can be improved in collaboration.</p>
<p>Feedback is very welcome.</p>
<p>Thanks to the darktable developers for such a great piece of software! :-)</p>
<p>William Brown has contributed to the article, based on his profiling experience
following this tutorial.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[How to create camera noise profiles for darktable]]></title>
            <link>https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/</link>
            <guid isPermaLink="true">https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/</guid>
            <pubDate>Mon, 16 Apr 2018 00:00:07 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/lede_gradient.jpg" /><br/>
                <h1>How to create camera noise profiles for darktable</h1> 
                <h2>An easy way to create correct profiling pictures</h2>  
                <p>[Article updated on: 2019-11-26]</p>
<h2 id="what-is-noise-">What is noise?<a href="#what-is-noise-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Noise in digital images is similar to film grain in analogue photography.  In
digital cameras, noise is either created by the amplification of digital
signals or heat produced by the sensor. It appears as random, colored speckles
on an otherwise smooth surface and can significantly degrade image quality.</p>
<p>Noise is always present, and if it gets too pronounced, it detracts from the
image and needs to be mitigated. Removing noise can decrease image quality or
sharpness. There are different algorithms to reduce noise, but the best option
is to have per-camera profiles that describe the noise pattern a camera
model produces.</p>
<p>Noise reduction is an image restoration process. You want to remove the digital
artifacts from the image in such a way that the original image is discernible.
These artifacts can be just some kind of grain (luminance noise) or colorful,
disturbing dots (chroma noise). It can either add to a picture or detract from
it. If the noise is disturbing, we want to remove it. The following pictures
show a picture with noise and a denoised version:</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/example_noise.jpg" alt="Noisy cup" title="Image with noise" width="760" height="507">
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/example_denoised.jpg" alt="Denoised cup" title="Denoised image" width="760" height="507">
</figure>

<p>To get the best noise reduction, we need to generate noise profiles for each
ISO value for a camera.</p>
<h2 id="creating-the-pictures-for-noise-profiling">Creating the pictures for noise profiling<a href="#creating-the-pictures-for-noise-profiling" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For every ISO value your camera has, you have to take a picture. The pictures
need to be exposed a particular way to gather the information correctly. The
photos need to be out of focus with a widespread histogram like in the
following image:</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/histogram.png" alt="Histogram" width="516" height="271">
</figure>

<p>We need overexposed and underexposed areas, but most importantly the grey
areas in between. These areas contain the information we are looking for.</p>
<p>Let&rsquo;s go through the noise profile generation step by step. To make it easier
to capture the required pictures, we will first build a stencil.</p>
<h3 id="building-a-profiling-testbed">Building a profiling testbed<a href="#building-a-profiling-testbed" class="header-link"><i class="fa fa-link"></i></a></h3>
<h4 id="requiements">Requirements<a href="#requiements" class="header-link"><i class="fa fa-link"></i></a></h4>
<ul>
<li>A dark room (wait till night time)</li>
<li>Monitor</li>
<li>Printer</li>
<li>Sheets of black thick paper (DIN A3, &gt;= 200g/m²)</li>
<li>White paper</li>
<li>Scissors</li>
<li>Sellotape (Tesafilm)</li>
</ul>
<p>First you need to get some thicker black paper or cardboard. No light should shine
through it! Then you need to print out a gradient on white paper. Light
should shine through the white paper!</p>
<p><a href="bw_gradient.pdf">Print this black to white gradient (PDF)</a></p>
<p>I used two sheets of thick black paper (DIN A3, &gt;= 200g/m²). You need to
be able to cover your monitor with the black paper. Put the printed gradient in
the middle and draw around it. On three sides (bottom, left, top) make the
window smaller by 1 cm, see Figure 1. On the right we need to leave a gap.</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/stencil_step1.jpg" alt="Stencil Step 1" width="760" height="507">
<figcaption>
<b>Figure 1:</b> Drawn window reduced by 1 cm on the bottom, left and top.
</figcaption>
</figure>

<p>Next, cut out the window and tape the gradient onto the black paper like
in Figure 2. It is important that there is a gap between the white and the
black paper on the white side of the gradient. We need light for an overexposed
area.</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/stencil_step2.jpg" alt="Stencil Step 2" width="760" height="507">
<figcaption>
<b>Figure 2:</b> The gradient taped into the window of the black paper.
</figcaption>
</figure>

<p>Once you have done that, go to your monitor and make it all white. You can use an
<a href="white.png">all-white image</a> for that. Then tape the sheets to your monitor
like in Figure 3.</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/stencil_step3.jpg" alt="Stencil Step 3" width="760" height="507">
<figcaption>
<b>Figure 3:</b> The sheets of black paper taped to the monitor.
</figcaption>
</figure>

<h2 id="taking-the-pictures">Taking the pictures<a href="#taking-the-pictures" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>It is time to get your camera. You need to shoot in RAW. It is best to turn off
any noise reduction, especially long exposure noise reduction. Mount the camera
on a tripod and use a lens between 35 mm and 85 mm (full frame). I used an 85 mm
f/1.4 lens.</p>
<p>Make sure the gradient fills most of the frame. Set your camera to manual focus
and focus on infinity. Select the manual mode of your camera and choose the
fastest aperture and ISO 100. Depending on the lens you&rsquo;re using you might want
to close the aperture; for me f/1.4 was too blurry and I closed it to f/4.0.
You don&rsquo;t want to see any edges or structure in the paper, but the image
shouldn&rsquo;t be so blurry that the gradient disappears: you want smooth transitions
between the different lighting zones (black -&gt; grey -&gt; white) like in Figure 4.</p>
<p>Now you need to set the shutter speed. Start with a really dark picture, then
lengthen the shutter speed until the gap that lets through the white of the
monitor is overexposed (pure white), see Figure 4. The black around the white
paper should be underexposed (pure black).</p>
<figure>
<img src="https://pixls.us/articles/how-to-create-camera-noise-profiles-for-darktable/noise_example_shot.jpg" alt="Example noise picture" title="Example shot for noise" width="760" height="505">
<figcaption>
<b>Figure 4:</b> Example shot for noise.
</figcaption>
</figure>

<p>Now you need to take a picture for each ISO value of your camera. When you
increase the ISO value, you need to shorten the shutter speed accordingly!</p>
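<p>As a sketch of the arithmetic (the base values below are only examples, adjust them to your scene): the shutter time for each ISO follows from keeping the product of ISO and exposure time constant.</p>

```shell
# Sketch: keep the exposure constant across the ISO series -- every ISO
# doubling halves the required shutter time. Base values are examples.
base_iso=100
base_shutter=1   # seconds at ISO 100
for iso in 100 200 400 800 1600 3200 6400; do
  awk -v i="$iso" -v bi="$base_iso" -v bs="$base_shutter" \
    'BEGIN { printf "ISO %-5d -> %g s\n", i, bs * bi / i }'
done
```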
<h2 id="creating-the-noise-profiles">Creating the noise profiles<a href="#creating-the-noise-profiles" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="step-1">STEP 1<a href="#step-1" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Run</p>
<pre><code>/usr/lib/darktable/tools/darktable-gen-noiseprofile --help
</code></pre><p>If this gives you the help text of the tool, continue with STEP 2; otherwise go to
STEP 1a. Packages for openSUSE, Fedora, Ubuntu and Debian containing the noise
tools can be found
<a href="https://software.opensuse.org/download.html?project=graphics:darktable&amp;package=darktable">here</a>.</p>
<h3 id="step-1a">STEP 1a<a href="#step-1a" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Your darktable installation doesn&rsquo;t offer the noise tools, so you need to
compile them yourself. Before you start, make sure that you have the following
dependencies installed on your system:</p>
<ul>
<li>git</li>
<li>gcc</li>
<li>make</li>
<li>gnuplot</li>
<li>convert (ImageMagick)</li>
<li>darktable-cli</li>
</ul>
<p>Get the darktable source code using git:</p>
<pre><code>git clone https://github.com/darktable-org/darktable.git
</code></pre><p>Now change into the source directory and build the tools for creating noise profiles:</p>
<pre><code>cd darktable
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/darktable -DBUILD_NOISE_TOOLS=ON ..
cd tools/noise
make
sudo make install
</code></pre><h3 id="step-2">STEP 2<a href="#step-2" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Download the pictures from your camera and change to the directory on the
command line:</p>
<pre><code>cd /path/to/noise_pictures
</code></pre><p>and run the following command:</p>
<pre><code>/usr/lib/darktable/tools/darktable-gen-noiseprofile -d $(pwd)
</code></pre><p>or if you had to download and build the source, run:</p>
<pre><code>/opt/darktable_source/lib/tools/darktable-gen-noiseprofile -d $(pwd)
</code></pre><p>This will automatically do everything for you. Note that this can take quite
some time to finish &ndash; it took 15 to 20 minutes on my machine. If a
picture is not exposed correctly, the tool will tell you the image name and you
will have to recapture the picture at that ISO. Remove the badly exposed picture first.</p>
<p>The tool will tell you, once completed, how to test and verify the
noise profiles you created.</p>
<p>Once the tool has finished, you end up with a tarball you can send to the darktable
project for inclusion. You can open an issue <a href="https://github.com/darktable-org/darktable/issues">here</a>.</p>
<p>The interesting files are the <code>presets.json</code> file (darktable input) and, for the
developers, the <code>noise_result.pdf</code> file. You can find an example PDF
<a href="ilce-7m3_noise_result.pdf">here</a>. It is a
collection of diagrams showing the histogram for each picture and the results
of the calculations.</p>
<p>A detailed explanation of the diagrams and the math behind it can be found in
<a href="https://www.darktable.org/2012/12/profiling-sensor-and-photon-noise/">the original noise profile
tutorial</a>
by Johannes Hanika.</p>
<p>Feedback is very much welcome in the comments below!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC 2.2]]></title>
            <link>https://pixls.us/blog/2018/02/g-mic-2-2/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2018/02/g-mic-2-2/</guid>
            <pubDate>Wed, 21 Feb 2018 21:18:32 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2018/02/g-mic-2-2/lede_gmic_equalize_hsi.jpg" /><br/>
                <h1>G'MIC 2.2</h1> 
                <h2>New features and filters!</h2>  
                <p>The <a href="https://www.greyc.fr/?page_id=443&amp;lang=en">IMAGE team</a> of the <a href="https://www.greyc.fr">GREYC</a> laboratory (UMR <a href="http://www.cnrs.fr">CNRS</a> 6072, Caen, France) is pleased to announce the release of a new <strong>2.2</strong> version of <a href="http://gmic.eu"><em>G’MIC</em></a>, its open-source, generic, and extensible framework for <a href="https://en.wikipedia.org/wiki/Image_processing">image processing</a>. As <a href="https://pixls.us/blog/2017/06/g-mic-2-0/">we already did in the past</a>, we take this opportunity to look at the latest notable features added since the previous major release (<strong>2.0</strong>, last June).
<!--more--></p>
<hr>
<ul>
<li><a href="http://gmic.eu">The G’MIC project</a></li>
<li><a href="https://twitter.com/gmic_ip">Twitter feed</a></li>
<li><a href="https://gmicol.greyc.fr">The G’MIC Online Web Service</a></li>
</ul>
<hr>
<p><em>Note 1: click on a picture to view a larger version.</em>
<em>Note 2: This is a translation of an original article, in French, published on <a href="http://linuxfr.org/news/g-mic-2-2-v-la-les-filtres">Linuxfr</a></em>.</p>
<h1 id="1-context-and-recent-evolutions">1. Context and recent evolutions</h1>
<p><em>G’MIC</em> is a free and open-source software developed since August 2008 (distributed under the <a href="http://www.cecill.info/">CeCILL</a> license), by folks in the <a href="https://www.greyc.fr/image">IMAGE</a> team at the <a href="https://www.greyc.fr/">GREYC</a>, a French public research laboratory located in Caen and supervised by three institutions: the <a href="http://www.cnrs.fr">CNRS</a>, the <a href="http://www.unicaen.fr/">University of Caen</a>, and the <a href="http://www.ensicaen.fr/">ENSICAEN</a> engineering school. This team is made up of researchers and lecturers specialized in the fields of algorithms and mathematics for image processing.
As one of the main developers of <em>G&rsquo;MIC</em>, I wanted to sum up the work we&rsquo;ve done on this software over the last few months.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/logo_gmic.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/logo_gmic.png" alt="G&#39;MIC logo"></a>
<figcaption>Fig. 1.1: The G’MIC project logo, and its cute little mascot “Gmicky” (designed by <a href="http://www.davidrevoy.com/">David Revoy</a>).
</figcaption></figure>

<p><em>G’MIC</em> is multi-platform (GNU/Linux, MacOS, Windows …) and provides many ways of manipulating <em>generic</em> image data, i.e. still images or image sequences acquired as hyperspectral 2D or 3D floating-point arrays (including usual color images). More than <a href="http://gmic.eu/reference.shtml">950 different image processing functions</a> are already available in the <em>G’MIC</em> framework, this number being expandable through the use of the <em>G’MIC</em> scripting capabilities.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_220.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_220.png" alt="G&#39;MIC plugin for GIMP"></a>
<figcaption>Fig.1.2: The G’MIC-Qt plugin for GIMP, currently the most popular G’MIC interface.
</figcaption></figure>

<p>Since the last major version release there have been two important events in the project life:</p>
<h2 id="1-1-port-of-the-g-mic-qt-plugin-to-krita"><a href="#1-1-port-of-the-g-mic-qt-plugin-to-krita" class="header-link-alt">1.1. Port of the <em>G’MIC-Qt</em> plugin to <a href="http://www.krita.org"><em>Krita</em></a></a></h2>
<p>When we released version <strong>2.0</strong> of <em>G’MIC</em> a few months ago, we were happy to announce a complete rewrite (in <em><a href="https://en.wikipedia.org/wiki/Qt">Qt</a></em>) of the plugin code for <a href="http://www.gimp.org"><em>GIMP</em></a>. An extra step has been taken, since this plugin has been extended to fit into the open-source digital painting software <a href="http://www.krita.org"><em>Krita</em></a>.
This has been made possible thanks to the development work of <a href="https://twitter.com/boudewijnrempt"><em>Boudewijn Rempt</em></a> (maintainer of <em>Krita</em>) and <a href="https://foureys.users.greyc.fr"><em>Sébastien Fourey</em></a> (developer of the plugin). The <em>G’MIC-Qt</em> plugin is now available for <em>Krita</em> versions <strong>3.3+</strong> and, although it does not yet implement all the I/O functionality of its <em>GIMP</em> counterpart, the feedback we’ve had so far is rather positive.
This new port replaces the old <em>G’MIC</em> plugin for <em>Krita</em> which has not been maintained for some time. The good news for <em>Krita</em> users (and developers) is that they now have an up-to-date plugin whose code is common with the one running in <em>GIMP</em> and for which we will be able to ensure the maintenance and further developments.
Note this port required the writing of a source file <a href="https://github.com/c-koi/gmic-qt/blob/master/src/Host/Krita/host_krita.cpp"><code>host_krita.cpp</code></a> (in <em>C++</em>) implementing the communication between the host software and the plugin, and it is reasonable to think that a similar effort would allow other programs to get their own version of the <em>G’MIC</em> plugin (and the <em>500</em> image filters that come with it!).</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_krita.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_krita.png" alt="G&#39;MIC for Krita"></a>
<figcaption>Fig. 1.3: Overview of the G’MIC-Qt plugin running on Krita.
</figcaption></figure>

<h2 id="1-2-cecill-c-a-more-permissive-license"><a href="#1-2-cecill-c-a-more-permissive-license" class="header-link-alt">1.2. CeCILL-C, a more permissive license</a></h2>
<p>Another major event concerns the new usage license: the <a href="http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html"><em>CeCILL-C</em></a> license (which is in the spirit of the <a href="https://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License"><em>LGPL</em></a>) is now available for some components of the <em>G&rsquo;MIC</em> framework. This license is more permissive than the previously proposed <a href="http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.html"><em>CeCILL</em></a> license (which is <a href="https://en.wikipedia.org/wiki/GNU_General_Public_License"><em>GPL</em></a>-compatible) and is more suitable for the distribution of software libraries. This license extension (now <em>dual licensing</em>) applies precisely to the core files of <em>G&rsquo;MIC</em>, i.e. its <em>C++</em> library <code>libgmic</code>. Thus, the integration of the <code>libgmic</code> features (and therefore all G&rsquo;MIC image filters) is now allowed in software that is not itself licensed under <em>GPL/CeCILL</em> (including closed-source products).
The source code of the <em>G’MIC-Qt</em> plugin, meanwhile, remains distributed under the single <em>CeCILL</em> license (<em>GPL</em>-like).</p>
<h1 id="2-fruitful-collaboration-with-david-revoy">2. Fruitful collaboration with David Revoy</h1>
<p>If you’ve followed us for a while, you may have noticed that we very often refer to the work of illustrator <a href="http://www.davidrevoy.com"><em>David Revoy</em></a> for his multiple contributions to <em>G’MIC</em>: mascot design, ideas of filters, articles or video tutorials, tests of all kinds, etc. More generally, <em>David</em> is a major contributor to the world of free  digital art, as much with the comic <a href="https://www.peppercarrot.com/"><em>Pepper &amp; Carrot</em></a> he produces (distributed under free license <em>CC -BY</em>), as with his suggestions and ongoing bug reports for the open-source software he uses.
Therefore, it seems quite natural to devote a special section to him in this article, summarizing the different ideas, contributions and experiments he has brought to <em>G’MIC</em> just recently. A <strong>big thank you</strong>, <em>David</em> for your availability, the sharing of your ideas, and for all your work in general!</p>
<h2 id="2-1-improving-the-lineart-colorization-filter"><a href="#2-1-improving-the-lineart-colorization-filter" class="header-link-alt">2.1. Improving the lineart colorization filter</a></h2>
<p>Let’s first mention the progress made on the <a href="https://pixls.us/blog/2017/06/g-mic-2-0/#3-easing-the-work-of-cartoonists-"><strong>Black &amp; White / Colorize lineart (smart-coloring)</strong></a> filter that appeared with the <strong>2.0</strong> release of <em>G’MIC</em>.
This filter is basically a <em>lineart</em> colorization assistant, developed in collaboration with <em>David</em>. It tries to automatically generate a colorization layer for a given <em>lineart</em> by analyzing the contours and geometry of that <em>lineart</em>. Following <em>David</em>’s suggestions, we were able to add a new colorization mode, named “<em>Autoclean</em>”. The idea is to automatically “clean” a coloring layer (roughly made by the user) provided in addition to the <em>lineart</em> layer, using the same geometric analysis as the previous colorization modes.
The use of this new mode is illustrated below, where a given <em>lineart</em> (<em>left</em>) has been colorized approximately by the user. From the two layers, <em>lineart</em> + <em>color layer</em>, the “<em>Autoclean</em>” algorithm generates an image (<em>right</em>) where the colors do not spill over the <em>lineart</em> contours (even across “virtual” contours that are not closed). The result is not always perfect, but it nevertheless reduces the time spent on the tedious process of colorization.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_autoclean.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_autoclean.png" alt="Gmic_autoclean"></a>
<figcaption>Fig. 2.1: The new “Autoclean” mode of the lineart colorization filter can automatically “clean” a rough colorization layer.
</figcaption></figure>

<p>Note that this filter is also equipped with a new hatch detection module, which makes it possible to avoid generating too many small areas when using the previously available random colorization mode, particularly when the input <em>lineart</em> contains a large number of hatches (see figure below). </p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_hatch_detection2.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_hatch_detection2.png" alt="Gmic_hatch_detect"></a>
<figcaption>Fig. 2.2: The new hatching detection module limits the number of small colored areas generated by the automatic random coloring mode.
</figcaption></figure>

<h2 id="2-2-color-equalizer-in-hsi-hsl-and-hsv-spaces"><a href="#2-2-color-equalizer-in-hsi-hsl-and-hsv-spaces" class="header-link-alt">2.2. Color equalizer in HSI, HSL and HSV spaces</a></h2>
<p>More recently, <em>David</em> suggested the idea of a filter to separately vary the hue and saturation of colors having certain levels of luminosity. The underlying idea is to give the artist the ability to draw or paint digitally using only grayscale, then colorize the masterpiece afterwards by re-assigning specific colors to the different gray values of the image. The result naturally has a limited color range, but the overall color mood is already in place: the artist only has to retouch the colors locally rather than colorize the entire painting by hand.
The figure below illustrates the use of this new filter, <strong>Colors/Equalize HSI/HSL/HSV</strong>, available in the <em>G’MIC</em> plugin: each range of values can be finely adjusted, resulting in preliminary colorizations of black-and-white paintings.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_equalize_hsi.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_equalize_hsi.png" alt="Equalize HSI1"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_equalize_hsi5.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_equalize_hsi5.png" alt="Equalize HSI2"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_equalize_hsi2.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_equalize_hsi2.png" alt="Equalize HSI3"></a>
<figcaption>Fig. 2.3: Equalization in HSI/HSL/HSV colorspaces makes it easy to set the global color mood for B&amp;W paintings.
</figcaption></figure>
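The filter’s exact internals aren’t given in this article, but the core idea (pick a hue and saturation per gray level while leaving the lightness itself untouched) can be sketched in a few lines of Python with the standard <code>colorsys</code> module. The <code>hue</code>/<code>sat</code> curves below are made-up examples, not G’MIC’s actual parameters:

```python
import colorsys

def colorize_gray(values, hue_for, sat_for):
    """Map each gray value in [0, 1] to an RGB color whose HLS
    lightness equals the gray value, so brightness is preserved."""
    out = []
    for v in values:
        h = hue_for(v)  # hue picked per gray level
        s = sat_for(v)  # saturation picked per gray level
        out.append(colorsys.hls_to_rgb(h, v, s))
    return out

# Illustrative curves: cool blue shadows shifting toward warm highlights.
hue = lambda v: 0.6 - 0.45 * v
sat = lambda v: 0.5
pixels = colorize_gray([0.0, 0.5, 1.0], hue, sat)
# Black stays black and white stays white; mid-gray gets a tint.
```

Since the lightness channel is passed through unchanged, extreme values are fixed points of the mapping, which is exactly the property a plain gradient map does not guarantee.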

<p>Note that the effect is equivalent to applying a color gradient to the different gray values of the image, something that could already be done quite easily in GIMP. The main interest here is that we can ensure the pixel brightness remains unchanged during the color transformation, which is not an obvious property to preserve when using a gradient map.
What is nice about this filter is that it applies to color photographs as well. You can change the hue and saturation of colors with a certain brightness, with an effect that can sometimes be surprising, as with the landscape photograph shown below.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_eqhsi_tree_all.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_eqhsi_tree_all.png" alt="Equalize HSI4"></a>
<figcaption>Fig. 2.4: The filter “Equalize HSI/HSL/HSV” applied on a color photograph makes it possible to change the colorimetric environment, here in a rather extreme way.
</figcaption></figure>

<h2 id="2-3-angular-deformations"><a href="#2-3-angular-deformations" class="header-link-alt">2.3. Angular deformations</a></h2>
<p>Another of <em>David</em>’s ideas concerned the development of a random local deformation filter with the ability to generate <em>angular</em> deformations. From an algorithmic point of view, this seemed relatively simple to achieve.
Note that once the implementation was done (in concise style: <a href="https://pastebin.com/VurLncvs">12 lines!</a>) and pushed into the official filter updates, <em>David</em> just had to press the “<em>Update Filters</em>” button of his <em>G’MIC-Qt</em> plug-in, and the new effect <strong>Deformations/Crease</strong> was immediately there for testing. This is one of the practical benefits of developing new filters in the <em>G’MIC</em> script language!</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_crease.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_crease.png" alt="G&#39;MIC Crease"></a>
<figcaption>Fig. 2.5: New effect “Crease” for local angular deformations.
</figcaption></figure>

<p>However, I must admit I didn’t really have an idea of what this could be useful for in practice. But the good thing about cooperating with <em>David</em> is that <em>he</em> knows exactly what he’s going to do with it! For instance, giving a crispy look to the edges of his comic panels, or improving the rendering of his alien death ray.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_crease2.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_crease2.png" alt="G&#39;MIC Crease 2"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_crease3.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_crease3.png" alt="G&#39;MIC Crease 3"></a>
<figcaption>Fig. 2.6: Using the G’MIC “Crease” filter for two real cases of artistic creation.
</figcaption></figure>

<h1 id="3-filters-filters-filters-">3. Filters, filters, filters…</h1>
<p><em>David Revoy</em> is not the only user of <em>G’MIC</em>: we sometimes count up to 900 daily downloads from the main project website. So it happens, of course, that other enthusiastic users inspire new effects, especially during those lovely discussions that take place on our <a href="https://discuss.pixls.us/c/software/gmic">forum</a>, kindly made available by the <a href="https://pixls.us/"><em>PIXLS.US</em></a> community.</p>
<h2 id="3-1-bring-out-the-details-without-creating-halos-"><a href="#3-1-bring-out-the-details-without-creating-halos-" class="header-link-alt">3.1. Bring out the details without creating “halos”</a></h2>
<p>Many photographers will tell you that it is not always easy to enhance the details in digital photographs without creating naughty <a href="https://en.wikipedia.org/wiki/Visual_artifact">artifacts</a> that often have to be masked manually afterwards. Conventional contrast enhancement algorithms are most often based on increasing the local variance of pixel lightness, or on equalizing their local histograms. Unfortunately, these operations are generally done over neighborhoods with a fixed size and geometry, where each pixel of a neighborhood always carries the same weight in the statistical calculations behind these algorithms.
This is simpler and faster, but from a qualitative point of view it is not a great idea: we often get “halos” around contours that were already highly contrasted in the image. This classic phenomenon is illustrated below with the application of the <em>Unsharp mask</em> filter (the one present by default in GIMP) on part of a landscape image. It generates an undesirable “halo” at the boundary between the mountain and the sky (particularly visible in the full-resolution images).</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_highland01.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_highland01.png" alt="G&#39;MIC details filters"></a>
<figcaption>Fig. 3.1: Unwanted “halo” effects often occur with conventional contrast enhancement filters.
</figcaption></figure>
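To see where the halo comes from, here is a minimal 1-D sketch of the unsharp-mask principle (original plus an amplified difference with a blurred copy). The box blur and the sample values are illustrative, not GIMP’s actual implementation:

```python
def box_blur(signal, radius=2):
    """Simple box blur; samples outside the signal reuse the edge value."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.0, radius=2):
    """Classic unsharp mask: original + amount * (original - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A hard edge between a dark and a bright region:
edge = [10.0] * 8 + [200.0] * 8
sharpened = unsharp_mask(edge, amount=1.0)
# The result overshoots on both sides of the edge -- the "halo":
# values above 200 on the bright side, below 10 on the dark side.
```

Because every pixel in the blur window is weighted identically, pixels next to a strong edge get a blurred estimate “contaminated” by the other side, and the correction term overshoots.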

<p>The challenge for detail enhancement algorithms is to analyze the geometry of the local image structures more finely, taking geometry-adaptive local weights into account for each pixel of a given neighborhood. Put simply, we want to create <a href="https://en.wikipedia.org/wiki/Anisotropy">anisotropic</a> versions of the usual enhancement methods, steering them along the edges detected in the images.
Following this logic, we have recently added two new <em>G’MIC</em> filters, namely <strong>Details/Magic details</strong> and <strong>Details/Equalize local histograms</strong>, which try to better take the geometric content of the image into account for local detail enhancement (e.g. using the <a href="https://en.wikipedia.org/wiki/Bilateral_filter">bilateral filter</a>).</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_magic_details.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_magic_details.png" alt="G&#39;MIC magic details"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_eqdetails1.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_eqdetails1.png" alt="G&#39;MIC equalize local histograms"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_eqdetails.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_eqdetails.gif" alt="G&#39;MIC equalize local histograms"></a>
<figcaption>Fig. 3.2: The new G’MIC detail enhancement filters.
</figcaption></figure>
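As a rough illustration of the edge-aware idea, here is a minimal 1-D bilateral filter: each neighbour is weighted by its value difference as well as its distance, so strong contours are left nearly untouched. This is only a sketch of the principle, not the code of the G’MIC filters above:

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=30.0, radius=4):
    """1-D bilateral filter: spatial Gaussian weight multiplied by a
    range weight on the value difference, so edges are preserved."""
    n = len(signal)
    out = []
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))
                 * math.exp(-((signal[i] - signal[j]) ** 2)
                            / (2.0 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

edge = [10.0] * 8 + [200.0] * 8
smoothed = bilateral_1d(edge)
# Pixels across the edge differ by 190, so their range weight is
# essentially zero: the edge survives, and no overshoot can occur
# because each output is a weighted average of existing values.
```

Contrast this with the unsharp-mask sketch earlier: a weighted average can never leave the range of its inputs, which is precisely why edge-aware enhancement avoids halos.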

<p>Thus, applying the new <em>G’MIC</em> local histogram equalization to the landscape image shown before gives something slightly different: a result with more contrast in both geometric details and colors, and reduced halos.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_highland02.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_highland02.png" alt="G&#39;MIC magic details"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_highland.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_highland.gif" alt="G&#39;MIC magic details"></a>
<figcaption>Fig. 3.3: Difference in results between the standard GIMP Unsharp Mask filter and G’MIC’s local histogram equalization for detail enhancement.
</figcaption></figure>

<h2 id="3-2-different-types-of-image-deformations"><a href="#3-2-different-types-of-image-deformations" class="header-link-alt">3.2. Different types of image deformations</a></h2>
<p>New filters applying geometric deformations to images are added to <em>G’MIC</em> on a regular basis, and this new major version <strong>2.2</strong> therefore offers a bunch of new deformation filters.
Let’s start with <strong>Deformations/Spherize</strong>, a filter that locally distorts an image to give the impression it is projected onto a 3D sphere or ellipsoid. This is the perfect filter to turn your obnoxious office colleague into a <a href="https://en.wikipedia.org/wiki/Mr._Potato_Head">Mr. Potato Head</a>!</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_spherize.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_spherize.png" alt="G&#39;MIC spherize"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_spherize.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_spherize.gif" alt="G&#39;MIC spherize"></a>
<figcaption>Fig. 3.4: Two examples of 3D spherical deformations obtained with the G’MIC “Spherize” filter.
</figcaption></figure>

<p>On the other hand, the filter <strong>Deformations/Square to circle</strong> implements the direct and inverse transformations from a square (or rectangular) domain to a disk (as described mathematically on <a href="http://squircular.blogspot.fr/2015/09/mapping-circle-to-square.html"><em>this page</em></a>), which makes it possible to generate this type of deformation.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_sqtoci.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_sqtoci.gif" alt="G&#39;MIC square to circle"></a>
<figcaption>Fig. 3.5: Direct and inverse transformations from a square domain to a disk.
</figcaption></figure>
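One well-known transform of this family is the elliptical grid mapping, a tiny sketch of which follows (this is one of the mappings discussed on the linked page, not necessarily the exact formula the filter uses):

```python
import math

def square_to_disc(x, y):
    """Elliptical grid mapping: sends the square [-1, 1]^2 onto the
    unit disc, with the square's boundary landing on the unit circle."""
    u = x * math.sqrt(1.0 - y * y / 2.0)
    v = y * math.sqrt(1.0 - x * x / 2.0)
    return u, v

# The corner (1, 1) lands exactly on the unit circle,
# while interior points stay strictly inside the disc:
u, v = square_to_disc(1.0, 1.0)
print(round(u * u + v * v, 9))  # 1.0
```

Applying the inverse mapping to pixel coordinates and sampling the source image at the result yields the disc-to-square warp; the forward mapping gives the square-to-disc one.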

<p>The effect <strong>Degradations/Streak</strong> replaces an image area masked by the user (filled with a constant color) with one or more copies of a neighboring area. It works much like the <em>GIMP</em> clone tool, but saves the user from filling the entire mask manually.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_streak.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_streak.gif" alt="G&#39;MIC streak"></a>
<figcaption>Fig. 3.6: The “Streak” filter clones parts of the image into a user-defined color mask.
</figcaption></figure>

<h2 id="3-3-artistic-abstractions"><a href="#3-3-artistic-abstractions" class="header-link-alt">3.3. Artistic Abstractions</a></h2>
<p>You might say that image deformations are nice, but sometimes you want to transform an image in a more radical way. Let’s now introduce the new effects that turn an image into a more abstract version of itself (simplification and re-rendering). These filters share a common pipeline: an analysis of the local image geometry, followed by an image synthesis step.</p>
<p>For example, the <em>G’MIC</em> filter <strong>Contours/Super-pixels</strong> locally gathers pixels with similar colors to form a partitioned image, like a puzzle, with geometric shapes that stick to the contours. This partition is obtained using the <a href="https://ivrl.epfl.ch/research/superpixels"><em>SLIC</em> method</a> (<em>Simple Linear Iterative Clustering</em>), a classic image partitioning algorithm which has the advantage of being relatively fast to compute.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_slic.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_slic.png" alt="G&#39;MIC super pixels 1"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_slic2.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_slic2.png" alt="G&#39;MIC super pixels 2"></a>
<figcaption>Fig. 3.7: Decomposition of an image in super-pixels by the Simple Linear Iterative Clustering algorithm (SLIC).
</figcaption></figure>

<p>The filter <strong>Artistic/Linify</strong> tries to redraw an input image by superimposing semi-transparent colored lines on an initially white canvas, as shown in the figure below. This effect is a re-implementation of the smart algorithm originally proposed (in JavaScript) on the site <a href="http://linify.me">http://linify.me</a>.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_linify.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_linify.png" alt="G&#39;MIC linify 1"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_linify.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_linify.gif" alt="G&#39;MIC linify 2"></a>
<figcaption>Fig. 3.8: The “Linify” effect tries to redraw an image by superimposing only semi-transparent colored lines on a white canvas.
</figcaption></figure>

<p>The effect <strong>Artistic/Quadtree variations</strong> first decomposes an image into a <a href="https://en.wikipedia.org/wiki/Quadtree"><em>quadtree</em></a>, then re-synthesizes it by drawing plain, oriented ellipses on a canvas, one ellipse for each <em>quadtree</em> leaf. This produces a rather interesting “painting” effect. It is likely that even more attractive renderings could be synthesized with more complex shapes. Surely an idea to keep in mind for the next filter update :)</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_quadtree.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_quadtree.png" alt="G&#39;MIC quadtree 1"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_qdellipse.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_qdellipse.gif" alt="G&#39;MIC quadtree 2"></a>
<figcaption>Fig. 3.9: Decomposing an image as a quadtree makes it possible to re-synthesize it using only plain colored ellipses.
</figcaption></figure>
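The decomposition step itself can be sketched in a few lines: split a square region into four quadrants whenever its values vary too much, and keep homogeneous regions as leaves. This is an illustrative sketch on a grayscale array; G’MIC’s own split criterion and the ellipse rendering differ:

```python
def quadtree(img, x, y, size, threshold, leaves):
    """Recursively split a square region until its value range falls
    below `threshold`, collecting (x, y, size, mean) leaves."""
    vals = [img[j][i] for j in range(y, y + size)
                      for i in range(x, x + size)]
    if size == 1 or max(vals) - min(vals) <= threshold:
        leaves.append((x, y, size, sum(vals) / len(vals)))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x + dx, y + dy, h, threshold, leaves)

# A 4x4 image: flat except for one bright pixel in a corner.
img = [[10] * 4 for _ in range(4)]
img[0][0] = 200
leaves = []
quadtree(img, 0, 0, 4, threshold=5, leaves=leaves)
# The three flat quadrants stay whole; the varied one splits to pixels.
print(len(leaves))  # 7
```

Each leaf then becomes one drawing primitive (here it would be an ellipse of the leaf’s mean color), so flat areas get a few large shapes and detailed areas get many small ones.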

<h2 id="3-4-are-there-any-more-"><a href="#3-4-are-there-any-more-" class="header-link-alt">3.4. “Are there any more?”</a></h2>
<p>And now that you have processed so many beautiful pictures, why not arrange them into a superb photo montage? This is precisely the role of the filter <strong>Arrays &amp; tiles/Drawn montage</strong>, which lets you create a juxtaposition of photographs very quickly, for any kind of shape.
The idea is to provide the filter with a colored template in addition to the series of photographs (<em>Fig. 3.10a</em>), then associate each photograph with a different color of the template (<em>Fig. 3.10b</em>). The arrangement is then done automatically by <em>G’MIC</em>, which resizes the images so that they appear best framed within the shapes defined by the montage template (<em>Fig. 3.10c</em>).
We made <a href="https://www.youtube.com/watch?v=CxopG_DqQj4">a video tutorial</a> illustrating the use of this specific filter.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_drawn_montage0.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_drawn_montage0.png" alt="G&#39;MIC drawn montage"></a>
<figcaption>Fig. 3.10a: Step 1: The user draws the desired organization of the montage with shapes of different colors.
</figcaption></figure>

<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_drawn_montage.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_drawn_montage.png" alt="G&#39;MIC drawn montage"></a>
<figcaption>Fig. 3.10b: Step 2: G’MIC’s “Drawn Montage” filter lets you associate a photograph with each template color.
</figcaption></figure>

<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_drawn_montage2.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_drawn_montage2.png" alt="G&#39;MIC drawn montage"></a>
<figcaption>Fig. 3.10c: Step 3: The photo montage is then automatically synthesized by the filter.
</figcaption></figure>

<p>But let’s go back to more essential questions: have you ever needed to draw gears? No?! It’s quite normal, that’s not something we do everyday! But just in case, the new <em>G’MIC</em> filter <strong>Rendering/Gear</strong> will be glad to help, with different settings to adjust gear size, colors and number of teeth. Perfectly useless, so totally indispensable!</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_gears.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_gears.png" alt="G&#39;MIC drawn montage"></a>
<figcaption>Fig. 3.11: The Gear filter, running at full speed.
</figcaption></figure>

<p>Need a satin texture right now? No?! Too bad, the filter <strong>Patterns / Satin</strong> could have been of great help!</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_satin.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_satin.png" alt="G&#39;MIC satin"></a>
<figcaption>Fig. 3.12: G’MIC’s satin filter will make your life more silky.
</figcaption></figure>

<p>And finally, to wrap up this series of <em>“effects that are useless until you need them”</em>, note the arrival of the new filter <strong>Degradations/JPEG artifacts</strong>, which simulates the appearance of <em>JPEG</em> compression artifacts due to the quantization of the <a href="https://en.wikipedia.org/wiki/Discrete_cosine_transform" title="Discrete cosine transform">DCT</a> coefficients encoding 8×8 image blocks (yes, you would get almost the same result by saving your image as a <em>JPEG</em> file with the desired quality).</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_dct.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_dct.png" alt="Simulate JPEG Artifacts"></a>
<a href="http://gmic.eu/gmic220/fullsize/gmic_dct.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_dct.gif" alt="Simulate JPEG Artifacts"></a>
<figcaption>Fig. 3.13: The “JPEG artifacts” filter simulates the image degradation due to 8×8 block DCT compression.
</figcaption></figure>
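The principle is easy to sketch: transform each 8×8 block with the DCT, then snap the coefficients to a coarse grid, which zeroes most of the small high-frequency terms. The uniform quantization step below is a simplification of JPEG’s per-coefficient quantization tables:

```python
import math

N = 8  # JPEG operates on 8x8 blocks

def dct2(block):
    """Orthonormal 2-D DCT-II of an NxN block (row index = frequency v)."""
    def a(k):
        return math.sqrt((1.0 if k == 0 else 2.0) / N)
    return [[a(u) * a(v) * sum(
                 block[y][x]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                 for x in range(N) for y in range(N))
             for u in range(N)]
            for v in range(N)]

def quantize(coeffs, step):
    """The lossy step: snap every coefficient to a multiple of `step`.
    Small (mostly high-frequency) coefficients collapse to zero."""
    return [[round(c / step) * step for c in row] for row in coeffs]

# A smooth horizontal gradient block: values 0, 16, ..., 112 in each row.
block = [[x * 16 for x in range(N)] for _ in range(N)]
q = quantize(dct2(block), step=50)
survivors = sum(1 for row in q for c in row if c != 0)
# Only a handful of low-frequency coefficients survive quantization;
# inverting the DCT on them yields the familiar blocky ringing.
```

Reconstructing the block from the few surviving coefficients (via the inverse DCT) is what produces the characteristic 8×8 blockiness and ringing the filter emulates.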

<h1 id="4-other-notable-improvements">4. Other notable improvements</h1>
<p>This review of the newly available <em>G’MIC</em> filters should not overshadow the various improvements made “under the hood”, which are equally important even if they are less visible to the user in practice.</p>
<h2 id="4-1-a-better-g-mic-qt-plugin-interface"><a href="#4-1-a-better-g-mic-qt-plugin-interface" class="header-link-alt">4.1. A better <em>G’MIC-Qt</em> plugin interface</a></h2>
<p>A big code cleaning and restructuring effort has been carried out on the <em>G’MIC-Qt</em> plugin, fixing many little inconsistencies in the <a href="https://en.wikipedia.org/wiki/Graphical_user_interface"><em>GUI</em></a>. Let’s also mention, in no particular order, some interesting new features that have appeared in the plugin:</p>
<ul>
<li>The ability to set a <a href="https://en.wikipedia.org/wiki/Timeout_(computing)"><em>timeout</em></a> when previewing computationally intensive filters.</li>
<li>Better management of the input-output parameters for each filter (with persistence, better menu location, and a reset button).</li>
<li>Maximizing the size of the preview area is now easier. You can also edit its zoom level manually and choose the language of the interface (regardless of the system language), etc.</li>
</ul>
<p>Together, all these little things improve the overall user experience.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_prefs.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_prefs.png" alt="G&#39;MIC Preferences"></a>
<figcaption>Fig. 4.1: Overview of the G’MIC-Qt plugin interface in its latest version 2.2.
</figcaption></figure>

<h2 id="4-2-improvements-in-the-g-mic-core"><a href="#4-2-improvements-in-the-g-mic-core" class="header-link-alt">4.2. Improvements in the <em>G’MIC</em> core</a></h2>
<p>Even less visible, but just as important, many improvements have landed in the <em>G’MIC</em> computational core and its associated script language interpreter. Keep in mind that all of the available filters are actually written as scripts in the <em>G’MIC</em> language, so each small improvement to the interpreter can benefit all filters at once. Without going too deep into the technical details of these internal improvements, we can highlight these points:</p>
<ul>
<li>Notable improvements to the syntax of the language itself, which come with better parsing performance (and therefore faster script execution), all with a smaller memory footprint.</li>
<li><p>The <em>G’MIC</em> built-in mathematical expression evaluator has also received various optimizations and new features, opening up even more possibilities for performing non-trivial operations at the pixel level.</p>
</li>
<li><p>Better support for raw video input/output (<code>.yuv</code> format), with support for <code>4:2:2</code> and <code>4:4:4</code> formats in addition to <code>4:2:0</code>, which was the only mode supported before.</p>
</li>
<li><p>Finally, two new animations have been added to the <em>G’MIC</em> demos menu (which is displayed e.g. when invoking <code>gmic</code> without arguments from the command-line):</p>
<ul>
<li>First, a 3D starfield animation:</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_starfield.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_starfield.gif" alt="Starfield demo"></a>
<figcaption>Fig. 4.2: New 3D starfield animation added to the G’MIC demo menu.
</figcaption>  </figure>

<ul>
<li>Second, a playable 3D version of the <a href="https://en.wikipedia.org/wiki/Tower_of_Hanoi"><em>Tower of Hanoi</em></a>:</li>
</ul>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmic_hanoi.gif"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_hanoi.gif" alt="Hanoi Demo"></a>
<figcaption>Fig. 4.3: The playable 3D version of the “Tower of Hanoi”, available in G’MIC.
</figcaption>  </figure>
</li>
<li><p>Let us also mention the introduction of the command <code>tensors3d</code>, dedicated to the 3D representation of second-order <a href="https://en.wikipedia.org/wiki/Tensor_field">tensor fields</a>. In practice, it does not only serve to make you want to eat <em>Smarties<sup>®</sup></em>! It can be used, for example, to visualize certain regions of <a href="https://en.wikipedia.org/wiki/Diffusion_MRI#Diffusion_tensor_imaging">MRI volumes of diffusion tensors</a>:</p>
  <figure>
  <a href="http://gmic.eu/gmic220/fullsize/gmic_tensors3d.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmic_tensors3d.png" alt="Tensors3d"></a>
  <figcaption>Fig. 4.4: G’MIC rendering of a 3D tensor field, with command <code>tensors3d</code>.
  </figcaption>    </figure>

</li>
</ul>
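Regarding the raw <code>.yuv</code> support mentioned above, the three chroma subsampling modes simply change how many chroma samples are stored per frame. A quick sketch of the per-frame byte counts, assuming planar 8-bit data:

```python
def yuv_frame_bytes(width, height, subsampling="4:2:0"):
    """Bytes per frame of planar 8-bit YUV for common subsampling modes.
    Chroma planes are stored at reduced resolution except in 4:4:4."""
    luma = width * height
    chroma = {
        "4:4:4": luma,       # full-resolution chroma planes
        "4:2:2": luma // 2,  # half horizontal chroma resolution
        "4:2:0": luma // 4,  # half horizontal and vertical resolution
    }[subsampling]
    return luma + 2 * chroma  # one Y plane + two chroma planes

print(yuv_frame_bytes(1920, 1080, "4:2:0"))  # 3110400
```

Since raw <code>.yuv</code> files carry no header, the reader must know the resolution and subsampling in advance, which is why supporting the extra modes matters.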
<h2 id="4-3-new-design-for-g-mic-online"><a href="#4-3-new-design-for-g-mic-online" class="header-link-alt">4.3. New design for <em>G’MIC Online</em></a></h2>
<p>To finish this tour, let us also mention the complete redesign of <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a> during 2017, done by <em>Christophe Couronne</em> and <em>Véronique Robert</em> from the development department of the <em>GREYC</em> laboratory.
<em>G’MIC Online</em> is a web service that lets you apply a subset of <em>G’MIC</em> filters to your images, directly inside a web browser. These web pages now have a <a href="https://en.wikipedia.org/wiki/Responsive_web_design">responsive design</a>, which makes them much more pleasant to use on mobile devices (smartphones and tablets). Shown below is a screenshot of this service running in <em>Chrome</em> on <em>Android</em>, on a 10″ tablet.</p>
<figure>
<a href="http://gmic.eu/gmic220/fullsize/gmicol.png"><img src="https://pixls.us/blog/2018/02/g-mic-2-2/gmicol.png" alt="G&#39;MICol"></a>
<figcaption>Fig. 4.5: New responsive design of the G’MIC Online web service, running here on a 10” tablet.
</figcaption></figure>

<h1 id="5-conclusion-and-perspectives">5. Conclusion and perspectives</h1>
<p>The overview of this new version <strong>2.2</strong> of <em>G’MIC</em> is now over.
One possible conclusion could be: “<em>There are plenty of perspectives!</em>”</p>
<p><em>G’MIC</em> is a free project that can be considered mature: the first lines of code were written almost ten years ago, and today we have a good idea of the possibilities (and limits) of the beast. We hope to see more and more interest from <a href="https://en.wikipedia.org/wiki/Free_and_open-source_software">FOSS</a> users and developers, for example in integrating the generic <em>G’MIC-Qt</em> plugin into various software focused on image or video processing.</p>
<p>The possibility of using the <em>G’MIC</em> core under the more permissive <em>CeCILL-C</em> license may also lead to interesting collaborations in the future (some companies have already approached us about this). In the meantime, we will do our best to keep developing <em>G’MIC</em> and feeding it with new filters and effects, following the suggestions of our enthusiastic users. A big thanks to them for their help and constant encouragement (the motivation to write code or articles past 11pm would not be the same without them!).</p>
<p><em>“Long live open-source image processing and artistic creation!”</em></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[LGM and Libre Graphics at SCaLE 16x]]></title>
            <link>https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/</guid>
            <pubDate>Tue, 12 Dec 2017 21:24:39 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/LGM+SCaLE.png" /><br/>
                <h1>LGM and Libre Graphics at SCaLE 16x</h1> 
                <h2>All the libre graphics!</h2>  
                <p>There are two libre graphics related meetings coming up early next year.
The annual <a href="http://libregraphicsmeeting.org/2018/" title="LGM Website">Libre Graphics Meeting</a> (in Spain this year), and something entirely new: a 
libre graphics track at SCaLE.
How exciting!</p>
<!-- more --> 
<h2 id="libre-graphics-meeting-2018"><a href="#libre-graphics-meeting-2018" class="header-link-alt">Libre Graphics Meeting 2018</a></h2>
<figure>
    <img src="https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/lgm-logo.svg" alt='LGM Logo SVG' width='689' height='332'>
</figure>

<p>The <a href="http://libregraphicsmeeting.org/2018/" title="LGM Website">Libre Graphics Meeting</a> is going to be in Seville, Spain this year.
They recently <a href="http://libregraphicsmeeting.org/2018/call-for-participation/" title="LGM CFP">published their Call for Participation</a> and are accepting presentation and talk proposals now.
Unfortunately, I won’t be able to attend this year, but there’s a pretty good chance some friendlier folks from the community will be!
We’ll update more about who will be making it out as soon as we know, and maybe we can convince someone to run another photowalk with everyone.
(On a side note, if anyone from the community is going to make it and wants a hand putting anything together for a presentation just let us know - we’re here to help.)</p>
<h2 id="libre-graphics-at-scale-california-usa-"><a href="#libre-graphics-at-scale-california-usa-" class="header-link-alt">Libre Graphics at SCaLE (California, USA)</a></h2>
<figure>
    <img src="https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/16x_logo_lg.png" alt="SCaLE 16x Logo" width='640' height='252'>
</figure>

<p>This year we have a neat announcement - due to some prodding from Nate Willis, we have been given a day at the <a href="https://www.socallinuxexpo.org/scale/16x" title="Southern California Linux Expo">Southern California Linux Expo (SCaLE)</a> to hold a Libre Graphics focused track!
The expo is at the Pasadena Convention Center, March 8-11, 2018.</p>
<p>We first had a chance to hang out with <a href="https://lwn.net/" title="LWN.net">LWN</a> editor <a href="https://twitter.com/n8willis" title="Nathan Willis on Twitter">Nate Willis</a> during the Libre Graphics Meeting 2016 in London, and later out at the <a href="https://2016.texaslinuxfest.org/" title="Texas Linux Fest 2016">Texas Linux Fest</a>.
<a href="https://www.gimp.org" title="The GIMP website">GIMP</a> was able to have both <a href="http://www.shallowsky.com/" title="Akkana Peck&#39;s website">Akkana Peck</a> and myself out to present on GIMPy stuff and host a photowalk as well.</p>
<p>The organizer for SCaLE, Ilan, was kind enough to give us a day (Friday, March 9<sup>th</sup>) and a room for all the libre graphics artists, designers, programmers, and hackers.</p>
<figure>
<img class="inline" src="https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/ullah.png">
<img class="inline" src="https://pixls.us/blog/2017/12/lgm-and-libre-graphics-at-scale-16x/paperdigits.png">
<figcaption>You could come meet the face behind these avatars.</figcaption>
</figure>

<p>I will be in attendance promoting GIMP stuff in the main track, Dr. Ullah (Isaac Ullah) will hopefully be presenting, and Mica will be there (@paperdigits) as well.
I’m pretty certain we’ll be holding a photowalk for attendees while we’re there - and we may even set up a nice headshot booth in the expo to take free headshots for folks.</p>
<p>We would <em>love</em> to see some folks out there.
If you think you might be able to make it, or even better submit a talk proposal, please come and join us!
(I was thinking about getting an AirBnB to stay in, so if folks let me know they are going to make it out we can coordinate a place to all stay together.)</p>
<h2 id="scale-libre-graphics-track-call-for-participation"><a href="#scale-libre-graphics-track-call-for-participation" class="header-link-alt">SCaLE Libre Graphics Track Call for Participation</a></h2>
<p>The libre graphics community is thrilled to announce that a special,
one-day track at SCaLE 16x will be dedicated to libre graphics
software and artists. All those who work with free and open-source
tools for creative graphics projects are invited to submit a proposal
and join us for the day!</p>
<p>SCaLE 16x will take place from March 8 to 11 of 2018 in Pasadena
California. Libre Graphics Day: SCaLE will take place at the main
SCaLE venue on Friday, March 9.</p>
<p>The libre graphics track is an opportunity for teams, contributors and
practitioners involved in Libre Graphics projects to share their
experiences, showcase new developments, and hear new and inspiring ideas.</p>
<p>By libre graphics we mean “Free, Libre and Open Source tools for
creative uses”. Libre graphics is not just about software, but extends to
standards and file formats used in creative work.</p>
<p>People from around the world who are passionate about
Free/Libre tools and their creative applications are encouraged to
submit a talk proposal. Sessions will be 30 minutes in length.</p>
<p>Developers, artists, and activists alike are invited.  First-time
presenters and established projects of all sizes are welcome to submit.</p>
<p>We are looking for:</p>
<ul>
<li>Reflections and practical sessions on promoting the philosophy
and use of Libre Graphics tools. </li>
<li>Technical presentations and workshops for developers.</li>
<li>Showcases of excellent work made using Libre Graphics tools.</li>
<li>New tools and workflows for graphics and code.</li>
<li>Reflections on the activities of existing Free/Libre and Open Source communities.</li>
</ul>
<h3 id="submit"><a href="#submit" class="header-link-alt">Submit</a></h3>
<p>Please submit your proposal to <a href="mailto:&#103;&#114;&#x61;&#x70;&#x68;&#x69;&#99;&#x73;&#x2d;&#99;&#x66;&#x70;&#64;&#x73;&#x6f;&#x63;&#x61;&#x6c;&#108;&#105;&#110;&#x75;&#x78;&#x65;&#120;&#112;&#111;&#46;&#x6f;&#x72;&#103;">&#103;&#114;&#x61;&#x70;&#x68;&#x69;&#99;&#x73;&#x2d;&#99;&#x66;&#x70;&#64;&#x73;&#x6f;&#x63;&#x61;&#x6c;&#108;&#105;&#110;&#x75;&#x78;&#x65;&#120;&#112;&#111;&#46;&#x6f;&#x72;&#103;</a>.</p>
<p>If you have any questions feel free to reach out to me on the forum.</p>
<h3 id="deadline"><a href="#deadline" class="header-link-alt">Deadline</a></h3>
<p>The deadline for submissions is <strong>January 10th, 2018</strong>, and participants will be notified by the end of January 2018.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Simple Exposure Mapping in GIMP]]></title>
            <link>https://pixls.us/articles/simple-exposure-mapping-in-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/articles/simple-exposure-mapping-in-gimp/</guid>
            <pubDate>Tue, 05 Dec 2017 15:17:37 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/lede-sample.jpg" /><br/>
                <h1>Simple Exposure Mapping in GIMP</h1> 
                <h2>A simple approach to blending exposures</h2>  
                <p>There are many different approaches to blending exposures in the various <a href="https://pixls.us/software/">projects</a>, and they can range from extremely detailed and complex to quick and simple.
Today we’re going to look at the latter.</p>
<p>I was recently lucky enough to attend an old friend’s wedding in upstate NY.
Mairi got married!
(For those not familiar with her, she’s the model from <a href="https://pixls.us/articles/an-open-source-portrait-mairi/">An Open Source Portrait</a> as well as <a href="https://pixls.us/articles/a-chiaroscuro-portrait/">A Chiaroscuro Portrait</a> tutorials.)</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/mairi-final_w600.jpg" alt="Mairi Chiaroscuro Portrait" width="600" height="750">
<figcaption>Mairi’s chiaroscuro portrait.</figcaption>
</figure>

<p>I had originally planned on celebrating with everyone and wrangling my two kids, so I left my camera gear at home.
Turns out Mairi was hoping that I’d be shooting photos.
Not wanting to disappoint, I quickly secured a kit from a local rental shop.
(Thank goodness for friends new and old to help wrangle a very busy 2-year-old.)</p>
<p>During the rehearsal I was experimenting with views to get a feel for the room I’d have and how framing would work out.
One of the shots looked from the audience and right into a late afternoon sun.
My inner nerd kicked in and I thought, <em>“This might be a neat image to use for a tutorial!”</em></p>
<h2 id="exposure-fusion-mapping-">Exposure Fusion (Mapping)<a href="#exposure-fusion-mapping-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The idea behind exposure fusion (or mapping) is to extend the dynamic range represented in an image by utilizing more than one exposure of the same subject and choosing relevant bits to fit in the final output.</p>
<p>I say “fit” because you are usually trying to squeeze more data into the final output than a single exposure could capture.
So you choose which parts of each exposure to use to get a result that you like.
For example, in the two images used in this article, exposing the foreground correctly causes the sky to blow out, while exposing for the sky sends the foreground almost to black.
By selectively combining these two images, we can get something that shows a larger dynamic range than would have been possible in a single exposure:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/example.jpg" alt="Sample of exposure fusion results" width="1020" height="384">
<figcaption>This porridge is too hot, this porridge is too cold, but this porridge is just right.</figcaption>
</figure>

<p>This is one common use case for creating HDR/EXR images (and tonemapping is the term for exactly what we’re describing here: squishing data into the viewable range in a way that we like).
In fact, at the end of this article I’ll show how <a href="http://enblend.sourceforge.net/" title="Enfuse/Enblend">Enfuse</a> handled merging these image exposures (<em>spoiler: pretty darn well</em>).</p>
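<p>The selective combination described here boils down to simple per-pixel arithmetic. As a toy sketch (not part of the GIMP workflow in this article - the function name and tiny 0..1 grayscale “images” are purely for illustration), a mask weights each pixel between the two exposures:</p>

```python
def blend_exposures(foreground, sky, mask):
    """Blend two equal-sized grayscale images (lists of floats in 0..1)
    using a per-pixel mask: out = mask * foreground + (1 - mask) * sky.
    A mask value of 1.0 takes the foreground exposure, 0.0 takes the sky."""
    return [m * f + (1.0 - m) * s for f, s, m in zip(foreground, sky, mask)]

# Toy 4-pixel "images": two sky pixels followed by two subject pixels.
foreground = [0.95, 0.90, 0.40, 0.35]  # sky blown out, subject well exposed
sky        = [0.55, 0.50, 0.05, 0.03]  # sky correct, subject nearly black
mask       = [0.0, 0.0, 1.0, 1.0]      # sky from one image, subject from the other

print(blend_exposures(foreground, sky, mask))  # → [0.55, 0.5, 0.4, 0.35]
```

The masking work in the rest of the article is essentially about building that `mask` well, with smooth transitions where the two exposures meet.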
<h2 id="exposing">Exposing<a href="#exposing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In exposing for the subjects in this image, I used the structure to block the sun from direct view (though there’s still a loss of contrast and flaring).
The straight out of the camera jpg looks like this:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/fore.jpg" alt="Foreground Exposure" width="600" height="900">
<figcaption>
Foreground exposure
</figcaption>
</figure>

<p>This gave me a well exposed foreground and subjects.
I then gave the shutter speed a quick spin to <sup>1</sup>&frasl;<sub>1000</sub> (about 4-stops) to get the sky better exposed.
The camera jpg for the sky looked like this:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/sky.jpg" alt="Sky Exposure" width="600" height="900">
<figcaption>
Sky exposure
</figcaption>
</figure>

<p>In retrospect it probably would have been better to shoot for a 2-stop difference to keep the sky exposed higher in the histogram, but <em>c’est la vie</em>.
It also helps to avoid going too far in the extremes when exposing, so as to avoid making it look too unrealistic (yes - the example above probably skirts that pretty close, but it’s exaggerated to make a good article).
This gives us a nice enough starting point to play with some simple exposure mapping.</p>
<h2 id="alignment">Alignment<a href="#alignment" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For this to work properly the images do need to line up as perfectly as possible.
Imperfect alignment can be worked around to some extent, usually determined by how complex your masking will have to be, but the better aligned the images are the easier your job will be.</p>
<p>As usual I have had good luck using the <code>align_image_stack</code> script that’s included as part of <a href="http://hugin.sourceforge.net/">Hugin</a>.
This makes short work of getting the images aligned properly for just this sort of work:</p>
<pre><code>/path/to/hugin/align_image_stack -m -a OUT FILE1 FILE2
</code></pre><p>On Windows, this looks like:</p>
<pre><code>c:\Program Files\Hugin\bin\align_image_stack.exe -m -a OUT FILE1 FILE2
</code></pre><p>Once it finishes up, you’ll end up with some new files like <code>OUT0001.tif</code>.
These are your aligned images that we’ll now bring into <a href="https://www.gimp.org">GIMP</a>!</p>
<h2 id="masking">Masking<a href="#masking" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The heart of fusing these exposures is going to rely entirely on masking.
This is where it usually pays off to take your time and carefully consider an approach that will keep things looking clean, with a natural transition between the exposures.</p>
<p>If this had simple geometry it would be an easy problem to solve, but the inclusion of the trees in the background makes it slightly more complex (but not nearly as bad as hair masking can get).
The foreground and sky are very simple to mask overall, since we know we want 100% of the foreground to come from one image and 100% of the sky from the other. This simplifies things greatly.</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/rough.jpg" alt="Rough Mask Example" width="600" height="900">
<figcaption>Rough mask (temporary - for reference only) where we can see we want all of the sky from one image, and all of the foreground from another.</figcaption>
</figure>

<p>I tend to keep the darker sky layer on the bottom and the lighter foreground layer above that.</p>
<p>The hard edges of the structure make it an easy masking job there, so the main area of concern here is getting a good blend with the treeline in the background.
There are a couple of approaches we can take to try and get a good blend, so let’s have a look…</p>
<h3 id="luminosity-grayscale-mask">Luminosity (Grayscale) Mask<a href="#luminosity-grayscale-mask" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A common approach would be to apply an inverted grayscale mask to the foreground layer.
If there’s a decent amount of contrast between the foreground/sky layers then this is a quick and easy way to get something to use as a base for further work:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/add-layer-mask.png" alt="GIMP Add Layer Mask Dialog" width="530" height="421">
</figure>

<p>Applying this mask yields pretty good looking results right away:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/mask-grayscale.jpg" alt="GIMP Inverted Grayscale Mask" width="600" height="900">
</figure>

<p>You can also investigate some of the other color channel options to see if there might be something that works better to create a clean mask with.
In GIMP 2.9.x I also found that using <code>Colors &gt; Components &gt; Extract Component</code> using <code>CMYK Key</code> produced another finely separated option.</p>
<p>This makes a nice starting point.
Since we want all of the sky from one exposure and the rest of the image from the other, we can start roughing in the overall mask.</p>
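<p>For the curious, an inverted grayscale mask amounts to computing a luminance value per pixel and flipping it. Here is a minimal sketch, assuming Rec. 709 luma weights (GIMP’s own grayscale conversion may weight the channels differently) and a hypothetical helper name:</p>

```python
def inverted_luminosity_mask(rgb_pixels):
    """Build an inverted grayscale mask from RGB pixels (floats in 0..1).
    Bright areas (sky) get low mask values, dark areas (foreground) get
    high ones - so applying it to the lighter layer lets the darker sky
    layer underneath show through."""
    # Rec. 709 luma weights; an assumption, not necessarily GIMP's formula.
    return [1.0 - (0.2126 * r + 0.7152 * g + 0.0722 * b)
            for r, g, b in rgb_pixels]

pixels = [(1.0, 1.0, 1.0),  # blown-out sky pixel  -> mask near 0 (hide layer)
          (0.1, 0.1, 0.1)]  # dark foreground pixel -> mask near 0.9 (keep layer)
print(inverted_luminosity_mask(pixels))
```

High contrast between the two layers is exactly why this works as a quick base: the luminance separation does most of the masking for you.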
<p>For this simple walkthrough we can make our job a bit easier by using <em>two</em> copies of the foreground layer and selectively blending them over the sky layer.</p>
<h3 id="why-two-layers-">Why Two Layers?<a href="#why-two-layers-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you take your foreground layer and start cleaning up the mask for the sky by painting in black, it should be relatively easy.
Until you get down to the treeline.
You can use a soft-edged brush and try to blend in smoothly, but in order to let the sky come through nicely you may find more of the dark exposure’s trees showing through.
This will show as a dark halo on the tops of the trees:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/halo-dark.jpg" alt="GIMP Mask Dark Halo">
</figure>

<p>A nice way to adjust the falloff along the tops of the trees is by using a second copy of the foreground layer, and using a gradient on the mask that will blend smoothly from full sky to full foreground along the tops of the trees.
This will ease the dark halo a bit.
You can then modify/update the gradient on the copy until the transition is smooth and to your liking.</p>
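<p>The gradient mask on that second copy is just a vertical ramp from 0 (full sky) to 1 (full foreground) across the treeline band. A rough sketch of what dragging GIMP’s Gradient tool on the mask produces, with a hypothetical function name and row coordinates:</p>

```python
def gradient_mask(height, y_top, y_bottom):
    """Per-row alpha for a vertical linear gradient: 0.0 (second layer fully
    hidden, sky shows) above y_top, ramping to 1.0 (second layer fully shown)
    below y_bottom, with a linear transition in between."""
    ramp = y_bottom - y_top
    mask = []
    for y in range(height):
        if y <= y_top:
            mask.append(0.0)
        elif y >= y_bottom:
            mask.append(1.0)
        else:
            mask.append((y - y_top) / ramp)
    return mask

# A 7-row mask with the transition falling across rows 2..5 (the treeline band).
print(gradient_mask(7, 2, 5))
```

Because the ramp is gradual rather than a hard brush edge, the dark-halo transition along the treetops is spread over many rows instead of a single boundary.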
<p>At this point my layers would look like this in GIMP:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/halo-layers.png" alt="GIMP Layer Stack Blending" width="225" height="123">
</figure>

<p>The results of using a second fore layer with a gradient to help ease the transition:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/halo-original.jpg" alt="GIMP Mask Original" width="1020" height="270">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/halo-dark.jpg" alt="GIMP Mask Dark Halo" width="1020" height="270">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/halo-gradient.jpg" alt="GIMP Mask Gradient" width="1020" height="270">
<figcaption>
Top: Original grayscale mask only<br>
Middle: Manual mask painting down to treeline<br>
Bottom: Second layer with gradient mask
</figcaption>
</figure>

<p>When pixel-peeping it may not seem <em>perfect</em>, but in the context of the entire image it’s a nice way to get an easy blend for not too much extra work.</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/mask-grayscale_vs_gradient.jpg" alt="GIMP Mask Grayscale vs Gradient" width="1020" height="765">
<figcaption>
Left: Grayscale mask, Right: final mask with gradient
</figcaption>
</figure>

<p>At this point most of the exposure is blended nicely.
The only spot in this particular image I would work on further is the structure, darkening it a little to lessen the flaring and maybe bring back a little darkness.</p>
<p>This can be accomplished by painting over the top mask.
I used black with a smaller opacity of around 25% to paint over the structure and allow the darker version underneath to show through a bit more:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/darken-structure.jpg" alt="GIMP Mask Darkened on Structure" width="1020" height="765">
<figcaption>
Left: gradient masked, Right: structure darkened slightly to taste
</figcaption>
</figure>
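<p>Painting black at partial opacity in Normal mode scales the existing mask values down rather than replacing them outright. A small sketch of the arithmetic behind that roughly 25% stroke (the function name and flat toy mask are illustrative only):</p>

```python
def paint_black(mask, region, opacity=0.25):
    """Simulate one stroke of black at `opacity` (Normal mode) over the
    mask pixels at the indices in `region`: each value is pulled 25% of
    the way toward 0, letting the darker layer underneath show through
    a bit more without fully revealing it."""
    out = list(mask)
    for i in region:
        out[i] = out[i] * (1.0 - opacity)
    return out

mask = [1.0, 1.0, 1.0, 1.0]          # fully-opaque mask over the structure
print(paint_black(mask, region=[1, 2]))  # → [1.0, 0.75, 0.75, 1.0]
```

Repeated strokes compound (0.75, then 0.5625, and so on), which is why building up darkness with several low-opacity passes gives finer control than one pass at high opacity.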


<h2 id="enfuse-comparison">Enfuse Comparison<a href="#enfuse-comparison" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I have previously used <a href="http://enblend.sourceforge.net/" title="Enfuse/Enblend">Enfuse</a> and gotten great results even more quickly.
For comparison here is the result of running Enfuse against the same images:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/enfuse.jpg" alt="Enfuse compared to our manual masking" width="1020" height="765">
<figcaption>
Left: Manual masking result, Right: Enfuse automatic blend
</figcaption>
</figure>

<p>I prefer our manually blended result personally, but I could see another future article or post about using the Enfuse blend for areas of complexity and <em>blending the Enfuse output</em> into the final image to help.
Might be interesting.</p>
<p>(I prefer our blended result because Enfuse considered extremely bright areas as candidates for fusion with the other exposure - so the bricks and tent highlights got pushed down automatically.)</p>
<h2 id="fin">Fin<a href="#fin" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>From here any further fiddling with the image is purely for fun.
The two versions of the image have been merged nicely.
If you wanted to adjust the result to not appear quite as extreme you could modify each of the layers to taste.</p>
<p>For instance, you could lighten the sky layer to decrease the extreme range difference between it and the foreground layer.
Where’s the fun in keeping it too realistic though? :)</p>
<p>For reference, here’s my final version after masking with a bit of a Portra tone thrown in for good measure:</p>
<figure>
<img src="https://pixls.us/articles/simple-exposure-mapping-in-gimp/final.jpg" alt="Final version with portra color toning" width="600" height="900">
<figcaption>
Not bad for a relatively quick approach.
</figcaption>
</figure>


<h2 id="resources">Resources<a href="#resources" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This wouldn’t be complete without some resources and further reading for folks!</p>
<p>I have saved the GIMP 2.9.x .XCF file I used to write this.
It has all of the masks and layers I used to create the final version of the image: </p>
<ul>
<li><a href="https://pixls.us/files/XCF/SimpleExposureMapping.xcf">Download the GIMP 2.9 .XCF file</a> (116MB).</li>
<li><a href="https://pixls.us/files/XCF/SimpleExposureMapping_half-res.xcf">Download the GIMP 2.9 .XCF file (half resolution)</a> (38MB)</li>
</ul>
<h3 id="further-reading">Further Reading<a href="#further-reading" class="header-link"><i class="fa fa-link"></i></a></h3>
<ul>
<li><a href="https://pixls.us/articles/luminosity-masking-in-darktable/" title="Luminosity Masking in darktable on PIXLS.US">Luminosity Masking in darktable</a></li>
<li><a href="https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/">Basic Exposure Blending with GIMP and G’MIC</a></li>
<li><a href="https://pixls.us/articles/a-blended-panorama-with-photoflow/">A Blended Panorama with PhotoFlow</a></li>
<li><a href="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/">HDR Photography with Free Software</a></li>
</ul>
<p>@McCap also <a href="https://discuss.pixls.us/t/youtube-processing-exposure-blended-image/5254">created a YouTube</a> video walking through how he approached exposure blending:</p>
<div>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/0DnWoyOkEJk" frameborder="0" gesture="media" allow="encrypted-media" allowfullscreen></iframe>
</div>
</div>


<p>The image <span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/StillImage" property="dct:title" rel="dct:type"><a href="https://www.flickr.com/photos/patdavid/37164904891">Wedding Rehearsal</a></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="https://patdavid.net" property="cc:attributionName" rel="cc:attributionURL">Pat David</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.<br />Permissions beyond the scope of this license may be available at <a xmlns:cc="http://creativecommons.org/ns#" href="https://patdavid.net/about" rel="cc:morePermissions">https://patdavid.net/about</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Giving Thanks]]></title>
            <link>https://pixls.us/blog/2017/11/giving-thanks/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/11/giving-thanks/</guid>
            <pubDate>Wed, 22 Nov 2017 16:20:28 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/11/giving-thanks/Rockwell-Thanksgiving-Simpsons.jpg" /><br/>
                <h1>Giving Thanks</h1> 
                <h2>For a wonderful community</h2>  
                <p>This is becoming <a href="https://pixls.us/blog/2016/11/giving-thanks/">a sort of tradition</a> for me to post something giving thanks around this holiday.
I think it’s because this community has become such a large part of my life (even if I don’t have nearly as much time to spend on it as I’d like).
Also, I think it helps to remind ourselves once in a while of the good things that happen to us. So in that spirit…</p>
<!-- more -->
<h2 id="financial-supporters"><a href="#financial-supporters" class="header-link-alt">Financial Supporters</a></h2>
<p>I want to start things off by acknowledging those that go the extra mile and help offset the costs of the infrastructure to keep this crazy ship afloat (sorry, I’m an ocean engineer by training and a sailor - so nautical metaphors abound!).</p>
<h3 id="holy-benefactors-batman-"><a href="#holy-benefactors-batman-" class="header-link-alt">Holy Benefactors, Batman!</a></h3>
<p>Once again the amazing <a href="https://plus.google.com/+DimitriosPsychogios" title="Dimitrios Psychogios on Google+"><strong>Dimitrios Psychogios</strong></a> has graciously covered our server expenses (<em>and then some</em>) <strong>for another full year</strong>.
On behalf of the community, and particularly myself, thank you so much!
Your generosity will cover infrastructure costs for the year and give us room to grow as the community does.</p>
<p>We also have some awesome folks who support us through monthly donations (which are nice because we can plan better if we need to). Together they cover the costs of data storage + transfer in/out of Amazon AWS S3 storage (basically the storage and transfer of all of the attachments and files in the forums).
So <strong>thank you</strong>, you cool froods, you really know where your towels are:</p>
<ul>
<li><a href="https://discuss.pixls.us/u/paperdigits/">Mica</a> (@paperdigits - <a href="https://silentumbrella.com">https://silentumbrella.com</a>)</li>
<li>Luka S.</li>
<li><a href="https://discuss.pixls.us/u/bminney/">Barrie</a> (@bminney)</li>
</ul>
<p>Thank you all!
If you happen to see any of these great folks around the forum consider taking a moment to thank them for their generosity!
If you’d like to join them in supporting the site financially, check out the <a href="https://pixls.us/support">support page</a>.</p>
<h2 id="growth"><a href="#growth" class="header-link-alt">Growth</a></h2>
<p>The community has just been amazing, and we’ve seen nice growth this past year.
Since the end of August we’ve seen about a 50% increase in weekly sessions on discuss.
We’re currently hovering around 2,500 daily pageviews on the forums:</p>
<figure>
<img src="https://pixls.us/blog/2017/11/giving-thanks/discuss-sessions-weekly-2017.png" alt="PIXLS.US discuss traffic" srcset="https://pixls.us/blog/2017/11/giving-thanks/discuss-sessions-weekly-2017_2x.png 2x">
</figure>

<p>We’ve added almost 950 new users, or almost 3 new users every day!</p>
<p>There have been quite a few <a href="https://discuss.pixls.us/latest?order=views">interesting discussions happening</a> on the forums as well.
The <a href="http://rawtherapee.com/">RawTherapee</a> folks have some neat conversations going on (<a href="https://discuss.pixls.us/t/local-lab-build/1430">Local Lab build</a>, <a href="https://discuss.pixls.us/t/new-windows-builds/615/423">New Windows builds</a>, and <a href="https://discuss.pixls.us/t/support-for-pentax-pixel-shift-files-3489/2560">Pixel Shift!</a>), and @Carmelo_DrRaw (creator of the <a href="http://photoflowblog.blogspot.com/" title="PhotoFlow Image Editor">PhotoFlow</a> editor) has been packaging a <a href="https://discuss.pixls.us/t/gimp-2-9-5-appimage/1959" title="GIMP 2.9.5 AppImages">GIMP 2.9.X AppImage</a> as well!</p>
<p>Of course, the fun news for many was @houz finally pushing out a <a href="https://discuss.pixls.us/t/darktable-for-windows/4966">Windows version of darktable</a> that was made possible through the help of Peter Budai.</p>
<h2 id="raw-pixls-us"><a href="#raw-pixls-us" class="header-link-alt">raw.pixls.us</a></h2>
<p>I figure @LebedevRI will yell at me if I forget to mention <a href="https://raw.pixls.us">raw.pixls.us</a> (RPU) again.
Back in January @andabata built a new site to help pick up the work of the old rawsamples.ch website to collect raw sample files for testing.</p>
<p>So thank you @andabata and @LebedevRI for your work on this!
A big thank you to everyone who has taken the time to check the site and upload missing (or non-freely licensed) raw files to include!</p>
<p>While we’re talking about RPU, please consider having a look at <a href="https://discuss.pixls.us/t/raw-samples-wanted/5420">this post about it on discuss</a> and take a few minutes to see if you might be able to contribute by providing raw samples that we are missing or need (see the post for more details).
If you don’t have something we need, please consider sharing the post on social media to help us raise awareness of RPU!
Thank you!</p>
<h2 id="digikam"><a href="#digikam" class="header-link-alt">digiKam</a></h2>
<p>If you’re not aware of it, one of the things we try to do here besides running the site and forum is to assist projects with websites and design work if they want it.
Earlier this year the <a href="https://www.digikam.org/">digiKam</a> team needed to migrate their old Drupal website to something more modern (and secure) and @paperdigits figured, <em>“why not”</em>?</p>
<figure>
<img src="https://pixls.us/blog/2017/11/giving-thanks/digikam-logo-w600.png" width="600" height="300" alt="digiKam Logo" srcset="https://pixls.us/blog/2017/11/giving-thanks/digikam-logo_2x.png 2x">
</figure>

<p>So we rolled up our sleeves and got them set up with a newly designed static website built using <a href="https://gohugo.io/">Hugo</a> (which was completely new to me).
We were also able to manage their comments on the website for them by embedding topics from right here on discuss.
This way their users can still own their comments and we can manage spam and moderate things for them.</p>
<p>The best part, though, is the addition of their users and knowledge to the community!</p>
<h2 id="darix"><a href="#darix" class="header-link-alt">darix</a></h2>
<p>I want to personally take a moment to thank @darix for all the work he does keeping things running smoothly here.
If you don’t see him, it means all the work he’s doing is paying off.</p>
<p>I speak with him daily and see firsthand the great work he’s doing to make sure all of us have a nice place to call home.
Thank you so much, @darix!</p>
<h2 id="mica"><a href="#mica" class="header-link-alt">Mica</a></h2>
<p>As usual @paperdigits (<a href="https://silentumbrella.com">https://silentumbrella.com</a>) also has a great attitude and pro-active approach to the community which I am super thankful for.
He also does things that aren’t always visible, but are essential to keeping things running smoothly, like moderating the forum, checking the health of sites we are helping to manage, and writing/editing posts.</p>
<p>I can’t stress enough how much it helps to keep your interest and spirits engaged in the community when you have someone else around who’s so positive and helpful.  Thank you so much, @paperdigits!</p>
<h2 id="all-of-you"><a href="#all-of-you" class="header-link-alt">All of You</a></h2>
<p>At the end of the day this is a community, and its vibrancy and health are a direct result of all of you, its members.
So above all else this is by far the thing I am most thankful for - getting to meet, learn, and interact with all of <em>you</em>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Faces of Open Source]]></title>
            <link>https://pixls.us/articles/faces-of-open-source/</link>
            <guid isPermaLink="true">https://pixls.us/articles/faces-of-open-source/</guid>
            <pubDate>Tue, 17 Oct 2017 19:20:28 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/faces-of-open-source/face-montage.jpg" /><br/>
                <h1>Faces of Open Source</h1> 
                <h2>Peter Adams’s Portraits of Revolutionaries</h2>  
                <p>Recently, <a href="https://houz.org/" title="houz.org">@houz</a> <a href="https://discuss.pixls.us/t/faces-of-open-source/4772" title="Discuss post about Faces of Open Source">posted about</a> an amazing project by photographer <a href="http://www.peteradamsphoto.com/" title="Peter Adams Photography">Peter Adams</a> called <strong><a href="http://facesofopensource.com/" title="Faces of Open Source">Faces of Open Source</a></strong>.</p>
<p>Peter really <small><em>(ahem)</em></small> throws a light on many amazing luminaries from not only the Free/Open Source Software community, but in some cases the history and roots of all modern computing.
He has managed to coordinate portrait sessions with many people that may be unassuming to a layperson, but take a moment to read any of the short bios on the site and the gravity of the contributions from the subjects to modern computing becomes apparent.</p>
<p>It’s easy for non-technical folks to spot a Bill Gates or Steve Jobs, but what about those who invented <a href="http://facesofopensource.com/bill-and-john-ritchie/" title="Dennis Ritchie">the most-used programming language</a>, created <a href="http://facesofopensource.com/brian-behlendorf-2/" title="Brian Behlendorf">the web server</a> that runs the majority of the internet, or <a href="http://facesofopensource.com/jim-kent/" title="Jim Kent">mapped the human genome</a>?</p>
<figure class='big-vid'>
    <img src="https://pixls.us/articles/faces-of-open-source/ritchie-behlendorf-kent.jpg" alt='Dennis Ritchie, Brian Behlendorf, Jim Kent'>
    <figcaption>
    (From L-R): <a href="http://facesofopensource.com/bill-and-john-ritchie/" title="Dennis Ritchie">Dennis Ritchie</a>, <a href="http://facesofopensource.com/brian-behlendorf-2/" title="Brian Behlendorf">Brian Behlendorf</a>, and <a href="http://facesofopensource.com/jim-kent/" title="Jim Kent">Jim Kent</a>
    </figcaption>
</figure>

<p>He is acutely aware that his subjects represent an important part of the <a href="https://en.wikipedia.org/wiki/History_of_free_and_open-source_software" title="History of Free and Open Source Software at Wikipedia">history of Open Source</a>, and 
in his <a href="http://facesofopensource.com/artist-statement/" title="Peter&#39;s Artist Statement">artist statement</a> for the project he notes:</p>
<blockquote>
<p>This project is my attempt to highlight a revolution whose importance is not broadly understood by a world that relies heavily upon the fruits of its labor.</p>
</blockquote>
<p>That’s really what Peter has done here.
He has collected individuals whose contributions add up to something far greater than the sum of their parts in shaping the digital world many take for granted these days, and is presenting them in a powerful and thoughtful way befitting their gifts.</p>
<figure>
    <a href='http://www.peteradamsphoto.com/about-peter/pap/'>
        <img src="https://pixls.us/articles/faces-of-open-source/pap.jpg" alt='Peter Adams Photography'>
    </a>
</figure>

<h2 id="a-chat-with-peter-adams">A Chat with Peter Adams<a href="#a-chat-with-peter-adams" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I was lucky enough to be able to get a little bit of time with Peter recently, and with some help from the community had a few questions to present to him.
He was kind enough to take some time out of his day and be patient while I prattled on…</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Linus_Torvalds_by_Peter_Adams_w640.jpg" alt='Linus Torvalds by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/linus-torvalds/" title="Linus Torvalds by Peter Adams">Linus Torvalds</a>, Santa Fe, New Mexico, 2016 by Peter Adams
    </figcaption>
</figure>

<h3 id="what-was-the-motivation-for-this-particular-project-for-you-why-these-people-">What was the motivation for this particular project for you? Why these people?<a href="#what-was-the-motivation-for-this-particular-project-for-you-why-these-people-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I had a long career working in the tech industry, and kind of grew up on a lot of this software when I was in college. Then I got to apply it throughout a career as senior technologist or CTO at a bunch of different companies in the valley.
So I went from learning about it in college, to being someone that used it, to then being somebody that contributed to it and starting my own open source project back in 2006.
That open source ethos, the software, and the people that created, maintained and promoted it - it’s something that’s been right there in my face for, really, the last 25 years.</p>
<p>I wanted to marry my knowledge of it with my passion for photography, and shine a light on it.
I went through a few different chapters of the story myself in the 80’s and then the mid-90’s with Linux.
I kind of felt like the story was starting to slip into obscurity, not because it’s less important - in fact  I think it’s more important now than it’s ever been.</p>
<p>The software is actually used by more people now than it has ever been.
The smartphone revolution has brought that to the forefront, and all of these mobile platforms are based on this open source technology.
Everything Apple does is based on BSD, and everything Google/Android does is based on Linux.</p>
<p>I feel like it’s a more impactful story now than ever, but very few people are telling the story.
As a photographer I’ve always cringed at the photographic response to the story.
Podium shot after podium shot of these incredible people.</p>
<p>So I wanted to put some faces to names, bring these people to life in a more impactful way than I think anyone has done before.  Hopefully that’s what the project is doing!</p>
<p><strong>P: It absolutely does!</strong></p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Brian_Kernighan_by_Peter_Adams_w640.jpg" alt='Brian Kernighan by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/brian-kernighan/" title="Brian Kernighan by Peter Adams">Brian Kernighan</a>, New York City, 2015 by Peter Adams
    </figcaption>
</figure>

<h3 id="how-long-have-you-been-shooting-the-project-">How long have you been shooting the project?<a href="#how-long-have-you-been-shooting-the-project-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I started this project in 2013/2014, in earnest probably late 2014.</p>
<h3 id="of-all-of-the-people-that-you-ve-shot-i-m-curious-who-would-you-say-is-one-that-maybe-stuck-out-with-you-the-most-or-even-better-did-you-get-any-cool-stories-out-of-some-of-the-subjects-">Of all of the people that you’ve shot, I’m curious, who would you say is one that maybe stuck out with you the most, or even better, did you get any cool stories out of some of the subjects?<a href="#of-all-of-the-people-that-you-ve-shot-i-m-curious-who-would-you-say-is-one-that-maybe-stuck-out-with-you-the-most-or-even-better-did-you-get-any-cool-stories-out-of-some-of-the-subjects-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Everyone that I’ve photographed has been absolutely wonderful. I mean, that’s the first thing about this community: it’s a very gracious community.
Everybody was very gracious with their time, and eager to participate.
I think people recognize that this is a community they belong to and they really want me to be a part of it, which is really great.</p>
<p>So, I enjoyed my time with everybody.
Everybody brought a different, interesting story about things.
The UNIX crew from Bell Labs had particularly colorful stories, very interesting sort of historical tidbits about UNIX and Free Software.</p>
<p>I talked to Ken Thompson about going to Russia and flying MIGs right after the collapse of the Soviet Union.
Wonderful stories from Doug McIlroy about the team and the engineering - how they worked together at Bell Labs.
Just a countless list of cool stories and cool people for sure.</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Ken_Thompson_by_Peter_Adams_w640.jpg" alt='Ken Thompson by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/ken-thompson-2/" title="Ken Thompson by Peter Adams">Ken Thompson</a>, Menlo Park, California, 2016 by Peter Adams
    </figcaption>
</figure>

<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Doug_McIlroy_by_Peter_Adams_w640.jpg" alt='Doug McIlroy by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/doug-mcilroy-2/" title="Doug McIlroy by Peter Adams">Doug McIlroy</a>, Boston, Massachusetts, 2015 by Peter Adams
    </figcaption>
</figure>

<p><strong>P: It must have been fascinating!</strong></p>
<p>It’s been really fun. A lot of these folks, I’ve really looked up to them over the years as sort of heroes, and so when you get people in front of your lens like that, it’s a really wonderful experience.
It’s also a challenging experience because you want to do justice to them.
Many of these folks that I’ve thought about for 20+ years, finally getting to shoot them is a real treat.</p>
<h3 id="where-are-you-shooting-these-are-you-mostly-bringing-them-into-your-studio-in-the-valley-">Where are you shooting these?  Are you mostly bringing them into your studio in the valley?<a href="#where-are-you-shooting-these-are-you-mostly-bringing-them-into-your-studio-in-the-valley-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I shot a lot of people when I had a studio in Silicon Valley.
I brought a lot of people there and that was great.
Now typically I’m doing shoots on the coasts.
So I’ll do shoots in NY and I’ll rent a studio and bring 6 or 7 people in there or we’ll do a studio up in SF for some people.
But I’ve done shoots in back alleyways, I’ve done shoots in tiny little conference rooms, 
I’ll bring the studio to people if that’s what I have to do.
So I’d say so far it’s been about 50-50.</p>
<h3 id="the-lighting-setups-are-wonderful-and-do-justice-to-the-subjects-and-i-think-somebody-in-the-community-was-curious-if-you-had-decided-on-bw-from-the-beginning-for-this-series-of-photos-was-this-a-conscious-decision-early-on-">The lighting setups are wonderful and do justice to the subjects, and I think somebody in the community was curious if you had decided on B&amp;W from the beginning for this series of photos?  Was this a conscious decision early on?<a href="#the-lighting-setups-are-wonderful-and-do-justice-to-the-subjects-and-i-think-somebody-in-the-community-was-curious-if-you-had-decided-on-bw-from-the-beginning-for-this-series-of-photos-was-this-a-conscious-decision-early-on-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>B&amp;W on a white background was a conscious choice right from the beginning.
Knowing the group, I felt like that was going to be the best way to explore the people and the faces.
Every one of these faces just tells, I think, a really interesting story.
I try to bring the personality of the person into the photo, and B&amp;W has always been my favorite way to do that.
The white background just puts the emphasis right on the person.</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Camille_Fournier_by_Peter_Adams_w640.jpg" alt='Camille Fournier by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/camille-fournier/" title="Camille Fournier by Peter Adams">Camille Fournier</a>, New York City, 2017 by Peter Adams
    </figcaption>
</figure>

<h3 id="how-much-of-it-would-you-say-is-you-that-goes-into-the-final-pose-and-setup-of-the-person-or-do-you-let-the-subject-feel-out-the-room-and-get-comfortable-and-shoot-from-there-">How much of it would you say is you that goes into the final pose and setup of the person, or do you let the subject feel out the room and get comfortable and shoot from there?<a href="#how-much-of-it-would-you-say-is-you-that-goes-into-the-final-pose-and-setup-of-the-person-or-do-you-let-the-subject-feel-out-the-room-and-get-comfortable-and-shoot-from-there-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It’s a little bit of both.
I wish I got to spend a lot of time up front with the person before we started shooting, but the way everybody’s schedule worked is - none of these shoots are more than an hour and many of them are much shorter than an hour.
There’s definitely the pleasantries up front and talking for a little bit, but then I try to get people right in front of the camera as quick as possible.</p>
<p>I don’t really pose them.
My process is to sit back and observe, and I always tell people <em>“if I’m not taking photos, it’s not because you’re doing anything wrong - I’m just waiting for you to settle or looking, examining”</em>.
Which is, for most people, a really uncomfortable process, so I try to make it as comfortable as possible.
Then we’ll start taking pictures.
I may move them a little bit, or we may setup a table so they can rest their hand on their chin or something like that.
Generally the photos that come out are not pre-meditated.</p>
<p>It’s very rare that I go into any of these shoots with an actual <em>“I want the person like this, setup like that, etc…”</em>.
I’d say 99% of these shots, the expressions, the feeling that comes out, that I’m capturing is organic.
It’s something that comes up in the shoot.
I just try to capture it whenever I see it by clicking the shutter, that’s basically what I’m doing there.</p>
<h3 id="you-list-what-equipment-you-shot-each-portrait-with-but-i-m-curious-about-the-lighting-setup-is-there-a-go-to-lighting-setup-that-you-like-to-use-">You list what equipment you shot each portrait with, but I’m curious about the lighting setup. Is there a “go-to” lighting setup that you like to use?<a href="#you-list-what-equipment-you-shot-each-portrait-with-but-i-m-curious-about-the-lighting-setup-is-there-a-go-to-lighting-setup-that-you-like-to-use-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The lighting is literally the same on every shot, though the positions vary slightly.
It’s a six light setup: there are four lights on the background, there’s a beauty dish overhead, and generally a fill light.
The fill is either a big Photek or PLM, basically a big umbrella, or a ringflash depending on how small the room is.
That’s the same lighting setup on all of them.
Four lights on the background, two lights on the subject.
I’ll vary the two lights on the subject positionally, but for the most part they’re pretty close.</p>
<h3 id="do-you-use-free-software-in-your-normal-photographic-workflow-at-all-">Do you use Free Software in your normal photographic workflow at all?<a href="#do-you-use-free-software-in-your-normal-photographic-workflow-at-all-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I don’t use as much Free Software as I’d like in my own workflow.
My workflow, because I shoot with Phase One, the files go into Capture One and then from there they go into Photoshop for final edits.
I have used GIMP in the past.
I really would like to use more Free Software, so I’m a learner in that regard for what tools would make sense.</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Spencer_Kimball_by_Peter_Adams_w640.jpg" alt='Spencer Kimball by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/spencer-kimball/" title="Spencer Kimball by Peter Adams">Spencer Kimball</a> (co-creator of GIMP), Menlo Park, 2015 by Peter Adams
    </figcaption>
</figure>

<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Peter_Mattis_by_Peter_Adams_w640.jpg" alt='Peter Mattis by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/peter-mattis-3/" title="Peter Mattis by Peter Adams">Peter Mattis</a> (co-creator of GIMP), New York City, 2015 by Peter Adams
    </figcaption>
</figure>

<h3 id="did-that-habit-grow-out-of-the-professional-need-of-having-those-tools-available-to-you-">Did that habit grow out of the professional need of having those tools available to you?<a href="#did-that-habit-grow-out-of-the-professional-need-of-having-those-tools-available-to-you-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Phase One, which makes the Medium Format digital back and camera that I use for all of my portrait work, also makes Capture One.
They have basically customized the software to get the most out of their own files.
That’s pretty much why I’ve wound up there instead of Lightroom or another tool.
It’s just that that software tends to bring out the tonality, especially in the B&amp;W side, better I’ve found than any other tool.</p>
<h3 id="this-project-was-self-financed-to-start-with-">This project was self financed to start with?<a href="#this-project-was-self-financed-to-start-with-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Yes, this is a self-financed project.
I do hope that we’ll get some sponsors, especially for the book, just because it tends to be a pretty heavy upfront outlay to produce a book.
I’m going to think about things like Kickstarter but the corporate sponsors I think will be really helpful for the exhibits and the book.</p>
<h3 id="speaking-of-the-book-is-it-ready-have-you-already-gone-to-print-">Speaking of the book, is it ready - have you already gone to print?<a href="#speaking-of-the-book-is-it-ready-have-you-already-gone-to-print-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>No, the book isn’t ready yet.
I still have probably another 10-12 people that I need to photograph and then we’ll start producing it.
I’ve done some prototypes and things on it but it’s still a little bit of a ways away.
The biggest hurdle on this project is actually scheduling and logistics.
Getting access to people in a way that is economical.
Instead of me flying all over the place for one shot, I try to stack up a number of people into a day.
It’s tough - this is a busy crowd, very in demand.</p>
<figure class='big-vid'>
    <img src="https://pixls.us/articles/faces-of-open-source/faces-book-promo_w960.jpg" alt='Faces of Open Source Book Promo'>
</figure>


<h3 id="did-your-working-in-open-source-teach-you-anything-beyond-computer-code-in-some-way-was-there-an-influence-from-the-people-you-may-have-worked-around-or-the-ethos-of-free-software-in-general-that-stuck-with-you-working-with-this-crowd-was-there-a-takeaway-for-you-beyond-just-the-photographic-aspects-of-it-">Did your working in open source teach you anything beyond computer code in some way?  Was there an influence from the people you may have worked around, or the ethos of Free Software in general that stuck with you? Working with this crowd, was there a takeaway for you beyond just the photographic aspects of it?<a href="#did-your-working-in-open-source-teach-you-anything-beyond-computer-code-in-some-way-was-there-an-influence-from-the-people-you-may-have-worked-around-or-the-ethos-of-free-software-in-general-that-stuck-with-you-working-with-this-crowd-was-there-a-takeaway-for-you-beyond-just-the-photographic-aspects-of-it-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Absolutely! 
First of all it’s an incredibly inspiring group of people.
This is a group of people that have dedicated, in some cases most of, their lives to the development of software that they give away to the world, and don’t monetize themselves.
The work they’re doing is effectively a donation to humanity.
That’s incredibly inspiring when you look at how much time goes into these projects and how much time this group of people spends on that.
It’s a very humbling thing.</p>
<p>I’d say the other big lesson is that Open Source is such a unique thing.
There’s really nothing like it.
It’s starting to take over other industries and moving beyond just software - it’s gone into hardware.
I’ve started to photograph some of the open source hardware pioneers.
It’s going into bio-tech, pharmaceuticals, agriculture (there’s an open source seed project).
I think that the lessons that are learned here, and that this group of people is teaching, are really affecting humanity on a much, much larger level than the fact that this stuff is powering your cell phone or your computer.</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/Limor_Fried_by_Peter_Adams_w640.jpg" alt='Limor Fried by Peter Adams'>
    <figcaption>
    <a href="http://facesofopensource.com/limor-fried-2/" title="Limor Fried by Peter Adams">Limor Fried</a>, New York City, 2017 by Peter Adams
    </figcaption>
</figure>

<p>Open source is really sort of a way of doing business now.
Even more than doing business it’s a way of operating in the world.
More and more people, industries, and companies are choosing that.
In today’s world where all you read is bad news, that’s a lot of really good news.
It’s an awesome thing to see that accelerating and catching on.
It’s been incredibly inspiring to me.</p>
<p><strong>P: I think even all the way back to the Polio vaccine, is one of those things. The effect that it had on humanity was immeasurable, and the fact that it wasn’t monetized by Salk was amazing.</strong></p>
<p>Look at how many lives were saved because of that.
If you think about the acceleration of the innovation we’ve had just in the technology sector - would things like the iPhone or the Android operating system have happened now, or over the last decade, without this [open source], or would we be looking at those types of innovations happening twenty years from now?
I think that’s a question you have to ask.</p>
<p>I don’t think it’s an obvious answer that Apple or Google or somebody else would have just come up with this without the open source [contributions].
This stuff is so fundamental, it’s such a basic building block for everything that’s happening now.
It may be responsible for the golden age that we’re seeing now.
I think it is.</p>
<p>The average teenager picks up their phone and posts a photo to Instagram - they don’t realize that there are a hundred open source projects at work to make that possible.</p>
<p><strong>P: And the fact that the people that underlay that entire stack gave it away.</strong></p>
<p>Right.
And that giving it away was necessary to create the Instagrams to create all these networks.
It wasn’t just this happenstance thing where people didn’t know any better.
In some cases obviously that did exist, but it’s the fact that consciously people are contributing into a commons that makes it so powerful and enables all of this innovation to happen.
It’s really cool.</p>
<figure>
    <img src="https://pixls.us/articles/faces-of-open-source/David_Korn_by_Peter_Adams_w640.jpg" alt='David Korn by Peter Adams'>
    <figcaption>
        <a href="http://facesofopensource.com/david-korn-2/" title="David Korn by Peter Adams">David Korn</a>, New York City, 2015 by Peter Adams
    </figcaption>
</figure>

<h3 id="to-close-is-there-another-photographer-book-organization-that-you-d-like-any-of-the-readers-to-know-about-and-maybe-spend-some-time-to-go-and-check-out-something-that-maybe-you-ve-long-admired-or-recently-discovered-">To close, is there another photographer, book, organization - that you’d like any of the readers to know about and maybe spend some time to go and check out. Something that maybe you’ve long admired or recently discovered?<a href="#to-close-is-there-another-photographer-book-organization-that-you-d-like-any-of-the-readers-to-know-about-and-maybe-spend-some-time-to-go-and-check-out-something-that-maybe-you-ve-long-admired-or-recently-discovered-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Sure!
You’ve mentioned <a href="https://www.google.com/search?tbm=isch&amp;q=martin+schoeller+portraits">Martin Schoeller</a>, who is one of my personal favorites and inspirations out there.
I’d say the other photographer who has had probably the most impact on my photography over the years has been <a href="https://www.google.com/search?tbm=isch&amp;q=richard+avedon+portraits">Richard Avedon</a>.
For people that aren’t familiar with his work I’d say definitely go check out the Avedon foundation.
Pick up any of his books which are just wonderful.
You’ll definitely see that influence on my photography, especially this project, since he shot black and white on white background.
Such stunning work.
I’d say that those are two great ones to start with.</p>
<p><strong>Alright!  Avedon and Schoeller - I can certainly think of worse people to go start a journey with.  Thank you so much for taking time with me today!</strong></p>
<p>Hey no problem!  It’s been fun to talk to you.</p>
<hr>
<p>There are many more fascinating portraits awaiting you over on the project site, and every one of them is worth your time!
See them all at:</p>
<p><strong><a href="http://facesofopensource.com/">http://facesofopensource.com/</a></strong></p>
<p>You can also connect with the project on </p>
<ul>
<li><a href="http://www.facebook.com/facesofopensource" title="Faces of Open Source on Facebook">Facebook</a></li>
<li><a href="http://twitter.com/facesopensource" title="Faces of Open Source on Twitter">Twitter</a></li>
<li><a href="http://instagram.com/peteradamsphoto" title="Peter Adams on Instagram">Instagram</a></li>
</ul>
<p>Find more of Peter’s work at <a href="http://www.peteradamsphoto.com/a" title="Peter Adams Photo">his website</a>.</p>
<p><small>All images from “<a href="http://facesofopensource.com/" title="Faces of Open Source">Faces of Open Source</a>” by <a href="http://www.peteradamsphoto.com/" title="Peter Adams Photography">Peter Adams</a>, licensed <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" title="Creative Commons Attribution-NonCommercial-ShareAlike 4.0">CC BY NC SA 4.0</a>.</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Keep the Raws Coming]]></title>
            <link>https://pixls.us/blog/2017/09/keep-the-raws-coming/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/09/keep-the-raws-coming/</guid>
            <pubDate>Fri, 29 Sep 2017 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/09/keep-the-raws-coming/not_found.jpg" /><br/>
                <h1>Keep the Raws Coming</h1> 
                <h2>Moar samples!</h2>  
                <p>Our friendly neighborhood @LebedevRI pointed out to me a little while ago that we had reached some nice milestones for <a href="https://raw.pixls.us">https://raw.pixls.us</a>.
Not surprisingly I had spaced out and not written anything about it (or really any sort of social posts).  Bad Pat!</p>
<!-- more -->
<p>So let’s talk about <a href="https://raw.pixls.us">raw.pixls.us</a> (RPU) a bit!</p>
<h2 id="recap"><a href="#recap" class="header-link-alt">Recap</a></h2>
<p>For anyone not familiar with RPU, a quick recap (we had previously <a href="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/">written about raw.pixls.us</a> earlier this year).
There used to be a website, <a href="http://rawsamples.ch/">rawsamples.ch</a>, that housed a repository of raw files for as many digital cameras as possible.
It was created by Jakob Rohrbach and had been running since March of 2007.
Back in 2016 the site was hit with a SQL injection attack that left the <a href="https://www.joomla.org/">Joomla</a> database corrupted (in a teachable moment, the site also didn’t have a database backup).</p>
<p>With the rawsamples.ch site down, @LebedevRI and @andabata worked to get a replacement option in-place and working: <a href="https://raw.pixls.us">https://raw.pixls.us</a>!</p>
<h2 id="sexy-stats"><a href="#sexy-stats" class="header-link-alt">Sexy Stats</a></h2>
<p>We grabbed all the files we could salvage from rawsamples.ch and @andabata set up the new page.
We’ve had a slowly growing response as folks have filled in gaps for camera models we still don’t have.</p>
<p>For reference, we currently have 
<a href="https://raw.pixls.us/"><img class='inline' src="https://raw.pixls.us/button-cameras.svg" alt="count of unique cameras in the archive"></a>
unique cameras, and 
<a href="https://raw.pixls.us/"><img class='inline' src="https://raw.pixls.us/button-samples.svg" alt="total count of unique samples"></a>
unique samples.</p>
<figure>
<img src="https://pixls.us/blog/2017/09/keep-the-raws-coming/samples.png" alt='RPU samples graph'>
</figure>

<figure>
<img src="https://pixls.us/blog/2017/09/keep-the-raws-coming/cameras.png" alt='RPU cameras graph'>
</figure>


<h2 id="moar-samples-"><a href="#moar-samples-" class="header-link-alt">Moar samples!</a></h2>
<p>As @LebedevRI has said, we still really need folks to check <a href="https://raw.pixls.us">RPU</a> and send us more samples!</p>
<ul>
<li>We currently only have <a href="http://www.darktable.org/resources/camera-support/">about 77% coverage</a>.</li>
<li>We want to replace any non-<a href="https://creativecommons.org/share-your-work/public-domain/cc0/">CC0</a> (public domain) samples with <a href="https://creativecommons.org/share-your-work/public-domain/cc0/">CC0</a> licensed samples.</li>
<li>We are still missing some rarer samples, such as medium-format and Sigma cameras.</li>
</ul>
<p>Our hope is that some casual reader out there might look at the list and say “Hey! 
I’ve got that camera lying around - let me submit a sample!”.</p>
<p>Here’s the current list of missing camera samples:</p>
<div class='two-col'>

<div>Canon EOS Kiss Digital F</div>
<div>Canon EOS Kiss X7</div>
<div>Canon EOS Kiss X70</div>
<div>Canon EOS Kiss X80</div>
<div>Canon EOS Kiss X9</div>
<div>Canon EOS Rebel SL2</div>
<div>Canon EOS Kiss Digital</div>
<div>Canon EOS Kiss Digital X</div>
<div>Canon Kiss Digital X2</div>
<div>Canon Kiss X2</div>
<div>Canon EOS 5DS</div>
<div>Canon EOS Kiss X5</div>
<div>Canon EOS Kiss X6i</div>
<div>Canon EOS Rebel T4i</div>
<div>Canon EOS Kiss X7i</div>
<div>Canon EOS Kiss X8i</div>
<div>Canon EOS 8000D</div>
<div>Canon EOS Rebel T6s</div>
<div>Canon EOS 9000D</div>
<div>Canon EOS Kiss X9i</div>
<div>Canon EOS M10</div>
<div>Canon EOS M2</div>
<div>Canon PowerShot G9 X</div>
<div>Canon PowerShot S95</div>
<div>Canon PowerShot SX260 HS</div>
<div>Fujifilm FinePix HS30EXR</div>
<div>Fujifilm FinePix HS50EXR</div>
<div>Fujifilm FinePix S100FS</div>
<div>Fujifilm FinePix S5200</div>
<div>Fujifilm FinePix S5500</div>
<div>Fujifilm FinePix S6000fd</div>
<div>Fujifilm FinePix S9000</div>
<div>Fujifilm FinePix S9600fd</div>
<div>Fujifilm IS-1</div>
<div>Fujifilm XF1</div>
<div>Fujifilm XQ2</div>
<div>Kodak EasyShare Z980</div>
<div>Kodak P880</div>
<div>Leaf Aptus-II 5</div>
<div>Leaf Credo 40</div>
<div>Leaf Credo 60</div>
<div>Leaf Credo 80</div>
<div>Leica D-LUX 4</div>
<div>Leica D-LUX 5</div>
<div>Leica D-LUX 6</div>
<div>Leica X2</div>
<div>Minolta DiMAGE 5</div>
<div>Minolta Alpha 5D</div>
<div>Minolta Maxxum 5D</div>
<div>Minolta Alpha 7D</div>
<div>Minolta Maxxum 7D</div>
<div>Nikon 1 J3</div>
<div>Nikon 1 J4</div>
<div>Nikon 1 S1</div>
<div>Nikon 1 V3</div>
<div>Nikon Coolpix A</div>
<div>Nikon Coolpix P7700</div>
<div>Nikon D1H</div>
<div>Nikon D2H</div>
<div>Nikon D2Hs</div>
<div>Nikon D3S</div>
<div>Nikon D4S</div>
<div>Nokia Lumia 1020</div>
<div>Olympus E-10</div>
<div>Olympus E-400</div>
<div>Olympus E-PL1</div>
<div>Olympus E-PL2</div>
<div>Olympus SP320</div>
<div>Olympus SP570UZ</div>
<div>Olympus Stylus1</div>
<div>Olympus XZ-10</div>
<div>Panasonic DMC-FZ80</div>
<div>Panasonic DMC-FZ85</div>
<div>Panasonic DC-FZ91</div>
<div>Panasonic DC-FZ92</div>
<div>Panasonic DC-FZ93</div>
<div>Panasonic DC-ZS70</div>
<div>Panasonic DMC-FX150</div>
<div>Panasonic DMC-FZ100</div>
<div>Panasonic DMC-FZ35</div>
<div>Panasonic DMC-FZ40</div>
<div>Panasonic DMC-FZ50</div>
<div>Panasonic DMC-G5</div>
<div>Panasonic DMC-G8</div>
<div>Panasonic DMC-G85</div>
<div>Panasonic DMC-GF2</div>
<div>Panasonic DMC-GM5</div>
<div>Panasonic DMC-LX9</div>
<div>Panasonic DMC-TZ110</div>
<div>Panasonic DMC-ZS110</div>
<div>Panasonic DMC-ZS40</div>
<div>Panasonic DMC-ZS50</div>
<div>Panasonic DMC-TZ85</div>
<div>Panasonic DMC-ZS60</div>
<div>Pentax 645Z</div>
<div>Pentax K2000</div>
<div>Pentax Q10</div>
<div>Pentax Q7</div>
<div>Phase One IQ250</div>
<div>Ricoh GR</div>
<div>Ricoh GR II</div>
<div>Samsung EK-GN120</div>
<div>Samsung GX10</div>
<div>Samsung GX20</div>
<div>Samsung NX10</div>
<div>Samsung NX1000</div>
<div>Samsung NX11</div>
<div>Samsung NX1100</div>
<div>Samsung NX20</div>
<div>Samsung NX2000</div>
<div>Samsung NX210</div>
<div>Samsung NX5</div>
<div>Sinar Hy6</div>
<div>Sony DSC-RX1</div>
<div>Sony DSC-RX1R</div>
<div>Sony DSLR-A230</div>
<div>Sony DSLR-A290</div>
<div>Sony DSLR-A380</div>
<div>Sony DSLR-A390</div>
<div>Sony DSLR-A450</div>
<div>Sony DSLR-A500</div>
<div>Sony DSLR-A560</div>
<div>Sony ILCE-3000</div>
<div>Sony ILCE-3500</div>
<div>Sony NEX-5N</div>
<div>Sony NEX-C3</div>
<div>Sony NEX-F3</div>
<div>Sony SLT-A33</div>

</div>
<p>If you have any of the cameras on this list and don’t mind spending a few minutes uploading a sample file, 
we would be very grateful for the help!</p>
<p>Don’t forget that we <strong>are</strong> looking for:</p>
<ul>
<li>Lens mounted on the camera, cap off</li>
<li>Image in focus and properly exposed</li>
<li>Landscape orientation</li>
</ul>
<p>and we <strong>are not</strong> looking for:</p>
<ul>
<li>Series of images with different ISO, aperture, shutter, wb, lighting, or different lenses</li>
<li>DNG files created with Adobe DNG Converter</li>
<li>Photographs of people, for legal reasons.</li>
</ul>
<p>If you don’t see your camera on this list, you’re not off the hook yet!
We are also looking for files that are licensed very freely…</p>
<h3 id="non-creative-commons-zero-cc0-"><a href="#non-creative-commons-zero-cc0-" class="header-link-alt">Non Creative-Commons Zero (CC0)</a></h3>
<p>We have many raw samples that were not licensed as freely as we would like.
Ideally we are looking for images that have been released <a href="https://creativecommons.org/share-your-work/public-domain/cc0/">Creative Commons Zero (CC0)</a>.
This list is all samples we already have that are not licensed CC0, so if you happen to 
have one of the cameras listed below please consider uploading some new samples for us!</p>
<div class='two-col'>
<div>Canon IXUS900Ti</div>
<div>Canon PowerShot A550</div>
<div>Canon PowerShot A570 IS</div>
<div>Canon PowerShot A610</div>
<div>Canon PowerShot A620</div>
<div>Canon PowerShot A630</div>
<div>Canon Powershot A650</div>
<div>Canon PowerShot A710 IS</div>
<div>Canon PowerShot G7</div>
<div>Canon PowerShot S2 IS</div>
<div>Canon PowerShot S5 IS</div>
<div>Canon PowerShot SD750</div>
<div>Canon Powershot SX110IS</div>
<div>Canon EOS 10D</div>
<div>Canon EOS 1200D</div>
<div>Canon EOS-1D</div>
<div>Canon EOS-1D Mark II</div>
<div>Canon EOS-1D Mark III</div>
<div>Canon EOS-1D Mark II N</div>
<div>Canon EOS-1D Mark IV</div>
<div>Canon EOS-1Ds</div>
<div>Canon EOS-1Ds Mark II</div>
<div>Canon EOS-1Ds Mark III</div>
<div>Canon EOS-1D X</div>
<div>Canon EOS 300D</div>
<div>Canon EOS 30D</div>
<div>Canon EOS 400D</div>
<div>Canon EOS 40D</div>
<div>Canon EOS 760D</div>
<div>Canon EOS D2000C</div>
<div>Canon EOS D60</div>
<div>Canon EOS Digital Rebel XS</div>
<div>Canon EOS M</div>
<div>Canon EOS Rebel T3</div>
<div>Canon EOS Rebel T6i</div>
<div>Canon PowerShot A3200 IS</div>
<div>Canon Powershot A720 IS</div>
<div>Canon PowerShot G10</div>
<div>Canon PowerShot G11</div>
<div>Canon PowerShot G12</div>
<div>Canon PowerShot G15</div>
<div>Canon PowerShot G1</div>
<div>Canon PowerShot G1 X Mark II</div>
<div>Canon PowerShot G2</div>
<div>Canon PowerShot G3</div>
<div>Canon PowerShot G5</div>
<div>Canon PowerShot G5 X</div>
<div>Canon PowerShot G6</div>
<div>Canon PowerShot Pro1</div>
<div>Canon PowerShot Pro70</div>
<div>Canon PowerShot S30</div>
<div>Canon PowerShot S40</div>
<div>Canon PowerShot S45</div>
<div>Canon PowerShot S50</div>
<div>Canon PowerShot S60</div>
<div>Canon PowerShot S70</div>
<div>Canon PowerShot S90</div>
<div>Canon PowerShot SD450</div>
<div>Canon PowerShot SX130 IS</div>
<div>Canon PowerShot SX1 IS</div>
<div>Canon PowerShot SX50 HS</div>
<div>Canon PowerShot SX510 HS</div>
<div>Canon PowerShot SX60 HS</div>
<div>Canon Poweshot S3IS</div>
<div>Epson R-D1</div>
<div>Fujifilm FinePix E550</div>
<div>Fujifilm FinePix E900</div>
<div>Fujifilm FinePix F600EXR</div>
<div>Fujifilm FinePix F700</div>
<div>Fujifilm FinePix F900EXR</div>
<div>Fujifilm FinePix HS10 HS11</div>
<div>Fujifilm FinePix HS20EXR</div>
<div>Fujifilm FinePix S200EXR</div>
<div>Fujifilm FinePix S2Pro</div>
<div>Fujifilm FinePix S3Pro</div>
<div>Fujifilm FinePix S5000</div>
<div>Fujifilm FinePix S5600</div>
<div>Fujifilm FinePix S6500fd</div>
<div>Fujifilm FinePix X100</div>
<div>Fujifilm X100S</div>
<div>Fujifilm X-A2</div>
<div>Fujifilm XQ1</div>
<div>Hasselblad CF132</div>
<div>Hasselblad CFV</div>
<div>Hasselblad H3D</div>
<div>Kodak DC120</div>
<div>Kodak DC50</div>
<div>Kodak DCS460D</div>
<div>Kodak DCS560C</div>
<div>Kodak DCS Pro SLR/n</div>
<div>Kodak EOS DCS 1</div>
<div>Kodak Kodak C330</div>
<div>Kodak Kodak C603 / Kodak C643</div>
<div>Kodak Z1015 IS</div>
<div>Leaf Aptus 75</div>
<div>Leaf Leaf Aptus 22</div>
<div>Leica Leica Digilux 2</div>
<div>Leica Leica D-LUX 3</div>
<div>Leica M8</div>
<div>Leica M (Typ 240)</div>
<div>Leica V-LUX 1</div>
<div>Mamiya ZD</div>
<div>Minolta DiMAGE 7</div>
<div>Minolta DiMAGE 7Hi</div>
<div>Minolta DiMAGE 7i</div>
<div>Minolta DiMAGE A1</div>
<div>Minolta DiMAGE A200</div>
<div>Minolta DiMAGE A2</div>
<div>Minolta Dimage Z2</div>
<div>Minolta Dynax 5D</div>
<div>Minolta Dynax 7D</div>
<div>Minolta RD-175</div>
<div>Nikon 1 S2</div>
<div>Nikon 1 V1</div>
<div>Nikon Coolpix P340</div>
<div>Nikon Coolpix P6000</div>
<div>Nikon Coolpix P7000</div>
<div>Nikon Coolpix P7100</div>
<div>Nikon D100</div>
<div>Nikon D1</div>
<div>Nikon D1X</div>
<div>Nikon D2X</div>
<div>Nikon D300S</div>
<div>Nikon D3</div>
<div>Nikon D3X</div>
<div>Nikon D40</div>
<div>Nikon D60</div>
<div>Nikon D70</div>
<div>Nikon D800</div>
<div>Nikon D80</div>
<div>Nikon D810</div>
<div>Nikon E5400</div>
<div>Nikon E5700</div>
<div>Nikon LS-5000</div>
<div>Nokia Lumia 1020</div>
<div>Olympus C5050Z</div>
<div>Olympus C5060WZ</div>
<div>Olympus C8080WZ</div>
<div>Olympus E-1</div>
<div>Olympus E-20</div>
<div>Olympus E-300</div>
<div>Olympus E-30</div>
<div>Olympus E-330</div>
<div>Olympus E-3</div>
<div>Olympus E-420</div>
<div>Olympus E-450</div>
<div>Olympus E-500</div>
<div>Olympus E-510</div>
<div>Olympus E-520</div>
<div>Olympus E-5</div>
<div>Olympus E-600</div>
<div>Olympus E-P1</div>
<div>Olympus E-P2</div>
<div>Olympus E-P3</div>
<div>Olympus E-PL5</div>
<div>Olympus SP350</div>
<div>Olympus SP500UZ</div>
<div>Olympus XZ-1</div>
<div>Panasonic DMC-FZ150</div>
<div>Panasonic DMC-FZ18</div>
<div>Panasonic DMC-FZ200</div>
<div>Panasonic DMC-FZ28</div>
<div>Panasonic DMC-FZ30</div>
<div>Panasonic DMC-FZ38</div>
<div>Panasonic DMC-FZ70</div>
<div>Panasonic DMC-FZ72</div>
<div>Panasonic DMC-FZ8</div>
<div>Panasonic DMC-G1</div>
<div>Panasonic DMC-G3</div>
<div>Panasonic DMC-GF3</div>
<div>Panasonic DMC-GF5</div>
<div>Panasonic DMC-GF7</div>
<div>Panasonic DMC-GH2</div>
<div>Panasonic DMC-GH3</div>
<div>Panasonic DMC-GH4</div>
<div>Panasonic DMC-GM1</div>
<div>Panasonic DMC-GX7</div>
<div>Panasonic DMC-L10</div>
<div>Panasonic DMC-L1</div>
<div>Panasonic DMC-LF1</div>
<div>Panasonic DMC-LX1</div>
<div>Panasonic DMC-LX2</div>
<div>Panasonic DMC-LX3</div>
<div>Panasonic DMC-LX5</div>
<div>Panasonic DMC-LX7</div>
<div>Panasonic DMC-TZ60</div>
<div>Panasonic DMC-TZ71</div>
<div>Pentax *ist D</div>
<div>Pentax *ist DL2</div>
<div>Pentax *ist DS</div>
<div>Pentax K100D Super</div>
<div>Pentax K10D</div>
<div>Pentax K20D</div>
<div>Pentax K-50</div>
<div>Pentax K-m</div>
<div>Pentax K-r</div>
<div>Pentax K-S1</div>
<div>Pentax Optio S4</div>
<div>Polaroid x530</div>
<div>Ricoh GR DIGITAL 2</div>
<div>Samsung EX2F</div>
<div>Samsung NX100</div>
<div>Samsung NX300</div>
<div>Samsung NX300M</div>
<div>Samsung NX500</div>
<div>Samsung WB2000</div>
<div>Sigma DP2 Quattro</div>
<div>Sigma DP1s</div>
<div>Sigma DP2 Merrill</div>
<div>Sigma SD10</div>
<div>Sigma SD14</div>
<div>Sigma SD9</div>
<div>Sony DSC-R1</div>
<div>Sony DSC-RX100</div>
<div>Sony DSC-RX100M2</div>
<div>Sony DSC-RX100M3</div>
<div>Sony DSC-RX100M4</div>
<div>Sony DSC-RX10</div>
<div>Sony DSC-RX10M2</div>
<div>Sony DSLR-A100</div>
<div>Sony DSLR-A200</div>
<div>Sony DSLR-A300</div>
<div>Sony DSLR-A330</div>
<div>Sony DSLR-A350</div>
<div>Sony DSLR-A550</div>
<div>Sony DSLR-A580</div>
<div>Sony DSLR-A700</div>
<div>Sony DSLR-A850</div>
<div>Sony DSLR-A900</div>
<div>Sony NEX-3</div>
<div>Sony NEX-5R</div>
<div>Sony NEX-7</div>
<div>Sony SLT-A35</div>
<div>Sony SLT-A58</div>
<div>Sony SLT-A77</div>
<div>Sony SLT-A99</div>

</div>
<p>We are really working hard to make sure we are a good resource of freely available raw samples for all Free Software imaging projects to use.
Thank you so much for helping out if you can!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC 2.0]]></title>
            <link>https://pixls.us/blog/2017/06/g-mic-2-0/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/06/g-mic-2-0/</guid>
            <pubDate>Thu, 08 Jun 2017 16:22:52 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/06/g-mic-2-0/Shrouded_in_clouds.jpg" /><br/>
                <h1>G'MIC 2.0</h1> 
                <h2>A second breath for open-source image processing.</h2>  
                <p>The <a href="https://www.greyc.fr/en/image"><em>IMAGE</em></a> team of the research laboratory <a href="https://www.greyc.fr/en"><em>GREYC</em></a> in <em>Caen</em>/<em>France</em> is pleased to announce the release of a new major version (numbered <strong>2.0</strong>) of its project <a href="http://gmic.eu"><em>G’MIC</em></a>: a generic, extensible, and <em>open source</em> framework for <a href="https://en.wikipedia.org/wiki/Image_processing">image processing</a>.
Here, we present the main advances made in the software since our <a href="https://pixls.us/blog/2016/05/g-mic-1-7-1/">last article</a>.
The new features presented here include the work carried out over the last twelve months (versions <em>2.0.0</em> and <em>1.7.x</em>, for <em>x</em> varying from <em>2</em> to <em>9</em>).</p>
<!-- more -->
<hr>
<h2 id="links-"><a href="#links-" class="header-link-alt">Links:</a></h2>
<ul>
<li><a href="http://gmic.eu">G’MIC main project page</a></li>
<li><a href="https://twitter.com/gmic_ip">Twitter feed</a></li>
<li><a href="http://gmic.eu/gimp.shtml">G’MIC plug-in for GIMP</a></li>
<li><a href="https://gmicol.greyc.fr">G’MIC Online web service</a></li>
<li><a href="https://discuss.pixls.us/t/release-of-gmic-2-0-0">Changelog for the <em>2.0.0</em> version</a></li>
</ul>
<hr>
<h1 id="1-g-mic-a-brief-overview">1. G’MIC: A brief overview</h1>
<p><em>G’MIC</em> is an open-source project started in August 2008, by the <a href="https://www.greyc.fr/en/image">IMAGE</a> team.
This French research team specializes in the fields of algorithms and mathematics for image processing.
<em>G’MIC</em> is distributed under the <a href="http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.txt">CeCILL</a> license (which is <em>GPL</em> compatible) and is available for multiple platforms (<em>GNU/Linux</em>, <em>MacOS</em> and <em>Windows</em>).
It provides a variety of user interfaces for manipulating generic image data, that is to say, <em>2D</em> or <em>3D</em> multispectral images (or sequences) with floating-point pixel values. This includes, of course, “classic” color images.</p>
<figure>
<a href='logo_gmic.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_logo_gmic.jpg" alt="G'MIC logo"></a>
<figcaption>
<i>Fig.1.1:</i> Logo of the <i>G’MIC</i> project, an open-source framework for image processing, and its mascot <i>Gmicky</i>.
</figcaption>
</figure>

<p>The popularity of <em>G’MIC</em> mostly comes from the <a href="http://gmic.eu/gimp.shtml">plug-in</a> it provides for <a href="http://www.gimp.org"><em>GIMP</em></a> (since 2009).
To date, there are more than <em>480</em> different filters and effects to apply to your images, which considerably enlarges the list of image processing filters
available by default in <em>GIMP</em>.</p>
<p><em>G’MIC</em> also provides a powerful and autonomous <a href="http://gmic.eu/reference.shtml">command-line interface</a>, which is complementary
to the <em>CLI</em> tools you can find in the famous <a href="http://www.imagemagick.org/"><em>ImageMagick</em></a> or <a href="http://www.graphicsmagick.org"><em>GraphicsMagick</em></a> projects.
There is also a web service <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a>, which allows you to apply image processing effects directly from a browser.
Other (but less well known) <em>G’MIC</em>-based interfaces exist: a webcam streaming tool <a href="https://www.youtube.com/watch?v=k1l3RdvwHeM"><em>ZArt</em></a>,
a plug-in for <a href="http://www.krita.org"><em>Krita</em></a>,
a subset of filters available in <a href="http://photoflowblog.blogspot.com/2014/10/two-new-photoflow-features-integration.html"><em>Photoflow</em></a>,
<a href="https://github.com/Starfall-Robles/Blender-2-G-MIC"><em>Blender</em></a> or <a href="https://github.com/NatronVFX/openfx-gmic/releases"><em>Natron</em></a>…
All these interfaces are based on the <a href="http://cimg.eu"><em>CImg</em></a> and <a href="http://gmic.eu/libgmic.shtml"><em>libgmic</em></a> libraries, that are portable,
thread-safe and multi-threaded, via the use of <a href="http://openmp.org/"><em>OpenMP</em></a>.</p>
<p><em>G’MIC</em> has more than <em>950</em> different and configurable <a href="http://gmic.eu/reference.shtml">processing functions</a>, for a library of only <em>6.5 MiB</em>,
representing a bit more than <em>180 kloc</em>.
The processing functions cover a wide spectrum of the image processing field, offering algorithms for geometric manipulations, colorimetric changes,
image filtering (denoising and detail enhancement by spectral, variational, non-local methods, etc.), motion estimation and registration,
display of primitives (<em>2D</em> or <em>3D</em> mesh objects), edge detection, object segmentation, artistic rendering, etc.
It is therefore a very generic tool for various uses, useful on the one hand for converting, visualizing and exploring image data,
and on the other hand for designing complex image processing <em>pipelines</em> and algorithms
(see <a href="http://gmic.eu/img/gmic_slides.pdf">these project slides</a> for details).</p>
<h1 id="2-a-new-versatile-interface-based-on-qt">2. A new versatile interface, based on Qt</h1>
<p>One of the major new features of this version <strong>2.0</strong> is the re-implementation of the plug-in code, <em>from scratch</em>.
The repository <a href="https://github.com/c-koi/gmic-qt"><em>G’MIC-Qt</em></a> developed by <a href="https://www.greyc.fr/users/seb">Sébastien</a> (an experienced member of
the team) is a <em>Qt</em>-based version of the plug-in interface, being as independent as possible of the widget <em>API</em> provided by <em>GIMP</em>.</p>
<figure>
<a href='gmic_200.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/gmic_200.jpg" alt="G'MIC-Qt plug-in 1"></a>
<figcaption>
<i>Fig.2.1:</i> Overview of version <b>2.0</b> of the <i>G’MIC-Qt</i> plug-in running for <i>GIMP</i>.
</figcaption>
</figure>

<p>This has several interesting consequences:</p>
<ul>
<li><p>The plug-in uses its own widgets (in <em>Qt</em>), which makes it possible to have a more flexible and customizable interface than with the <em>GTK</em> widgets
used by the <em>GIMP</em> plug-in <em>API</em>: for instance, the preview window becomes resizable at will, manages zooming by mouse wheel, and can be freely moved
to the left or to the right. A filter search engine by keywords has been added, as well as the possibility of choosing between a light
or dark theme. The management of favorite filters has been also improved and the interface even offers a new mode for setting the visibility of the filters.
Interface personalization is now a reality.</p>
</li>
<li><p>The plug-in also defines its own <em>API</em>, which is used to facilitate its integration in third party software (other than <em>GIMP</em>).
In practice, a software developer has to write a single file <code>host_software.cpp</code> implementing the functions of the <em>API</em> to make the link between the plug-in
and the host application. Currently, the file <a href="https://github.com/c-koi/gmic-qt/blob/master/src/host_gimp.cpp"><code>host_gimp.cpp</code></a> does this for <em>GIMP</em> as a host.
But there is now also a <em>stand-alone</em> version available (file <a href="https://github.com/c-koi/gmic-qt/blob/master/src/host_none.cpp"><code>host_none.cpp</code></a>) that runs
this <em>Qt</em> interface in solo mode, from a shell (with the command <code>gmic_qt</code>).</p>
</li>
<li><p><a href="https://krita.org/en/item/author/boudewijn_rempt/">Boudewijn Rempt</a>, project manager and developer of the marvelous painting software <a href="http://www.krita.org"><em>Krita</em></a>,
has also started writing such a file <a href="https://github.com/c-koi/gmic-qt/blob/master/src/host_krita.cpp"><code>host_krita.cpp</code></a> to make this “new generation” plug-in
communicate with <em>Krita</em>. In the long term, this should replace the previous <em>G’MIC</em> plug-in implementation they made (currently distributed with <em>Krita</em>),
which is aging and poses maintenance problems for developers.</p>
</li>
</ul>
<p>Minimizing the integration effort for developers, sharing the <em>G’MIC</em> plug-in code between different applications, and offering a user interface that is
as comfortable as possible, have been the main objectives of this complete redesign. As you can imagine, this rewriting required a long and sustained effort,
and we can only hope that this will raise interest among other software developers, where having a consistent set of image processing filters
could be useful (a file <code>host_blender.cpp</code> available soon? We can dream!). The animation below illustrates some of the features
offered by this new <em>Qt</em>-based interface.</p>
<figure>
<a href='gmic_qt.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/gmic_qt.gif" alt="G'MIC-Qt plug-in 2"></a>
<figcaption>
<i>Fig.2.2:</i> The new <i>G’MIC-Qt</i> interface in action.
</figcaption>
</figure>

<p>Note that the old plug-in code written in <a href="https://www.gtk.org/"><em>GTK</em></a> was also updated to work with the new version <strong>2.0</strong> of <em>G’MIC</em>,
but it has fewer features and will probably not evolve any further, unlike the <em>Qt</em> version.</p>
<h1 id="3-easing-the-work-of-cartoonists-">3. Easing the work of cartoonists…</h1>
<p>One of <em>G’MIC’s</em> purposes is to offer more filters and functions to process images.
And that is precisely something where we have not relaxed our efforts, despite the number of filters already available in the previous versions!</p>
<p>In particular, this version comes with new and improved filters to ease the colorization of line-art. Indeed, we had the chance to host the artist
<a href="https://www.davidrevoy.com/">David Revoy</a> for a few days at the lab. <em>David</em> is well known to lovers of art and free software by his multiple contributions
in these fields (in particular, his web comic <a href="https://www.peppercarrot.com/"><em>Pepper &amp; Carrot</em></a> is a must-read!).
In collaboration with <em>David</em>, we worked on the design of an original automatic line-art coloring filter, named
<a href="http://www.davidrevoy.com/article324/smart-coloring-preview-of-a-new-gmic-filter"><strong>Smart Coloring</strong></a>.</p>
<figure>
<a href='gmic_smart_coloring.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_smart_coloring.jpg" alt='Smart coloring 1'></a>
<figcaption>
<i>Fig.3.1:</i> Use of the “<b>Colorize line-art [smart coloring]</b>” filter in <i>G’MIC</i>.
</figcaption>
</figure>

<p>When drawing comics, the colorization of line-art is carried out in two successive steps:
The original drawing in gray levels (<em>Fig.3.2.[1]</em>) is first pre-colored with solid areas, i.e. by assigning a unique color to each region or distinct object
in the drawing (<em>Fig.3.2.[3]</em>). In a second step, the colourist reworks this pre-coloring, adding shadows, lights and modifying the colorimetric ambiance,
in order to obtain the final colorization result (<em>Fig.3.2.[4]</em>).
Practically, flat coloring results in the creation of a new layer that contains only piecewise constant color zones, thus forming a colored partition of the plane.
This layer is then merged with the original line-art to get the colored rendering (merging both in <em>multiplication</em> mode, typically).</p>
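The “multiply” merge mentioned above is simple per-channel arithmetic; here is a minimal generic illustration in Python (not G'MIC code; the pixel values are made-up 8-bit RGB examples):

```python
def multiply_blend(lineart_px, flat_px):
    """'Multiply' blend of two 8-bit RGB pixels: out = a * b / 255 per channel.
    Black ink (0) forces the result to black; white paper (255) lets the
    flat color underneath show through unchanged."""
    return tuple((a * b) // 255 for a, b in zip(lineart_px, flat_px))

# White paper over a flat orange keeps the orange; black ink stays black.
print(multiply_blend((255, 255, 255), (200, 80, 40)))  # -> (200, 80, 40)
print(multiply_blend((0, 0, 0), (200, 80, 40)))        # -> (0, 0, 0)
```

This is why the flat-color layer can be edited freely: the line-art survives the merge wherever the ink is dark.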
<figure>
<a href='teaser.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_teaser.jpg" alt='Smart coloring 2'></a>
<figcaption>
<i>Fig.3.2:</i> The different steps of a line-art coloring process (source: <i>David Revoy</i>).
</figcaption>
</figure>

<p>Artists admit it themselves: flat coloring is a long and tedious process, requiring patience and precision.
Classical tools available in digital painting or image editing software do not make this task easy.
For example, even most filling tools (<em>bucket fill</em>) do not handle discontinuities in drawn lines very well (<em>Fig.3.3.a</em>),
and even worse when lines are anti-aliased.
It is then common for the artist to perform flat coloring by painting the colors manually with a brush on a separate layer (<em>Fig.3.3.b</em>),
with all the precision problems that this supposes (especially around the contour lines, <em>Fig.3.3.c</em>).
See also <a href="http://www.davidrevoy.com/article240/gmic-line-art-colorization">this link</a> for more details.</p>
<figure>
<a href='problemes2.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/problemes2.jpg" alt='Smart coloring 3'></a>
<figcaption>
<i>Fig.3.3:</i> Classical problems encountered when doing flat coloring (source: <i>David Revoy</i>).
</figcaption>
</figure>

<p>It may even happen that the artist decides to explicitly constrain his style of drawing, for instance by using aliased brushes in a higher resolution image,
and/or by forcing himself to draw only connected contours, in order to ease the flat colorization work that has to be done afterwards.</p>
<p>The <strong>Smart Coloring</strong> filter developed in version <strong>2.0</strong> of <em>G’MIC</em> makes it possible to automatically pre-color an input line-art with little manual work.
First, it analyses the local geometry of the contour lines (estimating their normals and curvatures).
Second, it (virtually) performs contour auto-completion using <a href="https://en.wikipedia.org/wiki/Spline_(mathematics)"><em>spline curves</em></a>.
This virtual closure then allows the algorithm to fill objects whose contours are disconnected.
Besides, this filter has the advantage of being quite fast to compute and gives coloring results of similar quality to more expensive optimization techniques
used in some proprietary software.
This algorithm smoothly manages anti-aliased contour lines, and has two modes of colorization:
by random colors (<em>Fig.3.2.[2]</em> and <em>Fig.3.4</em>) or guided by color markers placed beforehand by the user (<em>Fig.3.5</em>).</p>
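The “virtually close the contours, then fill” idea can be sketched with a deliberately crude stand-in (Python; the actual filter estimates normals and curvatures and completes contours with splines, while here we simply dilate the line mask, which is enough to show why plain bucket fill fails on gapped strokes):

```python
from collections import deque

def dilate(lines, r):
    """Thicken a binary line mask by radius r (Chebyshev), which closes
    gaps of up to ~2r pixels -- a crude stand-in for spline completion."""
    h, w = len(lines), len(lines[0])
    return [[int(any(lines[yy][xx]
                     for yy in range(max(y - r, 0), min(y + r + 1, h))
                     for xx in range(max(x - r, 0), min(x + r + 1, w))))
             for x in range(w)] for y in range(h)]

def flood_fill(mask, seed):
    """Pixels reachable from seed without crossing the (dilated) lines."""
    h, w = len(mask), len(mask[0])
    seen, todo = {seed}, deque([seed])
    while todo:
        y, x = todo.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx] \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                todo.append((ny, nx))
    return seen

# A vertical stroke with a one-pixel gap: a plain bucket fill leaks through,
# but filling on the dilated mask keeps the two sides separate.
stroke = [[0, 0, 1, 0, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 0, 0, 0],   # <- the gap
          [0, 0, 1, 0, 0],
          [0, 0, 1, 0, 0]]
print(len(flood_fill(stroke, (0, 0))))             # leaks through: 21 pixels
print(len(flood_fill(dilate(stroke, 1), (0, 0))))  # contained: 5 pixels
```

The real algorithm only closes the contours *virtually* (the completed strokes never appear in the output), which is what dilation cannot do, but the failure mode it fixes is the same.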
<figure>
<a href='rain.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/rain.gif" alt='Smart coloring 4'></a>
<figcaption>
<i>Fig.3.4:</i> Using the <i>G’MIC</i> “<b>Smart Coloring</b>” filter in random color mode, for line-art colorization (source: <i>David Revoy</i>).
</figcaption>
</figure>

<p>In “random” mode, the filter generates a piecewise constant layer that is very easy to recolor with correct hues afterwards.
This layer indeed contains only flat color regions, and the classic bucket fill tool is effective here to quickly reassign a coherent color
to each existing region synthesized by the algorithm.</p>
<p>In the user-guided markers mode, color spots placed by the user are extrapolated in a way that respects the geometry of the original drawing as much as possible,
taking into account the discontinuities in the pencil lines, as clearly illustrated by the figure below:</p>
<figure>
<a href='girl_colorization.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/girl_colorization.gif" alt='Smart coloring 5'></a>
<figcaption>
<i>Fig.3.5:</i> Using the <i>G’MIC</i> “<b>Smart Coloring</b>” filter in user-guided color markers mode, for line-art colorization (source: <i>David Revoy</i>).
</figcaption>
</figure>

<p>This innovative flat coloring algorithm has been pre-published on <em>HAL</em> (in French):
<a href="https://hal.archives-ouvertes.fr/hal-01490269"><em>A semi-guided high-performance flat coloring algorithm for line-arts</em></a>.
Curious readers can find all the technical details of the algorithm there.
The recurring discussions we had with <em>David Revoy</em> on the development of this filter enabled us to improve the algorithm step by step,
until it became really usable in production. This method has been used successfully (and therefore validated) for the pre-colorization
of the whole <a href="https://www.peppercarrot.com/en/article412/episode-22-the-voting-system">episode 22</a> of the webcomic <em>Pepper &amp; Carrot</em>.</p>
<p>The wisest of you know that <em>G’MIC</em> already had a <a href="http://www.davidrevoy.com/article240/gmic-line-art-colorization">line-art colorization filter</a>!
True, but unfortunately it did not manage disconnected contour lines so well (such as the example in <em>Fig.3.5</em>),
and could then require the user to place a large number of color spots to guide the algorithm properly.
In practice, the performance of the new flat coloring algorithm is far superior.</p>
<p>And since the algorithm has no objection to anti-aliased lines, why not create some?
That is the purpose of another new filter, “<strong>Repair / Smooth [antialias]</strong>”, able to add anti-aliasing
to lines in cartoons that were originally drawn with aliased brushes.</p>
<figure>
<a href='s_gmic_antialiasing.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_antialiasing.jpg" alt='Smooth [antialias]'></a>
<figcaption>
<i>Fig.3.6:</i> Filter “<b>Smooth [antialias]</b>” smooths contours to reduce the aliasing effect in cartoons (source: <i>David Revoy</i>).
</figcaption>
</figure>

<h1 id="4-not-to-forget-the-photographers-">4. …Not to forget the photographers!</h1>
<p><em>“Colorizing drawings is nice, but my photos are already in color!”</em>, kindly remarks the impatient photographer. Don’t be cruel!
Many new filters related to the transformation and enhancement of photos have also been added in <em>G’MIC</em> <strong>2.0</strong>. Let’s take a quick look at what we have.</p>
<h2 id="4-1-cluts-and-colorimetric-transformations"><a href="#4-1-cluts-and-colorimetric-transformations" class="header-link-alt">4.1. <em>CLUTs</em> and colorimetric transformations</a></h2>
<p><a href="http://www.quelsolaar.com/technology/clut.html"><em>CLUTs</em></a> (<em>Color Lookup Tables</em>) are functions for colorimetric transformations defined in the <em>RGB</em> cube:
for each color <em>(Rs,Gs,Bs)</em> of a source image <em>Is</em>, a <em>CLUT</em> assigns a new color <em>(Rd,Gd,Bd)</em> transferred to the destination image <em>Id</em>
at the same position. These processing functions may be truly arbitrary, thus very different effects can be obtained according to the different <em>CLUTs</em> used.
Photographers are therefore generally fond of them (especially since these <em>CLUTs</em> are also a good way to simulate the color rendering of certain old films).</p>
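Concretely, a CLUT can be pictured as an n×n×n table indexed by quantized R, G and B values. A minimal nearest-node lookup in Python (our own sketch, with made-up function names; real implementations interpolate between nodes rather than snapping to the nearest one):

```python
def make_identity_clut(n=17):
    """Build an n*n*n identity CLUT: every node maps back to its own RGB
    color. Interesting CLUTs store arbitrary target colors at the nodes."""
    step = 255.0 / (n - 1)
    return [[[(round(r * step), round(g * step), round(b * step))
              for b in range(n)]
             for g in range(n)]
            for r in range(n)]

def apply_clut(clut, pixel):
    """Map a source color (Rs,Gs,Bs) to a destination color (Rd,Gd,Bd)
    by nearest-node lookup in the 3D table."""
    n = len(clut)
    r, g, b = (round(c * (n - 1) / 255) for c in pixel)
    return clut[r][g][b]

print(apply_clut(make_identity_clut(), (255, 0, 128)))  # -> (255, 0, 128)
```

Replacing the node colors with anything else (a film-emulation palette, say) turns the same lookup into an arbitrary colorimetric transformation.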
<p>In practice, a <em>CLUT</em> is stored as a <em>3D</em> volumetric color image (possibly “unwrapped” along the <em>z = B</em> axis to get
a <a href="http://gmic.eu/film_emulation/various/clut/golden.png"><em>2D</em> version</a>).
This may quickly become cumbersome when several hundreds of <em>CLUTs</em> have to be managed.
Fortunately, <em>G’MIC</em> has a quite efficient <em>CLUT</em> compression algorithm (already mentioned in a <a href="https://pixls.us/blog/2016/05/g-mic-1-7-1">previous article</a>),
which has been improved version after version. So it was finally in a quite relaxed atmosphere that we added more than <strong>60</strong> new <em>CLUT</em>-based transformations in <em>G’MIC</em>,
for a total of <strong>359</strong> usable <em>CLUTs</em>, all stored in a data file that does not exceed <em>1.2 MiB</em>.
By the way, let us thank
<a href="https://patdavid.net/">Pat David</a>,
<a href="http://www.digicrea.be/haldclut-set-style-a-la-nik-software">Marc Roovers</a> and
<a href="http://blog.sowerby.me/fuji-Film-simulation-profiles/">Stuart Sowerby</a> for their contributions to these color transformations.</p>
<figure>
<a href='a891743705fd011bebe68b1f88e2f0b90fddbdb1.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/a891743705fd011bebe68b1f88e2f0b90fddbdb1.jpg" alt='CLUTs'></a>
<figcaption>
<i>Fig.4.1.1:</i> Some of the new <i>CLUT</i>-based transformations available in <i>G’MIC</i> (source: <i>Pat David</i>).
</figcaption>
</figure>

<p>But what if you already have your own <em>CLUT</em> files and want to use them in <em>GIMP</em>? No problem!
The new filter “<strong>Film emulation / User-defined</strong>” allows you to apply such transformations from a <em>CLUT</em> data file, with partial support for files with the
extension <code>.cube</code> (a <a href="http://wwwimages.adobe.com/content/dam/Adobe/en/products/speedgrade/cc/pdfs/cube-lut-specification-1.0.pdf"><em>CLUT</em> file format</a> proposed
by <em>Adobe</em>, and encoded in <em>ASCII</em> <code>o_O</code>!).</p>
<p>And for the most demanding, who are not satisfied with the existing pre-defined <em>CLUTs</em>,
we have designed a very versatile filter, “<strong>Colors / Customize CLUT</strong>”, that allows the user to build their own custom <em>CLUT</em> <em>from scratch</em>:
the user places colored keypoints in the <em>RGB</em> color cube and these markers are interpolated in <em>3D</em>
(according to a <a href="https://en.wikipedia.org/wiki/Delaunay_triangulation">Delaunay triangulation</a>)
in order to rebuild a complete <em>CLUT</em>, i.e. a dense function in <em>RGB</em>.
This is extremely flexible, as in the example below, where the filter has been used to change the colorimetric ambiance of a landscape,
mainly altering the color of the sky.
Of course, the synthesized <em>CLUT</em> can be saved as a file and reused later for other photographs,
or even in other software supporting this type of color transformations
(for example <a href="http://rawpedia.rawtherapee.com/Film_Simulation">RawTherapee</a> or
<a href="http://www.darktable.org/2016/05/colour-manipulation-with-the-colour-checker-lut-module/">Darktable</a>).</p>
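The “keypoints in the RGB cube, densely interpolated” idea can be sketched with inverse-distance weighting, a simpler stand-in for the Delaunay-based interpolation the filter actually uses (Python; the keypoints and colors below are made up for illustration):

```python
def interpolate_color(keypoints, pixel):
    """Blend the target colors of (source_rgb, target_rgb) keypoints,
    weighted by inverse squared distance to `pixel` in the RGB cube."""
    num, den = [0.0, 0.0, 0.0], 0.0
    for src, dst in keypoints:
        d2 = sum((a - b) ** 2 for a, b in zip(src, pixel))
        if d2 == 0:
            return dst  # exact hit on a keypoint
        w = 1.0 / d2
        den += w
        for i in range(3):
            num[i] += w * dst[i]
    return tuple(round(v / den) for v in num)

# Push sky blues toward orange while pinning blacks and whites in place.
keys = [((100, 150, 255), (255, 160, 80)),   # blue -> warm orange
        ((0, 0, 0), (0, 0, 0)),              # keep shadows
        ((255, 255, 255), (255, 255, 255))]  # keep highlights
print(interpolate_color(keys, (100, 150, 255)))  # -> (255, 160, 80)
```

Evaluating such a function at every node of a 3D grid yields a dense CLUT that can then be saved and reused, exactly as described above.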
<figure>
<a href='gmic_custom_clut.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_custom_clut.jpg" alt='Customize CLUT 1'></a>
<figcaption>
<i>Fig.4.1.2:</i> Filter “<b>Customize CLUT</b>” used to design a custom color transform in the <i>RGB</i> cube.
</figcaption>
</figure>

<figure>
<a href='coast.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/coast.gif" alt='Customize CLUT 2'></a>
<figcaption>
<i>Fig.4.1.3:</i> Result of the custom colorimetric transformation applied to a landscape.
</figcaption>
</figure>

<p>To stay in the field of color manipulation, let us also mention the appearance of the filter “<strong>Colors / Retro fade</strong>”, which creates a “retro” rendering of
an image, with grain generated by successive averages of random quantizations of the input color image.</p>
<figure>
<a href='gmic_retrofade.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_retrofade.jpg" alt='Retro fade'></a>
<figcaption>
<i>Fig.4.1.4:</i> Filter “<b>Retro fade</b>” in the <i>G’MIC</i> plug-in.
</figcaption>
</figure>


<h2 id="4-2-making-the-details-pop-out"><a href="#4-2-making-the-details-pop-out" class="header-link-alt">4.2. Making the details pop out</a></h2>
<p>Many photographers are looking for ways to process their digital photographs so as to bring out the smallest details of their images,
sometimes even to excess, and we can find some of them in the <a href="https://discuss.pixls.us/"><em>pixls.us</em></a> forum.
Looking at how they perform allowed us to add several new filters for detail and contrast enhancement in <em>G’MIC</em>.
In particular, we can mention the filters “<strong>Artistic / Illustration look</strong>” and “<strong>Artistic / Highlight bloom</strong>”, which are direct re-implementations of the tutorials
and scripts written by <a href="https://discuss.pixls.us/t/highlight-bloom-and-photoillustration-look">Sébastien Guyader</a>, as well as the filter
“<strong>Light &amp; Shadows / Pop shadows</strong>” suggested by <a href="https://discuss.pixls.us/t/easy-tone-mapping-in-gimp-with-reduced-fat-cheese">Morgan Hardwood</a>.
Being immersed in such a community of photographers and cool guys always gives opportunities to implement interesting new effects!</p>
<figure>
<a href='girl_hbloom.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/girl_hbloom.gif" alt='Illustration look'></a>
<figcaption>
<i>Fig.4.2.1:</i> Filters “<b>Illustration look</b>” and “<b>Highlight bloom</b>” applied to a portrait image.
</figcaption>
</figure>

<p>In the same vein, <em>G’MIC</em> gets its own implementation of the <a href="http://www.ipol.im/pub/art/2014/107">Multi-scale Retinex</a> algorithm,
something that was <a href="https://docs.gimp.org/en/plug-in-retinex.html">already present</a> in <em>GIMP</em>, but here enriched with additional controls
to improve the luminance consistency in images.</p>
<figure>
<a href='501f32dbfcfefd9a761162a50fead5ca33e47bdb.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_501f32dbfcfefd9a761162a50fead5ca33e47bdb.jpg" alt='Retinex'></a>
<figcaption>
<i>Fig.4.2.2:</i> Filter “<b>Retinex</b>” for improving luminance consistency.
</figcaption>
</figure>

<p>Our friend and great contributor to <em>G’MIC</em>, <a href="http://www.irisa.fr/vista/Equipe/People/Jerome.Boulanger.english.html"><em>Jérome Boulanger</em></a>,
also implemented and added a dehazing filter, “<strong>Details / Dcp dehaze</strong>”, to attenuate the fog effect in photographs, based on the
<a href="http://mmlab.ie.cuhk.edu.hk/archive/2011/Haze.pdf"><em>Dark Channel Prior</em></a> algorithm.
Setting the parameters of this filter is somewhat tricky, but it sometimes gives spectacular results.</p>
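The prior behind this filter is easy to state: in haze-free regions, at least one channel of every local patch is nearly black, whereas haze lifts all channels. A minimal dark-channel computation in Python (our own sketch of the statistic the algorithm starts from, not the actual filter code):

```python
def dark_channel(img, radius=1):
    """Min over R,G,B per pixel, then min over a (2*radius+1)^2 patch.
    High values signal haze: no channel in the patch is locally dark."""
    h, w = len(img), len(img[0])
    mins = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    return [[min(mins[yy][xx]
                 for yy in range(max(y - radius, 0), min(y + radius + 1, h))
                 for xx in range(max(x - radius, 0), min(x + radius + 1, w)))
             for x in range(w)] for y in range(h)]

# A washed-out gray pixel keeps a high per-pixel minimum (haze suspect);
# saturated or dark pixels drop toward 0.
img = [[(200, 210, 220), (10, 80, 30)],
       [(5, 5, 5), (240, 240, 240)]]
print(dark_channel(img, radius=0))  # per-pixel channel minima
print(dark_channel(img, radius=1))  # patch minima
```

The full algorithm then estimates the atmospheric light and a transmission map from this statistic before inverting the haze model.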
<figure>
<a href='gmic_dehaze.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_dehaze.jpg" alt='DCP dehaze 1'></a>
<a href='dehaze.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/dehaze.gif" alt='DCP dehaze 2'></a>
<figcaption>
<i>Fig.4.2.3:</i> Filter “<b>DCP Dehaze</b>” to attenuate the fog effect.
</figcaption>
</figure>

<p>And to finish with this subsection, let us mention the implementation in <em>G’MIC</em> of the
<a href="http://www.cse.cuhk.edu.hk/leojia/projects/rollguidance/"><em>Rolling Guidance</em></a> algorithm, a method for simplifying images that has become a
key step in many newly added filters. This is notably the case for a quite cool image <a href="https://en.wikipedia.org/wiki/Image_editing#Sharpening_and_softening_images"><em>sharpening</em></a> filter,
available in “<strong>Details / Sharpen [texture]</strong>”.
This filter works in two successive steps:
first, the image is separated into a <em>texture</em> component and a <em>color</em> component; then the details of the <em>texture</em> component only are enhanced before
the image is recomposed. This approach brings out all the small details of an image, while minimizing the undesired
halos near the contours, a recurring problem with more classical sharpening methods (such as the well-known
<a href="https://en.wikipedia.org/wiki/Unsharp_masking"><em>Unsharp Mask</em></a>).</p>
<figure>
<a href='lion_sharpen.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/lion_sharpen.gif" alt='Sharpen [texture]'></a>
<figcaption>
<i>Fig.4.2.4:</i> The “<b>Sharpen [texture]</b>” filter shown for two different enhancement amplitudes.
</figcaption>
</figure>
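<p>The two-step base/detail principle behind this kind of filter is easy to sketch in Python with NumPy. In the sketch below a plain Gaussian blur stands in for the Rolling Guidance smoothing, which is a deliberate simplification; the structure (split the image, amplify only the texture residual, recompose) is the point:</p>

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur (zero-padded), NumPy only.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def sharpen_texture(img, sigma=2.0, amplitude=2.0):
    # Step 1: split the image into a smooth 'color' base and a 'texture' residual.
    base = gaussian_blur(img.astype(np.float64), sigma)
    texture = img - base
    # Step 2: amplify only the residual, then recompose.
    return base + amplitude * texture
```

With `amplitude=1` the image is returned unchanged; larger values boost only the fine details, which is why halos around strong contours stay limited compared to a plain unsharp mask.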

<h2 id="4-3-masking-by-color"><a href="#4-3-masking-by-color" class="header-link-alt">4.3. Masking by color</a></h2>
<p>As you may know, many photo retouching techniques require the creation of one or several “masks”, that is,
the isolation of specific areas of an image to receive differentiated processing.
For example, the very common technique of
<a href="http://goodlight.us/writing/luminositymasks/luminositymasks-1.html">luminosity masks</a> is a way to treat shadows and highlights differently
in an image. <em>G’MIC</em> <strong>2.0</strong> introduces an interesting new filter, “<strong>Colors / Color mask [interactive]</strong>”, that implements a relatively sophisticated
(albeit computationally demanding) algorithm to help create complex masks. This filter asks the user to hover the mouse over a few pixels that are representative of
the region to keep. The algorithm learns the corresponding set of colors or luminosities in real time and then deduces the set of pixels that
compose the mask for the whole image (using <a href="https://en.wikipedia.org/wiki/Principal_component_analysis">Principal Component Analysis</a> on the <em>RGB</em> samples).</p>
<p>Once the mask has been generated by the filter, the user can easily modify the corresponding pixels with any type of processing. The example below illustrates the use
of this filter to drastically change the color of a car.</p>
<figure>
<a href='car_hue2.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/car_hue2.gif" alt='Color mask [interactive]'></a>
<figcaption>
<i>Fig.4.3.1:</i> Changing the color of a car, using the filter “<b>Color mask [interactive]</b>”.
</figcaption>
</figure>

<p>It takes no more than a minute and a half to complete, as shown in the video below:</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/fmvGRAnKJgs" frameborder="0" allowfullscreen></iframe>
<figcaption>
<i>Fig.4.3.2:</i> Changing the color of a car, using the filter “<b>Color mask [interactive]</b>” (video tutorial).
</figcaption>
</figure>

<p>This other video demonstrates the same technique, used this time to change the color of the sky in a landscape.</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/K2nkbkqYquc" frameborder="0" allowfullscreen></iframe>
<figcaption>
<i>Fig.4.3.3:</i> Changing the color of the sky in a landscape, using the filter “<b>Color mask [interactive]</b>” (video tutorial).
</figcaption>
</figure>
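<p>To give a rough idea of the statistical side of such a mask, here is a hypothetical Python/NumPy sketch (not the actual filter code): the sample pixels hovered by the user define a color distribution, and every image pixel statistically close to that distribution, measured by the Mahalanobis distance derived from its principal components, joins the mask:</p>

```python
import numpy as np

def color_mask(image, samples, threshold=3.0):
    # Model the user-picked sample colors by their mean and covariance.
    samples = np.asarray(samples, dtype=np.float64)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    inv_cov = np.linalg.inv(cov)
    # Squared Mahalanobis distance of every pixel to the sample distribution.
    diff = image.reshape(-1, 3).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    # Pixels statistically close to the samples belong to the mask.
    return (d2 <= threshold**2).reshape(image.shape[:2])
```

The real filter refines this interactively as more pixels are hovered; the sketch only shows why a handful of samples is enough to isolate a coherently colored region.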

<h1 id="5-and-for-the-others-">5. And for the others…</h1>
<p>Since illustrators and photographers are now satisfied, let’s move on to some more exotic filters, recently added to <em>G’MIC</em>,
with interesting outcomes!</p>
<h2 id="5-1-average-and-median-of-a-series-of-images"><a href="#5-1-average-and-median-of-a-series-of-images" class="header-link-alt">5.1. Average and median of a series of images</a></h2>
<p>Have you ever wondered how to easily estimate the average or median frame of a sequence of input images?
The libre <em>aficionado</em> <a href="https://patdavid.net/">Pat David</a>, creator of the site <a href="https://pixls.us/"><em>pixls.us</em></a>, has often asked this question:
first when he tried to denoise images <a href="https://patdavid.net/2013/05/noise-removal-in-photos-with-median_6.html">by combining several shots</a> of the same scene,
then when he wanted to simulate <a href="https://patdavid.net/2013/09/faking-nd-filter-for-long-exposure.html">a longer exposure time</a> by averaging photographs taken successively, and finally when calculating averages of various kinds of images for artistic purposes (for example, frames of
<a href="https://patdavid.net/2013/12/mean-averaged-music-videos-g.html">music video clips</a>,
<a href="https://patdavid.net/2012/08/imagemagick-average-blending-files.html">covers of <em>Playboy</em> magazine</a> or
<a href="https://patdavid.net/2012/08/more-averaging-photos-martin-schoeller.html">celebrity portraits</a>).</p>
<p>Hence, with his cooperation, we added the new commands <code>-median_files</code>, <code>-median_videos</code>, <code>-average_files</code> and <code>-average_videos</code> to compute all these image features very easily
using the <em>CLI</em> tool <code>gmic</code>. The example below shows the results obtained from a sub-sequence of the
“<a href="https://peach.blender.org/"><em>Big Buck Bunny</em></a>” video. We have simply invoked the following commands from the <em>Bash</em> shell:</p>
<pre><code class="lang-sh">$ gmic -average_video bigbuckbunny.mp4 -normalize 0,255 -o average.jpg
$ gmic -median_video bigbuckbunny.mp4 -normalize 0,255 -o median.jpg
</code></pre>
<figure>
<a href='s_bbb.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_bbb.gif" alt='Big buck bunny 1'></a>
<figcaption>
<i>Fig.5.1.1:</i> Sequence from the “<i>Big Buck Bunny</i>” video, directed by the Blender Foundation.
</figcaption>
</figure>

<figure>
<a href='bbb_avg.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_bbb_avg.jpg" alt='Big buck bunny 2'></a>
<figcaption>
<i>Fig.5.1.2:</i> Result: Average image of the “<i>Big Buck Bunny</i>” sequence above.
</figcaption>
</figure>

<figure>
<a href='bbb_median.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_bbb_median.jpg" alt='Big buck bunny 3'></a>
<figcaption>
<i>Fig.5.1.3:</i> Result: Median image of the “<i>Big Buck Bunny</i>” sequence above.
</figcaption>
</figure>
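<p>The reduction itself is simple once the frames are decoded; in Python with NumPy, a per-pixel average and median over a stack of frames would look like this (an illustrative equivalent with frame decoding omitted, not the G’MIC code):</p>

```python
import numpy as np

def average_and_median(frames):
    # Stack the frames along a new axis, then reduce per pixel.
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0), np.median(stack, axis=0)
```

The median tends to erase moving objects, since each pixel keeps its most “typical” value over time, while the average blurs them into ghostly trails. That is exactly the difference visible between the two renderings above.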

<p>And to stay in the field of video processing, we can also mention the addition of the commands <code>-morph_files</code> and <code>-morph_video</code> that render temporal interpolations
of video sequences, taking the estimated intra-frame object motion into account, thanks to a quite smart variational and multi-scale estimation algorithm.</p>
<p>The video below illustrates the rendering difference obtained for the retiming of a sequence using temporal interpolation,
with (<em>right</em>) and without (<em>left</em>) motion estimation.</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/rjfo5gi5XOs" frameborder="0" allowfullscreen></iframe>
<figcaption>
<i>Fig.5.1.4:</i> Video retiming using <i>G’MIC</i> temporal morphing technique.
</figcaption>
</figure>

<h2 id="5-2-deformations-and-glitch-art-"><a href="#5-2-deformations-and-glitch-art-" class="header-link-alt">5.2. Deformations and “Glitch Art”</a></h2>
<p>Those who like to mistreat their images aggressively will be delighted to learn that a bunch of new image deformation and degradation effects
have appeared in <em>G’MIC</em>.</p>
<p>First of all, the filter “<strong>Deformations / Conformal maps</strong>” allows one to distort an image using <a href="https://en.wikipedia.org/wiki/Conformal_map">conformal maps</a>.
These deformations have the property of preserving the angles locally, and are most often expressed as functions of complex numbers.
In addition to playing with predefined deformations, this filter allows budding mathematicians to experiment with their own complex formulas.</p>
<figure>
<a href='gmic_conformalmaps.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_conformalmaps.jpg" alt='Conformal maps'></a>
<figcaption>
<i>Fig.5.2.1:</i> Filter “<b>Conformal maps</b>” applying an angle-preserving transformation to the image of <i>Mona Lisa</i>.
</figcaption>
</figure>
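<p>The principle of warping an image through a complex function is compact enough to sketch in Python with NumPy. Nearest-neighbour backward mapping keeps it short; this is a simplified illustration, while the actual filter interpolates properly and ships many predefined formulas:</p>

```python
import numpy as np

def conformal_warp(img, f):
    # Backward mapping: for each output pixel, evaluate f(z) on the
    # complex plane and sample the source image there (nearest neighbour).
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Map the pixel grid to roughly the [-1,1] x [-1,1] complex square.
    z = (x - w / 2) / (w / 2) + 1j * (y - h / 2) / (h / 2)
    fz = f(z)
    # Back to pixel coordinates, clipped to the image (poles land on the border).
    xs = np.clip(np.nan_to_num(fz.real) * (w / 2) + w / 2, 0, w - 1).astype(int)
    ys = np.clip(np.nan_to_num(fz.imag) * (h / 2) + h / 2, 0, h - 1).astype(int)
    return img[ys, xs]
```

For instance, `conformal_warp(img, lambda z: z**2)` squares the plane; any holomorphic `f` preserves angles locally, which is what gives these deformations their characteristic look.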

<p>Fans of <a href="https://en.wikipedia.org/wiki/Glitch_art"><em>Glitch Art</em></a> will also be interested in several new filters whose renderings
look like image encoding or compression artifacts. The effect “<strong>Degradations / Pixel sort</strong>” sorts the pixels of a picture by row or by
column according to different criteria and within optionally masked regions, as initially described on
<a href="http://satyarth.me/articles/pixel-sorting/">this page</a>.</p>
<figure>
<a href='girl_sorted.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/girl_sorted.jpg" alt='Pixel sort'></a>
<figcaption>
<i>Fig.5.2.2:</i> Filter “<b>Pixel sort</b>” for rendering a kind of “Glitch Art” effect.
</figcaption>
</figure>
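<p>A bare-bones pixel sort fits in a few lines of Python with NumPy. This sketch sorts whole rows by luminance; the real filter also supports columns, other sorting criteria and masked intervals:</p>

```python
import numpy as np

def pixel_sort(img, axis=1):
    # Brightness of each pixel (Rec. 601 luma weights).
    luma = img @ np.array([0.299, 0.587, 0.114])
    # Reorder pixels along the chosen axis by increasing brightness.
    order = np.argsort(luma, axis=axis)
    return np.take_along_axis(img, order[..., None], axis=axis)
```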

<p>“<strong>Degradations / Pixel sort</strong>” also has two little brothers, the filters “<strong>Degradations / Flip &amp; rotate blocks</strong>” and “<strong>Degradations / Warp by intensity</strong>”.
The first divides an image into blocks and lets you rotate or mirror them, potentially only for certain color characteristics
(like hue or saturation, for instance).</p>
<figure>
<a href='gmic_flip.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_flip.jpg" alt='Flip and rotate blocks'></a>
<figcaption>
<i>Fig.5.2.3:</i> Filter “<b>Flip &amp; rotate blocks</b>” applied to the hue only to obtain a “Glitch Art” effect.
</figcaption>
</figure>

<p>The second locally deforms an image with more or less amplitude, according to its local geometry.
Here again, this can lead to the generation of very strange images.</p>
<figure>
<a href='gmic_warp.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_warp.jpg" alt='Warp by intensity'></a>
<figcaption>
<i>Fig.5.2.4:</i> Filter “<b>Warp by intensity</b>” applied to the image of <i>Mona Lisa</i> (poor <i>Mona</i>!).
</figcaption>
</figure>

<p>It should be noted that these filters were largely inspired by the
<a href="http://forums.getpaint.net/index.php?/topic/30276-glitch-effect-plug-in-polyglitch-v14b/"><em>Polyglitch</em></a> plug-in,
available for <a href="https://www.getpaint.net/"><em>Paint.NET</em></a>, and have been implemented after a suggestion from a friendly user
(yes, yes, we try to listen to our most friendly users!).</p>
<h2 id="5-3-image-simplification"><a href="#5-3-image-simplification" class="header-link-alt">5.3. Image simplification</a></h2>
<p>What else do we have in store? A new image abstraction filter, “<strong>Artistic / Sharp abstract</strong>”, based on the <em>Rolling Guidance</em> algorithm mentioned before.
This filter applies contour-preserving smoothing to an image, and its main effect is to remove texture.
The figure below illustrates its use to generate several levels of abstraction of the same input image, at different smoothing scales.</p>
<figure>
<a href='lion_abstract.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/lion_abstract.gif" alt='Sharp abstract'></a>
<figcaption>
<i>Fig.5.3.1:</i> Creating abstractions of an image via the filter “<b>Sharp abstract</b>”.
</figcaption>
</figure>

<p>In the same vein, <em>G’MIC</em> also gets a filter “<strong>Artistic / Posterize</strong>” which degrades an image to simulate <a href="https://en.wikipedia.org/wiki/Posterization">posterization</a>.
Unlike the filter of the same name available by default in <em>GIMP</em> (which mainly tries to reduce the number of colors, i.e. do <a href="https://en.wikipedia.org/wiki/Color_quantization">color quantization</a>),
our version adds spatial simplification and filtering to get closer to the rendering of old posters.</p>
<figure>
<a href='tiger_posterize.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/tiger_posterize.gif" alt='Posterize'></a>
<figcaption>
<i>Fig.5.3.2:</i> Filter “<b>Posterize</b>” of <i>G’MIC</i>, compared to the filter of the same name available by default in <i>GIMP</i>.
</figcaption>
</figure>
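<p>The difference in approach can be illustrated with a tiny Python/NumPy sketch on a grayscale image: GIMP-style posterization is the quantization step alone, while the G’MIC filter simplifies the image spatially as well. Here a plain Gaussian blur stands in for its much smarter edge-preserving filtering, which is an assumption for illustration only:</p>

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur (zero-padded), NumPy only.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def posterize(img, levels=4, sigma=1.5, spatial=True):
    # Optional spatial simplification, then quantization to a few levels.
    img = img.astype(np.float64)
    if spatial:
        img = gaussian_blur(img, sigma)
    step = 255.0 / (levels - 1)
    return np.round(img / step) * step
```

With `spatial=False` this reduces to plain color quantization; the spatial step is what smooths away small details before the levels snap into flat zones.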


<h2 id="5-4-other-filters"><a href="#5-4-other-filters" class="header-link-alt">5.4. Other filters</a></h2>
<p>If you still want more (and in this case one could say you are damn greedy!), we will end this section by discussing
some new, but unclassifiable, filters.</p>
<p>We start with the filter “<strong>Artistic / Diffusion tensors</strong>”, which displays a field of diffusion tensors, calculated from the structure tensors of an image
(structure tensors are symmetric positive-definite matrices, classically used for estimating the local image geometry).
To be quite honest, this feature had not originally been developed for an artistic purpose, but users of the plug-in came across it by chance and asked
for it to be turned into a <em>GIMP</em> filter. And indeed, the result is quite pretty, isn’t it?</p>
<figure>
<a href='26ec897bf8cee6af17b4af60c1ec8a22309d797e.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_26ec897bf8cee6af17b4af60c1ec8a22309d797e.jpg" alt='Diffusion tensors'></a>
<figcaption>
<i>Fig.5.4.1:</i> The “<b>Diffusion tensors</b>” filter and its multitude of colored ellipses.
</figcaption>
</figure>

<p>From a technical point of view, this filter was actually an opportunity to introduce new drawing features into the <em>G’MIC</em> mathematical evaluator,
and it has now become quite easy to develop <em>G’MIC</em> scripts for rendering custom visualizations of various image data.
This is what has been done, for instance, with the command <code>-display_quiver</code>, reimplemented <em>from scratch</em>, which can generate this type of rendering:</p>
<figure>
<a href='b99e02c28583b00e3f8bd12e6b99b09b9dfe1a41.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_b99e02c28583b00e3f8bd12e6b99b09b9dfe1a41.jpg" alt='-display_quiver'></a>
<figcaption>
<i>Fig. 5.4.2:</i> Rendering vector fields with the <i>G’MIC</i> command <tt><code>-display_quiver</code></tt>.
</figcaption>
</figure>

<p>For lovers of textures, we can mention the appearance of two fun new effects: First, the “<strong>Patterns / Camouflage</strong>” filter. As its name suggests,
this filter produces a military camouflage texture.</p>
<figure>
<a href='gmic_camouflage.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_camouflage.jpg" alt='Camouflage'></a>
<figcaption>
<i>Fig. 5.4.3:</i> Filter “<b>Camouflage</b>”, to be printed on your T-shirts to go unnoticed at parties!
</figcaption>
</figure>

<p>Second, the filter “<strong>Patterns / Crystal background</strong>” overlays several randomly colored polygons to synthesize a texture that vaguely
resembles a crystal seen under a microscope. Quite useful for quickly rendering colored image backgrounds.</p>
<figure>
<a href='gmic_crystal.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_gmic_crystal.jpg" alt='Crystal background'></a>
<figcaption>
<i>Fig.5.4.4:</i> Filter “<b>Crystal background</b>” in action.
</figcaption>
</figure>

<p>And to end this long overview of new <em>G’MIC</em> filters developed since last year, let us mention “<strong>Rendering / Barnsley fern</strong>”.
This filter renders the well-known <a href="https://en.wikipedia.org/wiki/Barnsley_fern"><em>Barnsley fern</em></a> fractal.
For curious people, note that the related algorithm is available on <a href="https://rosettacode.org/wiki/Barnsley_fern#G.27MIC"><em>Rosetta Code</em></a>,
including a version written in the <em>G’MIC</em> script language, namely:</p>
<pre><code class="lang-c++"># Put this into a new file &#39;fern.gmic&#39; and invoke it from the command line, like this:
# $ gmic fern.gmic -barnsley_fern
barnsley_fern :
  1024,2048
  -skip {&quot;
      f1 = [ 0,0,0,0.16 ];           g1 = [ 0,0 ];
      f2 = [ 0.2,-0.26,0.23,0.22 ];  g2 = [ 0,1.6 ];
      f3 = [ -0.15,0.28,0.26,0.24 ]; g3 = [ 0,0.44 ];
      f4 = [ 0.85,0.04,-0.04,0.85 ]; g4 = [ 0,1.6 ];
      xy = [ 0,0 ];
      for (n = 0, n&lt;2e6, ++n,
        r = u(100);
        xy = r&lt;=1?((f1**xy)+=g1):
             r&lt;=8?((f2**xy)+=g2):
             r&lt;=15?((f3**xy)+=g3):
                   ((f4**xy)+=g4);
        uv = xy*200 + [ 480,0 ];
        uv[1] = h - uv[1];
        I(uv) = 0.7*I(uv) + 0.3*255;
      )&quot;}
  -r 40%,40%,1,1,2
</code></pre>
<p>And here is the rendering generated by this function:</p>
<figure>
<a href='3750f17a2859f582ce40c21475d886bb9295d19f.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_3750f17a2859f582ce40c21475d886bb9295d19f.jpg" alt='Barnsley Fern'></a>
<figcaption>
<i>Fig.5.4.5:</i> The “<b>Barnsley fern</b>” fractal, rendered by <i>G’MIC</i>.
</figcaption>
</figure>
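<p>For comparison, the same iterated function system translates almost line for line into Python. This is an illustrative port of the script above (same four affine maps, same probabilities), with a plain point counter instead of G’MIC’s blended plotting:</p>

```python
import numpy as np

def barnsley_fern(n=2_000_000, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((2048, 1024))
    x = y = 0.0
    for _ in range(n):
        r = rng.uniform(0, 100)
        if r <= 1:      # stem
            x, y = 0.0, 0.16 * y
        elif r <= 8:    # largest left-hand leaflet
            x, y = 0.2 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
        elif r <= 15:   # largest right-hand leaflet
            x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
        else:           # successively smaller leaflets
            x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
        u, v = int(x * 200 + 480), 2047 - int(y * 200)
        if 0 <= u < 1024 and 0 <= v < 2048:
            img[v, u] += 1.0
    return img
```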


<h1 id="6-overall-project-improvements">6. Overall project improvements</h1>
<p>All filters presented throughout this article constitute only the visible part of the <em>G’MIC</em> iceberg.
They are in fact the result of many developments and improvements made “under the hood”, i.e., directly on the code of the
<em>G’MIC</em> <a href="http://gmic.eu/reference.shtml">script language</a> interpreter.
This interpreter defines the basic language used to write all <em>G’MIC</em> filters and commands available to users.
Over the past year, a lot of work has been done to improve the performance and capabilities of this interpreter:</p>
<ul>
<li><p>The mathematical expressions evaluator has been considerably enriched and optimized, with more functions available
(especially for matrix calculus), the support of strings, the introduction of <code>const</code> variables for faster evaluation,
the ability to write <a href="https://en.wikipedia.org/wiki/Variadic_macro">variadic</a> macros, to allocate dynamic buffers, and so on.</p>
</li>
<li><p>New optimizations have also been introduced in the <a href="http://cimg.eu">CImg</a> library, including the parallelization of new functions
(via <a href="https://en.wikipedia.org/wiki/OpenMP">OpenMP</a>). This <em>C++</em> library provides the implementations of the “critical” image processing
algorithms, and its optimization has a direct impact on the performance of <em>G’MIC</em> (in this respect, note that <em>CImg</em> has also been released as a major version <strong>2.0</strong>).</p>
</li>
<li><p><em>G’MIC</em> on Windows is now compiled with a more recent version of <code>g++</code> (<strong>6.2</strong> rather than <strong>4.5</strong>), with the help of <a href="http://samjcreations.blogspot.com/"><em>Sylvie Alexandre</em></a>.
This actually has a huge impact on the performance of the compiled executables: some filters run up to <strong>60 times faster</strong> than with the previous binaries
(this is the case, for example, with the <em>Deformations / Conformal maps</em> filter discussed in section <em>5.2</em>).</p>
</li>
<li><p>Support for large <code>.tiff</code> images (the <a href="http://www.awaresystems.be/imaging/tiff/bigtiff.html"><em>BigTIFF</em></a> format, with files that can be larger than <em>4Gb</em>)
is now enabled (read and write), as it is for 64-bit floating-point <em>TIFF</em> images.</p>
</li>
<li><p>The 3D rendering engine built into <em>G’MIC</em> has also been slightly improved, with support for <a href="https://en.wikipedia.org/wiki/Bump_mapping"><em>bump mapping</em></a>.
No filter currently uses this feature, but you never know, and we like to prepare for the future!</p>
</li>
</ul>
<figure>
<a href='d135157095b38133d1b25bea7ef97a56099a2fad.jpg' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/s_d135157095b38133d1b25bea7ef97a56099a2fad.jpg" alt='Bump mapping'></a>
<figcaption>
<i>Fig.6.1:</i> Comparison of <i>3D</i> textured rendering with (<i>right</i>) and without “Bump mapping” (<i>left</i>).
</figcaption>
</figure>

<ul>
<li>And as it is always good to relax after a hard day’s work, we added the game of <a href="https://en.wikipedia.org/wiki/Connect_Four">Connect Four</a> to <em>G’MIC</em> :).
It can be launched via the shell command <code>$ gmic -x_connect4</code> or via the plug-in filter “<strong>Various / Games &amp; demos / Connect-4</strong>”.
Note that it is even possible to play against the computer, which has a decent but not unbeatable skill
(the very simple <em>AI</em> uses the <a href="https://en.wikipedia.org/wiki/Minimax"><em>Minimax</em> algorithm</a> with a two-level decision tree).</li>
</ul>
<figure>
<a href='gmic_connect4.gif' ><img src="https://pixls.us/blog/2017/06/g-mic-2-0/gmic_connect4.gif" alt='Connect four'></a>
<figcaption>
<i>Fig.6.2:</i> The game of “<b>Connect Four</b>”, as playable in <i>G’MIC</i>.
</figcaption>
</figure>
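<p>The decision rule of such a two-level <em>Minimax</em> AI is simple to express. The Python sketch below is a generic illustration, not G’MIC’s actual code: assume every (computer move, opponent reply) pair has been given a board evaluation, assume the opponent answers with the reply worst for us, and pick the move whose worst case is best:</p>

```python
def minimax_move(scores):
    # scores[i][j]: evaluation of the board after computer move i and
    # opponent reply j (higher is better for the computer).
    # The opponent is assumed to pick the reply that is worst for us...
    worst_reply = [min(replies) for replies in scores]
    # ...so we pick the move whose worst case is best.
    best = max(range(len(worst_reply)), key=worst_reply.__getitem__)
    return best, worst_reply[best]
```

A deeper search (more tree levels, with alpha-beta pruning) would make the opponent stronger; two levels are what keep the G’MIC AI beatable.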

<p>Finally, let us mention the ongoing redesign of the <em>G’MIC Online</em> web service, with a
<a href="https://gmicol.greyc.fr/beta">beta version</a> already available for testing.
This re-development of the site, done by <a href="https://www.greyc.fr/users/couronne">Christophe Couronne</a> and <a href="https://www.greyc.fr/users/robertv">Véronique Robert</a>
(both members of the <em>GREYC</em> laboratory), has been designed to better adapt to mobile devices.
The first tests are more than encouraging. Feel free to experiment and share your impressions!</p>
<h1 id="7-what-to-remember-">7. What to remember?</h1>
<p>First, version <strong>2.0</strong> of <em>G’MIC</em> is clearly an important step in the life of the project, and the recent improvements
are promising for future developments.
The number of users seems to be increasing (and they are apparently satisfied!), and we hope this will encourage open-source software developers
to integrate our new <em>G’MIC-Qt</em> interface as a plug-in for their own software.
In particular, we hope to see the new <em>G’MIC</em> in action under <em>Krita</em> soon; that would already be a great step!</p>
<p>Second, <em>G’MIC</em> continues to be an active project that evolves through meetings and discussions with members of the artist and photographer communities
(particularly those who populate the forums and <em>IRC</em> channel of <a href="https://discuss.pixls.us/"><em>pixls.us</em></a> and <a href="http://gimpchat.com/">GimpChat</a>).
You will likely be able to find us there if you need more information, or simply want to discuss things related to (open-source) image processing.</p>
<p>And while waiting for a hypothetical article about a future release of <em>G’MIC</em>, you can always follow the day-to-day progress of the project via
<a href="https://twitter.com/gmic_ip">our Twitter feed</a>.</p>
<p>Until then, long live open-source image processing!</p>
<hr>
<p><small>Credit: Unless explicitly stated, the various non-synthetic images that illustrate this post come from <a href="https://pixabay.com/en/"><em>Pixabay</em></a>.</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Happy 2nd Birthday Discuss]]></title>
            <link>https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/</guid>
            <pubDate>Fri, 12 May 2017 21:20:34 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/birthday-1208233_1920.jpg" /><br/>
                <h1>Happy 2nd Birthday Discuss</h1> 
                <h2>Time keeps on slippin'</h2>  
                <p>I was idling in our <a href="https://kiwiirc.com/client/irc.freenode.net/?nick=webuser%7C?#pixls.us">IRC</a> chat room earlier when @Morgan_Hardwood wished us all a “Happy Discuss Anniversary”.
Wouldn’t you know it, another year slipped right by!
(Surely there’s no way it could already be a year <a href="https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/">since the last birthday post</a>?
Where does the time go?)</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/7YjBImELgOY" frameborder="0" allowfullscreen></iframe>
</div>

<p>We’ve had a bunch of neat things happen in the community over the past year!
Let’s look at some of the highlights.</p>
<!--more-->
<h2 id="support"><a href="#support" class="header-link-alt">Support</a></h2>
<p>I want to start with this topic because it’s the perfect opportunity to recognize some folks who have been supporting the community financially…</p>
<p>When I started all of this I decided that I definitely didn’t want ads to be on the site anywhere.
I had gotten enough donations from my old blog and <a href="https://www.gimp.org">GIMP</a> tutorials that I could cover costs for a while entirely from those funds (I also re-did <a href="https://patdavid.net">my personal blog</a> recently and removed all ads from there as well).</p>
<p>I don’t like ads.
You don’t like ads.
We’re a big enough community that we can keep things going without having to bring those crappy things into our lives.
So to reiterate, we’re not going to run ads on the site.</p>
<p>We are hosting the main website on <a href="https://www.stablehost.com/">Stablehost</a>, the forums (<a href="https://discuss.pixls.us">discuss</a>) are on a VPS at <a href="https://www.digitalocean.com/">Digital Ocean</a>, and our file storage for discuss is out on Amazon S3 (<a href="#amazon-s3">see below</a>).
All told our costs are about $30 per month.
Not so bad!</p>
<h3 id="thank-you-"><a href="#thank-you-" class="header-link-alt">Thank You!</a></h3>
<p>Even so, we have had some folks who have donated to help us offset these costs and I want to take a moment to recognize their generosity and graciousness!</p>
<p><strong><a href="https://plus.google.com/+DimitriosPsychogios">Dimitrios Psychogios</a></strong> has been a supporter of the site since the beginning.
This past year he covered (more than) our hosting costs for the entire year, and for that I am infinitely grateful (yes, I have infinite gratitude).
It also helps that based on his postings on G+ our musical tastes are very similarly aligned.
As soon as I get the supporters page up you’re going to the top of the list!
<em>Thank you</em>, Dimitrios, for your support of the community!</p>
<p><strong>Jonas Wagner</strong> (@Jonas_Wagner) and <strong>McCap</strong> (@McCap) both donated this past year as well.
Which is doubly-awesome because they are both active in the community and have written some great content for everyone as well (@McCap is the author of the article <em><a href="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/">A Masashi Wakui look with GIMP</a></em>, and  has been active in the community since the beginning as well).</p>
<p><strong>Mica</strong> (@paperdigits) and <strong>Luka</strong> are both <em>recurring donors</em>, which I am particularly grateful for.
It really helps for planning to know we have some recurring support like that.</p>
<p>I have a bunch of donations where the donors didn’t leave me a name to use for attribution, and I don’t want to just assume it’s ok.  If you know you donated and see your first name in the list below (and are ok with me using your full name and a link if you want) then please let me know and I’ll update this post (and the donors page later).</p>
<p>These are the folks who are really making a difference by taking the time and being gracious enough to support us.
Even if you don’t want your full name out here, I know who you are and am very, very grateful and humbled by your generosity and kindness.  <strong>Thank you all so much!</strong></p>
<ul>
<li><strong>Marc W.</strong> (you rock!)</li>
<li>Ulrich P.</li>
<li>Luc V.</li>
<li>Ben E.</li>
<li>Keith A.</li>
<li>Philipp H.</li>
<li>Christian M.</li>
<li>Matthieu M.</li>
<li>Christian M.</li>
<li>Christian K.</li>
<li>Maria J.</li>
<li>Kevin P.</li>
<li>Maciej D.</li>
<li>Christian K.</li>
<li>Egbert G.</li>
<li>Michael H.</li>
<li>Jörn H.</li>
<li>Boris H.</li>
<li>Norman S.</li>
<li>David O.</li>
<li>Walfrido C.</li>
<li>Philip S.</li>
<li>David S.</li>
<li>Keith B.</li>
<li>Andrea V.</li>
<li>Stephan R.</li>
<li>David M.</li>
<li>Bastian H.</li>
<li>Chance J.</li>
<li>Luka S.</li>
<li>Nathanael S.</li>
<li>Sven K.</li>
<li>Pepijn V.</li>
<li>Benjamin W.</li>
<li>Jörg W.</li>
<li>Patrick B.</li>
<li>Joop K.</li>
<li>Alain V.</li>
<li>Egor S.</li>
<li>Samuel S.</li>
</ul>
<p>On that note:
if anyone wants to join the folks above in supporting what we’re up to, we have a page specifically for that:</p>
<p><a href="https://pixls.us/support/">https://pixls.us/support/</a></p>
<p>Remember, no amount is too small!</p>
<h2 id="libre-graphics-meeting-rio"><a href="#libre-graphics-meeting-rio" class="header-link-alt">Libre Graphics Meeting Rio</a></h2>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/Forte_de_Copacabana_panorama.jpg" alt='Forte de Copacabana, Rio'>
<figcaption>
<a title="By Gabriel Heusi/Brasil2016.gov.br (Portal Brasil 2016) [CC BY 3.0 br], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File%3AForte_de_Copacabana_panorama.jpg">By Gabriel Heusi/Brasil2016.gov.br</a>
</figcaption>
</figure>

<p>I wasn’t able to attend <a href="http://libregraphicsmeeting.org/2017/">LGM</a> this year, being held down in Rio (but <a href="https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/">the GIMP team did</a>).
That’s not to say that we didn’t have folks from the community there: Farid (@frd) from <a href="http://gunga.com.br/">Estúdio Gunga</a> was there!</p>
<p>I was able to help coordinate a presentation by Robin Mills (@clanmills) about the state (and future) of <a href="http://www.exiv2.org/">Exiv2</a>.
They’re looking for a maintainer to join the project, as Robin will be stepping down at the end of the year for studies.
If you think you’d be interested in helping out, please get in touch with Robin on the forums and let him know!</p>
<p>I also put together (quickly) a few slides on the community that were included in the “State of the Libre Graphics” presentation that kicks off the meeting (presented this year by <a href="https://www.gimp.org">GIMP</a>er Simon Budig):</p>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/LGM2017-pixls.us-0.png" alt="2017 LGM/Rio PIXLS.US State Of Slide 0">
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/LGM2017-pixls.us-1.jpg" alt="2017 LGM/Rio PIXLS.US State Of Slide 1">
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/LGM2017-pixls.us-2.jpg" alt="2017 LGM/Rio PIXLS.US State Of Slide 2">
<figcaption>
This slide deck is available in our <a href="https://github.com/pixlsus/Presentations/tree/master/LGM2017_State_Of" title="PIXLS.US GitHub">GitHub repo</a>.
</figcaption>
</figure>

<p>This was just a short overview of the community, and I think it makes sense to include it here as well.
Since we stood the forum up two years ago we’ve seen about 3.2 million pageviews and have just under 1,400 users in the community.
Which is just <em>awesome</em> to me.</p>
<p>@LebedevRI was also going to be mad if I <em>didn’t</em> take the time to at least let folks know about <a href="https://raw.pixls.us">raw.pixls.us</a>, where we currently have 693 raw files across 477 cameras.
Please, take a moment to check <a href="https://raw.pixls.us">raw.pixls.us</a> and see if we are missing (or need better) files from a camera you may have, and get us samples for testing!</p>
<h2 id="raw-pixls-us"><a href="#raw-pixls-us" class="header-link-alt">raw.pixls.us</a></h2>
<p>We set up <a href="https://raw.pixls.us">raw.pixls.us</a> so we can gather camera raw samples for regression testing of rawspeed, as well as to have a place for any other project that might need raw files to test with.
As we <a href="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/">blogged about previously</a>, the new site is also a replacement for the now defunct rawsamples.ch website.</p>
<p>Stop in and see if we’re missing a sample you can provide, or if you can provide a better (or better licensed) version for your camera.
We’re focusing specifically on <a href="https://creativecommons.org/publicdomain/zero/1.0/">CC0</a> contributions.</p>
<h2 id="welcome-digikam-"><a href="#welcome-digikam-" class="header-link-alt">Welcome digiKam!</a></h2>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/digikam-logo.jpg" alt="digiKam Logo">
</figure>

<p>As I mentioned in <a href="https://pixls.us/blog/2017/05/welcome-digikam/">my last blog post</a>, we learned that the <a href="https://www.digikam.org">digiKam</a> team was looking for a new webmaster through a post on discuss.
@Andrius posted a heads up on the digiKam 5.5.0 release <a href="https://discuss.pixls.us/t/digikam-5-5-0-released/3486">in this thread</a>.</p>
<p>Needless to say, less than a month or so later, @paperdigits had already finished up a nice new website for them!
This is something we’re really trying to help out the community with and are super glad to be able to help out the digiKam team with this.
The less time they have to worry about web infrastructure and security for it, the more time they can spend on awesome new features for their project and users.</p>
<p>Yes, we used a static site generator (<a href="http://gohugo.io/">Hugo</a> in this case), and we were also able to move their commenting system to use discuss as its back-end!
This is the same way we’re doing comments for PIXLS.US right now (scroll to the bottom of this post).</p>
<p>They’ve got <a href="https://discuss.pixls.us/c/software/digikam">their own category</a> on discuss for both general digiKam discussion as well as their linked comments from their website.</p>
<p>Speaking of using <a href="http://www.discourse.org/">discourse</a> as a commenting system…</p>
<h2 id="discourse-upstream"><a href="#discourse-upstream" class="header-link-alt">Discourse upstream</a></h2>
<p>We’ve been using <a href="http://www.discourse.org/">discourse</a> as our forum software from the beginning.
It’s a modern, open, and full-featured forum software that I think works incredibly well as a modern web application.</p>
<p>The ability to embed comments in a website that are part of the forum was one of the main reasons I went with it.
I didn’t want to expose users to unnecessary privacy concerns by embedding a third-party commenting system (<em>cough, <a href="https://disqus.com/">disqus</a>, cough</em>).
If I was going to go through the trouble of setting up a way to comment on things, I wanted to homogenize it with a full community-building effort.</p>
<p>This past year they (the discourse devs) added the ability to embed comments in multiple hosts (it was only one host when we first stood things up).
This means that we can now manage the comments for anyone else that may need them!
Of course, building out a new website for digiKam meant that this was a perfect time to test things.</p>
<p>It all works beautifully, with one minor nitpick.
The ability to <em>style</em> the embedded comments was limited to a single style for all the places that they might be embedded.
This may be fine if all of the sites look similar, but if you visit <a href="http://www.digikam.org">www.digikam.org</a> and compare it to here, you can see they are a little bit different…
(we’re on white, digikam.org is on a dark background).</p>
<p>We needed a way to isolate the styling on a per-host basis. After much help from @darix (yet <em>again</em> :)), I was finally able to hack something together that worked and get it pushed upstream (and merged)!</p>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/discourse-class.png" alt='Discourse embed class name'>
<figcaption>
I made this!
</figcaption>
</figure>


<h2 id="play-raw"><a href="#play-raw" class="header-link-alt">Play Raw</a></h2>
<p>When <a href="http://rawtherapee.com/">RawTherapee</a> migrated their official forums over to pixls they brought something really fun with them: Play Raw.
They would share a single raw file amongst the community and then have everyone process and share their results (including their processing steps and associated .pp3 settings file).</p>
<p>If you haven’t seen it yet, we’ve had quite a few Play Raw posts over the past year with all sorts of wonderful images to practice on and share!
There are portraits, children, dogs, cats, landscapes, HDR, and phở!
There are over 19 different raw files being shared right now, so come try your hand at processing (or even share a file of your own)!</p>
<p>The full list of play_raw posts can always be found here:<br><a href="https://discuss.pixls.us/tags/play_raw">https://discuss.pixls.us/tags/play_raw</a></p>
<h2 id="amazon-s3"><a href="#amazon-s3" class="header-link-alt">Amazon S3</a></h2>
<p>We <em>are</em> a photography forum, so it only made sense to make it as easy as possible for community members to upload and share images (raw files, and more).
One of the things I love about discourse is how easy it is to attach files to your posts: simply drag-and-drop them into the post editor.</p>
<p>While this is easy to do, it <em>does</em> mean that we have to store all of this data.
The VPS we use from <a href="https://www.digitalocean.com/">Digital Ocean</a> only has a 40GB SSD, which also has to hold everything the main forum needs to run.
We did have a little space for a while, but to help alleviate the local storage as a possible problem down the line, I moved our file storage out to Amazon S3.</p>
<p>This means that we can upload all we want and won’t really hit a wall with actual available storage space. 
It costs more each month than trying to store it all on local storage for the site, but then we don’t have to worry about expansion (or migration) later.
Plus our current upload size limit per file is 100MB!</p>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/s3-cost.png" alt='Amazon S3 costs'>
</figure>

<p>As you can see, we’re only looking at about $5USD/month on average in storage and transfer costs for the site with Amazon.</p>
<p>We’re also averaging about $22USD/month in hosting costs with Digital Ocean, so we’re still only about $27/month in total hosting costs.
Maybe $30 if we include the hosting for the main website which is at <a href="https://www.stablehost.com/">Stablehost</a>.</p>
<h2 id="irc"><a href="#irc" class="header-link-alt">IRC</a></h2>
<p>We’ve had an <a href="https://kiwiirc.com/client/irc.freenode.net/?nick=webuser%7C?#pixls.us">IRC room</a> for a long time (longer than <a href="https://discuss.pixls.us">discuss</a> I think), but I only just got around to including a link on the site for folks to be able to join through a nice web client (<a href="https://kiwiirc.com/">Kiwi IRC</a>).</p>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/discuss-headerbar.png" alt='Discuss header bar'>
</figure>

<p>It was included as part of an oft-requested set of links to get back to various parts of the main site from the forums.
I also added these links in the menu for the site as well (the header links are hidden when on mobile, so this way you can still access the links from whatever device you’re using):</p>
<figure>
<img src="https://pixls.us/blog/2017/05/happy-2nd-birthday-discuss/discuss-menu.png" alt="Discuss menu">
</figure>

<p>If you have your own <a href="https://en.wikipedia.org/wiki/Comparison_of_Internet_Relay_Chat_clients">IRC client</a> then you can reach us on irc.freenode.net #pixls.us.
Come and join us in the chat room!
If you’re not there you are definitely missing out on a ton of stimulating conversation and enlightening discussions!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Welcome digiKam!]]></title>
            <link>https://pixls.us/blog/2017/05/welcome-digikam/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/05/welcome-digikam/</guid>
            <pubDate>Wed, 03 May 2017 20:44:35 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/05/welcome-digikam/digikam-logo.png" /><br/>
                <h1>Welcome digiKam!</h1> 
                <h2>Lending a helping hand</h2>  
                <p>One of the goals we have here at PIXLS.US is to help Free Software projects however we can, and one of those ways is to focus on things that we can do well that might help make things easier for the projects.
Dealing with websites or community outreach is not necessarily much fun for project developers.
This is something I think we can help with, and recently we had an opportunity to do just that with the awesome folks over at the photo management project <a href="https://www.digikam.org">digiKam</a>.</p>
<!-- more -->
<p>As part of a <a href="https://discuss.pixls.us/t/digikam-5-5-0-released/3486">post announcing the release of digiKam 5.5.0</a> on <a href="https://discuss.pixls.us">discuss</a>, we learned that they were <a href="http://digikam.1695700.n4.nabble.com/digikam-org-Webmaster-wanted-td4694408.html">in need of a new webmaster</a>, and they needed something soon to migrate away from <a href="https://www.drupal.org/">Drupal</a> 6 for security reasons.
They had a rudimentary Drupal 7 theme setup, but it was severely lacking (non-responsive and not adapted to the existing content).</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2017/05/welcome-digikam/digikam-before.jpg" alt="Old digiKam website" width='960' height='783'>
<figcaption>
The previous digiKam website, running on Drupal 6.
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2017/05/welcome-digikam/digikam-after.jpg" alt="new digiKam website" width='960' height='783'>
<figcaption>
The new digiKam website!  Great work Mica!
</figcaption>
</figure>


<p>Mica (@paperdigits) reached out to Gilles Caulier and the digiKam community and offered our help, which they accepted!
At that point Mica gathered requirements from them and found in the end that a static website would be more than sufficient for their needs.
We coordinated with the <a href="https://www.kde.org/">KDE</a> folks to get a git repo setup for the new website, and rolled up our sleeves to start building!</p>
<figure>
<img src="https://pixls.us/blog/2017/05/welcome-digikam/GillesCaulier_by_Alexandre_Prokoudine.jpg" alt="Gilles Caulier by Alex Prokoudine" width='600' height='516'>
<figcaption>
<a href="https://www.flickr.com/photos/prokoudine/3371163363" title="Gilles Caulier by Alexandre Prokoudine on Flickr">Gilles Caulier</a> by <a href="http://libregraphicsworld.org">Alexandre Prokoudine</a> (<a href="https://creativecommons.org/licenses/by-nc-sa/2.0/" title="Creative Commons By-Attributions, Non-commerical, ShareAlike"><small>CC BY NC SA 2.0</small></a>)
</figcaption>
</figure>


<p>Mica chose to use the <a href="http://gohugo.io/">Hugo</a> static-site generator to build the site with.
This was something new for us, but turned out to be quite fast and fun to work with (it generates the entire digiKam site in just about 5 seconds).
Coupled with a version of the Foundation 6 blog theme we were able to get a base site framework up and running fairly quickly.
We scraped all of the old site content to make sure that we could port everything as well as make sure we didn’t <a href="https://www.w3.org/Provider/Style/URI" title="Cool URIs don&#39;t change">break any URLs</a> along the way.</p>
<p>We iterated some design stuff along the way, ported all of the old posts to markdown files, hacked at the theme a bit, and finally included comments that are now hosted on <a href="https://discuss.pixls.us">discuss</a>.
What’s wild is that we managed to pull the entire thing together in about 6 weeks total of part-time work.
The digiKam team seems happy with the results so far, and we’re looking forward to continue helping them by managing this infrastructure for them.</p>
<p>A big <strong>kudos</strong> to Mica for driving the new site and getting everything up and running.
This was really all due to his hard work and drive.</p>
<p>Also, speaking of discuss, we also have a new category created specifically for digiKam users and hackers: <a href="https://discuss.pixls.us/c/software/digikam">https://discuss.pixls.us/c/software/digikam</a>.</p>
<p>This is the same category that news posts from the website will post in, so feel free to drop in and say hello or share some neat things you may be working on with digiKam!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[GIMP is Going to LGM!]]></title>
            <link>https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/</guid>
            <pubDate>Wed, 29 Mar 2017 21:51:36 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/Forte_de_Copacabana_panorama.jpg" /><br/>
                <h1>GIMP is Going to LGM!</h1> 
                <h2>Tall and tan and young and lovely...</h2>  
<p>This year’s <a href="http://libregraphicsmeeting.org/2017/">Libre Graphics Meeting (2017)</a> is going to be held in the lovely city seen above, Rio de Janeiro, Brazil!
This is an important meeting for so many people in the Free/Libre art community as it’s one of the only times they have an opportunity to meet face to face.</p>
<p>We’ve had some folks attending the past LGM’s (<a href="https://patdavid.net/2014/05/libre-graphics-meeting-2014-in-leipzig.html">Leipzig</a> and <a href="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/#lgm">London</a>) and it’s a wonderful opportunity to spend some time with friends. (Also, <a href="https://discuss.pixls.us/users/frd/summary">@frd</a> from the community will be there!)</p>
<figure>
<img src='https://pixls.us/blog/2016/04/post-libre-graphics-meeting/LGM-flat.jpg' alt='GIMP and darktable at LGM'>
<figcaption>
<a href="https://www.gimp.org" title="The GIMP website">GIMP</a>ers, some <a href="https://www.darktable.org" title="darktable.org">darktable</a> folks, and even <a href="https://twitter.com/n8willis" title="Editor, LWN">Nate Willis</a> at the flat during LGM/London!
</figcaption>
</figure>

<p>So in the spirit of camaraderie, I have a request…</p>
<!-- more -->
<h2 id="donate"><a href="#donate" class="header-link-alt">Donate</a></h2>
<p>The <a href="https://www.gimp.org" title="The GIMP website">GIMP</a> team will be in attendance this year.  I happen to have a fondness for them so I’m asking anyone reading this to please head over and <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=gimp%40gnome%2eorg&amp;lc=US&amp;item_name=Donation%20to%20GIMP%20Project&amp;item_number=106&amp;currency_code=USD" title="Donate to GIMP using PayPal">donate to the project</a>.</p>
<figure>
<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=gimp%40gnome%2eorg&amp;lc=US&amp;item_name=Donation%20to%20GIMP%20Project&amp;item_number=106&amp;currency_code=USD" title="Donate to GIMP using PayPal"><img src="https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/wilber-big.png" alt='GIMP Wilber' width='300' height='224'></a>
</figure>

<p>That link is for the GNOME PayPal account, but there are <a href="https://www.gimp.org/donating" title="Donating to GIMP">other ways to donate</a> as well.</p>
<p>This is one of the few times that the GIMP team gets a chance to meet in person.
They use the time to hack at GIMP and to manage internal business.
The time they get to spend together is invaluable to the project and by extension everyone that uses GIMP.</p>
<p>Just look at these faces!
Surely this <a href="https://en.wikipedia.org/wiki/The_Brady_Bunch">(Brady) Bunch</a> of folks is worth helping to get a better GIMP?</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NLGG5AWJf7M?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2017/03/gimp-is-going-to-lgm/GIMPers.jpg" alt='GIMPers at LGM/London'>
<figcaption>
Left to right, top to bottom:<br> Ville, Mitch, Øyvind,<br> Simon, Liam, João,<br> Aryeom, Jehan, Michael
</figcaption>
</figure>


<h2 id="attending"><a href="#attending" class="header-link-alt">Attending</a></h2>
<p>Besides <a href="https://discuss.pixls.us/users/frd/summary">@frd</a> I’m not sure who else from the community might be attending, so if I’ve missed you I apologize!
Please feel free to use this topic to communicate and coordinate if you’d like.</p>
<p>It appears that personally I’m on a biennial schedule with attending LGM - so I’m looking forward to next year to be able to catch up with everyone!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[RawTherapee and Pentax Pixel Shift]]></title>
            <link>https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/</link>
            <guid isPermaLink="true">https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/</guid>
            <pubDate>Fri, 24 Mar 2017 19:17:10 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/nosle-lede.jpg" /><br/>
                <h1>RawTherapee and Pentax Pixel Shift</h1> 
                <h2>Supporting multi-file raw formats</h2>  
                <h2 id="what-is-pixel-shift-">What is Pixel Shift?<a href="#what-is-pixel-shift-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Modern digital sensors (with a few exceptions) use an arrangement of RGB filters over a square grid of photosites.  In a given 2x2 square of photosites, the filters allow two green, one red, and one blue color through to the photosites.  These are arranged on a grid:</p>
<figure>
<a title="By en:User:Cburnett, CC-BY-SA-3.0 or GPL, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File%3ABayer_pattern_on_sensor.svg">
<img width="512" alt="Bayer pattern on sensor" src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/512px-Bayer_pattern_on_sensor.svg.png" height='333'/>
</a>
</figure>

<p>The pattern is known as a <a href="https://en.wikipedia.org/wiki/Bayer_filter">Bayer pattern</a> (after its creator, Bryce Bayer of Eastman Kodak).  The profile view below shows how each color filter is offset across the grid.</p>
<figure>
<a title="By en:User:Cburnett, CC-BY-SA-3.0 or GPL, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File%3ABayer_pattern_on_sensor_profile.svg">
<img alt="Bayer pattern on sensor profile" src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/Bayer_pattern_on_sensor_profile.svg.png" width='512' height='328' />
</a>
</figure>

<p>Each photosite captures a single color.  In order to produce a full color representation at each pixel, the missing color values need to be interpolated from the surrounding grid.  This interpolation is referred to as <a href="https://en.wikipedia.org/wiki/Demosaicing">demosaicing</a>, and the exact method varies from algorithm to algorithm.</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/bayer-interp.png" width="250" height="250" alt='Bayer Interpolation Example'>
<figcaption>
The final RGB value for the initially Red pixel needs to be interpolated from the surrounding Blue and Green pixels.
</figcaption>
</figure>
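To make the interpolation concrete, here is a minimal sketch of bilinear demosaicing at a single photosite. It is illustrative only, not any particular converter's algorithm; the RGGB layout and the `bayer_color` helper are assumptions for the example.

```python
import numpy as np

def bayer_color(row, col):
    """Color recorded by an RGGB Bayer photosite at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic_at(raw, row, col):
    """Bilinear demosaic at one photosite: average, per channel, the
    samples in the surrounding 3x3 neighborhood.  The measured channel
    only ever matches the center, so it passes through unchanged."""
    samples = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < raw.shape[0] and 0 <= c < raw.shape[1]:
                samples.setdefault(bayer_color(r, c), []).append(raw[r, c])
    return {ch: sum(v) / len(v) for ch, v in samples.items()}

# Toy 5x5 mosaic: every R site reads 10, every G reads 20, every B reads 30.
values = {"R": 10.0, "G": 20.0, "B": 30.0}
raw = np.array([[values[bayer_color(r, c)] for c in range(5)]
                for r in range(5)])
rgb = demosaic_at(raw, 2, 2)  # a red photosite in the interior
```

Real demosaicing algorithms (AMaZE, RCD, and friends) are far more sophisticated than this averaging, which is exactly why edge artifacts and color fringing vary between them.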

<p>Unfortunately, this interpolation can cause problems:
chromatic aliasing that produces odd color fringing and roughness on edges, or a loss of detail and sharpness.</p>
<h3 id="pixel-shift">Pixel Shift<a href="#pixel-shift" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="http://us.ricoh-imaging.com/">Pentax</a>’s Pixel Shift (available on the <a href="http://www.ricoh-imaging.co.jp/english/products/k-1/">K-1</a>, <a href="http://www.ricoh-imaging.co.jp/english/products/k-3-2/">K-3 II</a>, <a href="http://www.ricoh-imaging.co.jp/english/products/kp/">KP</a>, <a href="http://www.ricoh-imaging.co.jp/english/products/k-70/">K-70</a>) attempts to alleviate some of these problems through a novel approach of capturing four images in quick succession, moving the entire camera sensor by a single pixel for each shot.  This has the effect of capturing a full RGB value at each pixel location:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/pixel-shift-example.png" width="283" height="999" alt="Pixel Shift Example Diagram">
<figcaption>
Pixel Shift shifts the sensor by one pixel in each direction to be able to generate a full set of RGB values at each photosite.
</figcaption>
</figure>


<p>This means a full RGB value for a pixel location can be created without having to interpolate from neighboring values.</p>
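As a rough sketch of the idea (not Pentax's or RawTherapee's actual code; the frame registration and the shift sequence are assumptions here), combining four registered Bayer frames of a static scene into a full-RGB image looks like this:

```python
import numpy as np

def cfa_color(row, col):
    """Color of an RGGB Bayer photosite."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def combine_pixel_shift(frames, shifts):
    """frames: four registered HxW Bayer mosaics of a static scene;
    shifts[i] is the (dy, dx) sensor offset used for frame i.  Across the
    four frames every scene pixel is sampled once in R, once in B, and
    twice in G, so a full RGB triple is read out with no interpolation;
    the two green samples are averaged (which also suppresses noise)."""
    h, w = frames[0].shape
    chan = {"R": 0, "G": 1, "B": 2}
    total = np.zeros((h, w, 3))
    count = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, shifts):
        for r in range(h):
            for c in range(w):
                k = chan[cfa_color(r + dy, c + dx)]
                total[r, c, k] += frame[r, c]
                count[r, c, k] += 1
    return total / count
```

With the one-pixel shift sequence (0,0), (0,1), (1,1), (1,0), every pixel position ends up with exactly 1, 2, and 1 samples for R, G, and B respectively.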
<h3 id="advantages">Advantages<a href="#advantages" class="header-link"><i class="fa fa-link"></i></a></h3>
<h4 id="less-noise">Less Noise<a href="#less-noise" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If you look carefully at the Bayer pattern, you’ll notice that when shifting to adjacent pixels there will always be two green values captured per pixel.  Averaging these green values helps to suppress noise that, in a normal single-shot raw file, would be interpolated and spread into neighboring pixels.</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/ps-adv-noise.png" width="640" height="640" alt="Pixel Shift Noise Reduction Example">
<figcaption>
Top: single raw frame, Bottom: Pixel Shift
</figcaption>
</figure>

<h4 id="less-moir-">Less Moiré<a href="#less-moir-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Avoiding the interpolation of pixel colors from surrounding photosites helps to reduce the appearance of Moiré in the final result:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/ps-adv-moire.png" width="640" height="640" alt="Pixel Shift Moiré Reduction Example">
<figcaption>
Top: single raw frame, Bottom: Pixel Shift
</figcaption>
</figure>


<h4 id="increased-resolution">Increased Resolution<a href="#increased-resolution" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This method is similar in concept to what was previously seen when Olympus announced their “High Resolution” mode for the OM-D E-M5 Mark II camera (or manually as we <a href="https://pixls.us/blog/2015/09/softness-and-superresolution/#a-question-of-scaling">previously described in this blog post</a>).
In that case they combine 8 frames moved by sub-pixel amounts to increase the overall resolution.
The difference here is that Olympus generates a single, combined raw file from the results, while Pixel Shift gets you access to each of the four raw files before they’re combined.</p>
<p>In each case, a higher resolution image can be created from the results:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/ps-adv-resolution.png" width="640" height="640" alt="Pixel Shift Increased Resolution Example">
<figcaption>
Top: single raw frame, Bottom: Pixel Shift
</figcaption>
</figure>


<h3 id="disadvantages">Disadvantages<a href="#disadvantages" class="header-link"><i class="fa fa-link"></i></a></h3>
<h4 id="movement">Movement<a href="#movement" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>As with most approaches for capturing multiple images and combining them, a particularly problematic area is when there are objects in motion between the frames being captured.
This is a common problem when stitching panoramic photography, when creating image stacks for noise reduction, and when combining images using methods such as Pixel Shift.</p>
<p>Although…</p>
<h2 id="the-rawtherapee-approach">The RawTherapee Approach<a href="#the-rawtherapee-approach" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Simply combining four static frames is trivial, and all the other Pixel Shift-capable software can do it without issue. The real world is not often so accommodating as a studio setup, and that is where the recent work done by <a href="https://discuss.pixls.us/users/heckflosse/summary">@Ingo</a> and <a href="https://discuss.pixls.us/users/ilias_giarimis/summary">@Ilias</a> on <a href="http://www.rawtherapee.com">RawTherapee</a> really begins to shine.</p>
<p>What they’ve been working on in RawTherapee is to improve the <em>detection of movement</em> in a scene.  There are several types of movement possible: </p>
<ul>
<li>Objects showing at different places in a scene such as fast moving cars.</li>
<li>Partly moving objects like foliage in the wind.</li>
<li>Moving objects reflecting light onto static objects in the scene.</li>
<li>Changing illumination conditions such as long exposures at sunset.</li>
</ul>
<p>All of these types of movement need to be detected to avoid the artifacts they may cause in the final shot.</p>
<p>One of the key features of Pixel Shift movement detection in RawTherapee is that it allows you to show the movement mask, so you get feedback on which regions of the image are detected as movement and which are static.  For the regions with movement RawTherapee will then use the demosaiced frame of your choice to fill them in, and for regions without movement it will use the Pixel Shift combined image with more detail and less noise.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/movemask.jpg" width='960' height='720' alt="Pixel Shift Movement Mask from RawTherapee">
<figcaption>
Unique to RawTherapee is the option to export the resulting motion mask<br>(for those that may want to do further blending/processing manually).
</figcaption>
</figure>
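The movement detection described above can be pictured with a toy example. RawTherapee's actual detection is considerably more refined; the function names and the threshold here are invented for the sketch.

```python
import numpy as np

def motion_mask(frame_a, frame_b, threshold=0.05):
    """Flag pixels where two registered frames disagree by more than a
    relative threshold.  Real detectors must also cope with noise,
    reflected light, and changing illumination between frames."""
    diff = np.abs(frame_a - frame_b)
    scale = np.maximum(frame_a, frame_b) + 1e-6  # avoid division by zero
    return diff / scale > threshold

def blend_with_mask(combined, fallback, mask):
    """Use the Pixel Shift combine where the scene is static, and the
    demosaiced fallback frame where motion was detected."""
    return np.where(mask[..., None], fallback, combined)
```

Running the detector across all frame pairs and filling the flagged regions from a single demosaiced frame is, in outline, what produces the mask shown above.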

<p>The accuracy of movement detection in RawTherapee leads to much better handling of motion artifacts, working well in places where proprietary solutions fall short.
For most cases the Automatic motion correction mode works well, but you can also fine tune the parameters in custom mode to correctly detect motion in high ISO shots.</p>
<p>Besides being the only option (barring <a href="https://github.com/tomtor/dcrawps">dcrawps</a> possibly) to process Pixel Shift files in Linux, RawTherapee has some other neat options that aren’t found in other solutions. One of them is the ability to export the actual movement mask separate from the image. This will let users generate separate outputs from RT, and to combine them later using the movement mask. Another option is the ability to choose which of the other frames to use for filling in the movement areas on the image.</p>
<h2 id="pixel-shift-support-in-other-software">Pixel Shift Support in Other Software<a href="#pixel-shift-support-in-other-software" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Pentax’s own Digital Camera Utility (a rebranded version of SilkyPix) naturally supports Pixel Shift, but as with most vendor-bundled software it can be slow, unwieldy, and a little buggy sometimes.  Having said that, the results do look good, and at least its “Motion Correction” can be used with this software.</p>
<p><a href="https://helpx.adobe.com/camera-raw/using/supported-cameras.html">Adobe Camera Raw</a> (ACR) got support for Pixel Shift files in version 9.5.1 (but doesn’t utilize the “Motion Correction”).  In fact, ACR didn’t have support at the time that <a href="https://www.dpreview.com">DPReview.com</a> looked at the feature last year, causing them to retract the article and re-post when they had a chance to use a version of ACR with support.</p>
<p>A <a href="https://www.dpreview.com/reviews/k1-pixel-shift-resolution-updated-field-test">recent look at Pixel Shift</a> processing over at DPReview.com showed some interesting results.</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597.RT-PS-1.jpg" width="640" height="428" alt="Sample Image Raw">
<figcaption>
The image used in the DPReview article. &copy;<a href="https://www.dpreview.com/reviews/k1-pixel-shift-resolution-updated-field-test">Chris M Williams</a>
</figcaption>
</figure>

<p>We’re going to look at some 100% crops from that article and compare them to the results available using RawTherapee (the latest development version, to be released as 5.1 in April).
The RawTherapee versions were set to the most neutral settings with only an exposure adjustment to match other samples better.</p>
<p>Looking first at an area of foliage with motion, the places where there are issues <a href="https://www.dpreview.com/reviews/k1-pixel-shift-resolution-updated-field-test#reviewImageComparisonWidget-52182546">become apparent</a>.</p>
<p>For reference, here is the Adobe Camera Raw (ACR) version of a single frame from a Pixel Shift file:</p>
<figure>
<img src="https://pixls-discuss.s3.amazonaws.com/original/2X/e/ed57f97d73ec2dc6f80256a6e8e57bf812682fc0.jpg" width="300" height="200" alt="Pixel Shift Comparison #1">
</figure>

<p>The results with Pixel Shift on, and motion correction on, from straight-out-of-camera (SOOC), Adobe Camera Raw (ACR), SilkyPix, and RawTherapee (RT) are decidedly mixed.  In all but the RT version, there’s a very clear problem with effective blending and masking of the frames in areas with motion:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597-Area01-combined.png" width='600' height='400' alt="Pixel Shift Comparison #2" >
<figcaption>

</figcaption>
</figure>



<hr>
<p>Things look much worse for Adobe Camera Raw when looking at high-motion areas like the water spray at the foot of the waterfall, though SilkyPix does a much better job here.</p>
<p>The ACR version of a single frame for reference:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597-Area02-ACR-MotionOff.jpg" width="300" height="200" alt="Pixel Shift Comparison #2">
</figure>

<p>Both the SOOC and SilkyPix versions handle all of the movement well here.  RawTherapee also does a great job blending the frames despite all of the movement.  Adobe Camera Raw is not doing well at all…</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597-Area02-combined.png" width='600' height='400' alt="Pixel Shift Comparison #2">
</figure>



<hr>
<p>Finally, consider a frame full of movement, such as the surface of the water.</p>
<p>The ACR version of a single frame for reference:</p>
<figure>
<img src="https://pixls-discuss.s3.amazonaws.com/original/2X/b/bfd2480b8f2f9467e0e0db33ba8e3791085a7ed3.jpg" width="300" height="200" alt="Pixel Shift Comparison #3">
</figure>

<p>In a frame full of movement the SOOC, ACR, and SilkyPix processing all struggle to combine a clean set of frames.  They exhibit a pixel pattern from the processing, and the ACR version begins to introduce odd colors:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597-Area03-combined.png" width='600' height='400' alt="Pixel Shift Comparison #3">
</figure>



<hr>
<p>As mentioned earlier, a unique feature of RawTherapee is the ability to show the motion mask. Here is an example of the motion mask for this image:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/IMGP0597.RT-PS-1-masked.jpg" width="640" height="428" alt='Pixel Shift Motion Mask'>
<figcaption>
The motion mask generated by RawTherapee for the above image.
</figcaption>
</figure>

<p>Also worth mentioning is the “Smooth Transitions” feature in RawTherapee.
When an image contains both regions with and without motion, the regions with motion are masked and filled in with data from a demosaiced frame of your choice, while the remaining regions are taken from the Pixel Shift combined image.
Blending two different sources like this can occasionally lead to harsh transitions between them.</p>
<p>For instance, a transition as processed in SilkyPix:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/smooth-transition-silkypix.png" width='439' height='568' alt='Pixel Shift Transition SilkyPix'>
</figure>

<p>RawTherapee’s “Smooth Transitions” feature does a much better job handling the transition:</p>
<figure>
<img src="https://pixls.us/articles/rawtherapee-and-pentax-pixel-shift/smooth-transition-rt.png" width='439' height='568' alt='Pixel Shift Transition RawTherapee'>
</figure>
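The masked blend described above can be sketched in a few lines. This is an illustrative approximation, not RawTherapee’s actual implementation: pixels flagged by the motion mask are filled from a single demosaiced fallback frame, static pixels keep the Pixel Shift combine, and feathering the mask is what turns a harsh seam into a smooth transition.

```python
# Illustrative sketch of motion-masked blending (not RawTherapee's code).
# combined: the Pixel Shift combined image; fallback: one demosaiced frame;
# mask: 1.0 where motion was detected, 0.0 elsewhere.

def feather(mask, radius=1):
    """Soften a 1-D motion mask with a simple box blur so the
    transition between the two sources is gradual."""
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def blend(combined, fallback, mask):
    """Per-pixel linear blend: motion areas take the fallback frame,
    static areas keep the higher-quality Pixel Shift combine."""
    return [m * f + (1.0 - m) * c
            for c, f, m in zip(combined, fallback, mask)]

combined = [100.0] * 6
fallback = [200.0] * 6
hard     = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
# A hard mask switches sources abruptly:
print(blend(combined, fallback, hard))  # [100.0, 100.0, 200.0, 200.0, 100.0, 100.0]
# A feathered mask ramps smoothly between the two sources:
print(blend(combined, fallback, feather(hard)))
```

The hypothetical `feather` radius plays the role of the transition width; a larger radius gives a softer hand-off between the two sources.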



<h3 id="in-conclusion">In Conclusion<a href="#in-conclusion" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In another example of the power and community of Free/Libre and Open Source Software, we have a great enhancement to a project driven by feedback and input from its users.  In this case, it all started with a <a href="https://discuss.pixls.us/t/support-for-pentax-pixel-shift-files-3489/2560">post on the RawTherapee forums</a>.</p>
<p>Thanks to the hard work of <a href="https://discuss.pixls.us/users/heckflosse/summary">@Ingo</a> and <a href="https://discuss.pixls.us/users/ilias_giarimis/summary">@Ilias</a>, Pentax shooters now have Pixel Shift-capable software that is not only FLOSS, but also produces better results than the proprietary solutions!</p>
<p>Not so coincidentally, community member <a href="https://discuss.pixls.us/users/nosle">@nosle</a> gave permission to use one of his PS files for everyone to try processing on the <a href="https://discuss.pixls.us/t/play-pixelshift/3142">Play pixelshift thread</a>.
If you’d like to practice, consider heading over to grab his file and get feedback from others!</p>
<p>Pixel Shift is currently in the development branch of RawTherapee and is slated for release with version 5.1.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Find us at SCaLE 15x]]></title>
            <link>https://pixls.us/blog/2017/02/find-us-at-scale-15x/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/02/find-us-at-scale-15x/</guid>
            <pubDate>Mon, 27 Feb 2017 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/02/find-us-at-scale-15x/stickers.jpg" /><br/>
                <h1>Find us at SCaLE 15x</h1> 
                  
                <p>The <a href="https://www.socallinuxexpo.org/scale/15x">Southern California Linux Expo (SCaLE) 15x</a> is  returning to the Pasadena Convention Center on March 2-5, 2017. SCaLE is one of the largest community-organized conferences in North America, with some 3,500 attendees last year.</p>
<!-- more -->
<figure>
    <a href="https://www.socallinuxexpo.org/scale/15x" title="SCaLE 15x">
        <img src="https://pixls.us/blog/2017/02/find-us-at-scale-15x/scale_15x_logo.png" alt="SCaLE Logo">
    </a>
</figure>

<p>If you’re attending the conference this year, find me, <a href="https://discuss.pixls.us/users/paperdigits/activity">@paperdigits</a>, and let’s talk shop or grab a meal!</p>
<figure>
    <img src="https://pixls.us/blog/2017/02/find-us-at-scale-15x/paperdigits.jpg" alt='@paperdigits'>
    <figcaption>Don’t judge me, it was the morning.</figcaption>
</figure>
<p>You can ping me on the <a href="https://discuss.pixls.us">forum</a>, <a href="https://twitter.com/paperdigits">on twitter</a>, or on Matrix/riot.im at @paperdigits:matrix.org.</p>

<p>If meeting isn’t enough for you, I’ll have stickers!</p>

<figure class='big-vid'>
    <img src="https://pixls.us/blog/2017/02/find-us-at-scale-15x/stickers.jpg" alt='Get yourself some stickers! ' />
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[From the Community Vol. 2]]></title>
            <link>https://pixls.us/blog/2017/02/from-the-community-vol-2/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/02/from-the-community-vol-2/</guid>
            <pubDate>Fri, 10 Feb 2017 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/02/from-the-community-vol-2/grain-lede.jpg" /><br/>
                <h1>From the Community Vol. 2</h1> 
                  
                <p>Welcome to the second installment of <em>From the Community</em>, a (hopefully) quarterly-ish blog post to highlight a few of the things our community members have been doing!</p>
<!-- more -->
<h2 id="improving-grain-simulation"><a href="#improving-grain-simulation" class="header-link-alt">Improving grain simulation</a></h2>
<p><a href="https://discuss.pixls.us/users/arctic/activity">@arctic</a> has posted some research about how to <a href="https://discuss.pixls.us/t/lets-improve-grain/2709">better simulate grain in our digital images</a> and the ensuing conversation is both fascinating and way above my head! This discussion is thus far raw processor independent and more input and code is welcome!</p>
<figure class='big-vid'>
    <img src='https://pixls-discuss.s3.amazonaws.com/original/2X/4/443723d5e75f6eedd0a0aa13bdf738af805e101d.png' alt='Examples of grain from raw processing programs'>
</figure>

<h2 id="a-tutorial-on-rbg-color-mixing"><a href="#a-tutorial-on-rbg-color-mixing" class="header-link-alt">A tutorial on RGB color mixing</a></h2>
<p>We’ve somewhat recently welcomed the painters into the fold on the <a href="https://discuss.pixls.us/c/digital-painting">pixls’ forum</a> and <a href="https://discuss.pixls.us/users/Elle/activity">@Elle</a> rewarded us all with a tutorial on RGB color mixing. She delves into subjects such as mixing color pigments like a traditional painter and how to handle that in the digital darkroom. You can <a href="https://discuss.pixls.us/t/a-short-tutorial-on-rgb-color-mixing-and-glazing-grids/2961">read the whole article here</a>.</p>
<h2 id="working-to-support-pentax-pixel-shift-files-in-rawtherapee"><a href="#working-to-support-pentax-pixel-shift-files-in-rawtherapee" class="header-link-alt">Working to support Pentax Pixel Shift files in RawTherapee</a></h2>
<p>There has been a lot of on-going work to <a href="https://discuss.pixls.us/t/support-for-pentax-pixel-shift-files-3489/2560">bring support for Pentax Pixel Shift files to RawTherapee</a>; the thread has now reached 234 posts and it is inspiring to see the community and developers coming together to bring support for an interesting technology. The feature set has been evolving pretty rapidly and it will be exciting when it makes it to a stable release.</p>
<figure class='big-vid'>
    <img src='https://pixls-discuss.s3.amazonaws.com/original/2X/d/d42ce8c659f6fe795d7993c6ee8b3a17b15258dd.png' alt='An example pixel shift file'>
</figure>

<h2 id="midi-controller-support-for-darktable"><a href="#midi-controller-support-for-darktable" class="header-link-alt">MIDI controller support for darktable</a></h2>
<p>Some preliminary work has begun to bring generic <a href="https://discuss.pixls.us/t/midi-controller-for-darktable/2582/47">MIDI controller support to darktable</a>. The funding for the MIDI controller used to spur the development of this feature came directly from forum members <a href="https://pixls.us/support/">giving to further community causes</a>. Once the darktable developers are finished with the MIDI controller, it’ll be passed along to other projects’ developers to help them implement support as well!</p>
<figure class='big-vid'>
    <img src='https://pixls-discuss.s3.amazonaws.com/original/2X/5/5662e17ae67735964d76e67aaa59dfff706dda14.jpg' alt='A Korg midi controller'>
</figure>

<h2 id="methods-for-dealing-with-clipped-highlights"><a href="#methods-for-dealing-with-clipped-highlights" class="header-link-alt">Methods for dealing with clipped highlights</a></h2>
<p><a href="https://discuss.pixls.us/users/Morgan_Hardwood/activity">@Morgan_Hardwood</a> has written a <a href="https://discuss.pixls.us/t/dealing-with-clipped-highlights-an-example/2976">very nice post detailing several methods for dealing with clipped highlights in RawTherapee</a>. These include tone-mapping, highlights and shadows, and using the CIECAM02 mode.</p>
<figure class='big-vid'>
    <img src='https://pixls-discuss.s3.amazonaws.com/original/2X/5/5f20c7ff6ae3ef7f08e00ce05fc9944251266d84.jpg' alt='Working with clipped highlights'>
</figure>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[New Year, New Raw Samples Website]]></title>
            <link>https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/</guid>
            <pubDate>Thu, 12 Jan 2017 17:10:38 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/lede_IMG_5355.jpg" /><br/>
                <h1>New Year, New Raw Samples Website</h1> 
                <h2>A replacement for rawsamples.ch</h2>  
                <p>Happy New Year, and I hope everyone has had a wonderful holiday!</p>
<p>We’ve been busy working on various things ourselves, including migrating <a href="http://rawpedia.rawtherapee.com">RawPedia</a> to a new server as well as building a replacement raw sample database/website to alleviate the problems that <a href="http://rawsamples.ch">rawsamples.ch</a> was having…</p>
<!-- more -->
<h2 id="rawsamples-ch-replacement"><a href="#rawsamples-ch-replacement" class="header-link-alt">rawsamples.ch Replacement</a></h2>
<p><a href="http://rawsamples.ch">Rawsamples.ch</a> is a website with the goal to:</p>
<blockquote>
<p> …provide RAW-Files of nearly all available Digitalcameras mainly to software-developers.  [sic]</p>
</blockquote>
<p>It was created by Jakob Rohrbach and had been running since March 2007, having amassed over 360 raw files in that time from various manufacturers and cameras. Unfortunately, back in 2016 the site was hit with a SQL-injection that ended up corrupting the database for the <a href="https://www.joomla.org/">Joomla</a> install that hosted the site. To compound the pain, there were no database backups… :(</p>
<p>On the good side, the <a href="https://pixls.us">PIXLS.US</a> community has some dangerous folks with idle hands. Our friendly, neighborhood @andabata (<a href="https://www.flickr.com/photos/andabata" title="andabata&#39;s Flickr page">Kees Guequierre</a>) had some time off at the end of the year and a desire to build something. You may know @andabata as the fellow responsible for the super-useful <a href="https://dtstyle.net/">dtstyle</a> website, which is chock full of <a href="http://darktable.org">darktable</a> styles to peruse and download (if you haven’t heard of it before &ndash; you’re welcome!). He’s also my go-to for macro photography and is responsible for this awesome image used on a slide for the <a href="http://libregraphicsmeeting.org/2016/">Libre Graphics Meeting</a>:</p>
<figure>
<img src="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/pixls-11.jpg" alt='PIXLS.US LGM Slide'>
</figure>

<p>Luckily, he decided to build a site where contributors could upload sample raw files from their cameras for everyone to use &ndash; particularly developers. We downloaded the archive of the raw files kept at rawsamples.ch to include with files that we already had. The biggest difference between the files from rawsamples.ch and <a href="https://raw.pixls.us">raw.pixls.us</a> is the licensing.  The existing files, and the preference for any new contributions, are licensed as <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero - Public Domain</a> (as opposed to <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" title="Creative Commons Attribution-NonCommercial-ShareAlike">CC-BY-NC-SA</a>).</p>
<p>After some hacking, with input and guidance from <a href="http://darktable.org">darktable</a> developer <a href="https://github.com/LebedevRI">Roman Lebedev</a>, the site was finally ready.
The repository for it can be found on GitHub: <a href="https://github.com/pixlsus/raw">raw.pixls.us repo</a>.</p>
<h2 id="raw-pixls-us"><a href="#raw-pixls-us" class="header-link-alt">raw.pixls.us</a></h2>
<p>The site is now live at <a href="https://raw.pixls.us">https://raw.pixls.us</a>.</p>
<p>You can <a href="https://raw.pixls.us#repo">look at the submitted files</a> and search/sort through all of them (and download the ones you want).</p>
<p>In addition to browsing the archive, it would be fantastic if you were able to supplement the database by uploading sample images.  Many of the files from the rawsamples.ch archive are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" title="Creative Commons Attribution-NonCommercial-ShareAlike">CC-BY-NC-SA</a>, but we’d rather have the files licensed <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero - Public Domain</a>.  CC0 is preferable because if the sample raw files are separated from the database, they can safely be redistributed without attribution. So if you have a camera that is already in the list with the more restrictive license, then please consider uploading a replacement for us!</p>
<p><strong>We are looking for shots that are:</strong></p>
<ul>
<li>Lens mounted on the camera</li>
<li>Lens cap off</li>
<li>In focus</li>
<li>Properly exposed (not over/under)</li>
<li>Landscape orientation</li>
<li>Licensed under the <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero</a></li>
</ul>
<p><strong>We are <em>not</em> looking for:</strong></p>
<ul>
<li>Series of images with different ISO, aperture, shutter speed, white balance, or lighting<br>(Even if it’s a shot of a color target)</li>
<li>DNG files created with Adobe DNG Converter</li>
</ul>
<p>Please take a moment and see if you can provide samples to help the developers!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Welcome Digital Painters]]></title>
            <link>https://pixls.us/blog/2016/12/welcome-digital-painters/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/12/welcome-digital-painters/</guid>
            <pubDate>Mon, 05 Dec 2016 21:50:29 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/lede_Fisherman.jpg" /><br/>
                <h1>Welcome Digital Painters</h1> 
                <h2>You mean there's art outside photography?</h2>  
                <p>Yes, there really is art outside photography. :)</p>
<p>The history and evolution of painting has undergone a similar transformation as most things adapting to a digital age. As photographers, we adapted techniques and tools commonly used in the darkroom to software, and found new ways to extend what was possible to help us achieve a vision.  Just as we tried to adapt skills to a new environment, so too did traditional artists, like painters. </p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/patdavid-by-deveze.jpg" alt='Pat David Painting by Gustavo Deveze' width='400' height='470'>
<figcaption>
<a href="https://pixls.us/images/Pat-David-Headshot-Crop-2048-Q60.jpg" title="Pat David&#39;s Headshot">My headshot</a>, as painted by <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>
</figcaption>
</figure>

<p>These artists adapted by not only emulating the results of various techniques, but by pushing forward the boundaries of what was possible through these new (<em>Free Software</em>) tools.</p>
<h2 id="impetus"><a href="#impetus" class="header-link-alt">Impetus</a></h2>
<p>Digital painting with Free Software has lacked a good outlet for collaboration, one that opens the discussion for others to learn from and participate in.  This is similar to the situation the Free Software + photography world was in, which prompted the creation of <a href="https://pixls.us">pixls.us</a>.</p>
<p>Due to this, both <a href="http://americogobbo.com.br">Americo Gobbo</a> and <a href="http://ninedegreesbelow.com/">Elle Stone</a> reached out to us to see if we could create a new category in the community about Digital Painting with a focus on promoting serious discussion around techniques, processes, and associated tools.</p>
<p>Both of them have been working hard on advancing the capabilities and quality of various Free Software tools for years now.  Americo brings with him the interest of other painters who want to help accelerate the growth and adoption of Free Software projects for painting (and more) in a high-quality and professional capacity. A little background about them:</p>
<p><strong><a href="http://americogobbo.com.br">Americo Gobbo</a></strong> studied Fine Arts in Bologna, Italy. Today he lives and works in Brazil, where he continues to develop studies and create experimentation with painting and drawing mainly within the digital medium in which he tries to replicate the traditional effects and techniques from the real world to the virtual.</p>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Imaginary Landscape - Americo Gobbo.png" alt='Imaginary Landscape Painting by Americo Gobbo' width='610' height='377'>
<figcaption>
Imaginary Landscape - Wet sketches, experiments on GIMP 2.9.+ <br>
<a href="http://americogobbo.com.br">Americo Gobbo</a>, 2016. 
</figcaption>
</figure>

<p><strong><a href="http://ninedegreesbelow.com/">Elle Stone</a></strong> is an amateur photographer with a long-standing interest in the history of photography and print making, and in combining painting and photography. She’s been contributing to GIMP development since 2012, mostly in the areas of color management and proper color mixing and blending.</p>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Leaves in May - Elle Stone.jpg" alt='Leaves in May Image by Elle Stone' width='480' height='626'>
<figcaption>
Leaves in May, GIMP-2.9 (GIMP-CCE)<br> 
<a href="http://ninedegreesbelow.com/">Elle Stone</a>, 2016.
</figcaption>
</figure>

<h2 id="artists"><a href="#artists" class="header-link-alt">Artists</a></h2>
<p>With this introductory post to the new Digital Painting category we feature Gustavo Deveze, a visual artist using free software. Deveze’s work is characterized by mixing different media and techniques. In future posts we want to continue featuring artists who use free software.</p>
<h3 id="gustavo-deveze"><a href="#gustavo-deveze" class="header-link-alt">Gustavo Deveze</a></h3>
<p>Gustavo Deveze is a visual artist and lives in Buenos Aires. He trained as a draftsman at the National School of Fine Arts “Manuel Belgrano”, and filmmaker at <a href="http://idac.edu.ar/">IDAC - Instituto de Arte Cinematográfica</a> in Avellaneda, Argentina.</p>
<p>His works utilize different materials and supports, and have been released by various publishers, although in recent years he has worked mainly in digital format and with free software.
He has participated in national and international shows and exhibitions of graphics and cinema, earning many awards. His latest exposition can be seen on issuu.com:
<a href="https://issuu.com/gustavodeveze/docs/inadecuado2edicion">https://issuu.com/gustavodeveze/docs/inadecuado2edicion</a></p>
<p>Website: <a href="http://www.deveze.com.ar">http://www.deveze.com.ar</a></p>
<ul>
<li>Blog: <a href="http://jeneverito.blogspot.com.ar/">http://jeneverito.blogspot.com.ar/</a></li>
<li>Google+: <a href="https://plus.google.com/107589083968107443043">https://plus.google.com/107589083968107443043</a></li>
<li>Facebook: <a href="https://www.facebook.com/gustavo.deveze">https://www.facebook.com/gustavo.deveze</a></li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/The Emperors happiness.jpg" title="Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze" alt="Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze" width='640' height='640'>
<figcaption>Cudgels and Bootlickers: The Emperor’s happiness - <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Lets be clear.jpg"  title="Let's be clear: the village's idiot is not tall... - Gustavo Deveze" alt="Let's be clear: the village's idiot is not tall... - Gustavo Deveze" width='640' height='640'>
<figcaption>Let’s be clear: the village’s idiot is not tall… - <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>.
</figcaption>
</figure>


<h2 id="digital-painting-category"><a href="#digital-painting-category" class="header-link-alt">Digital Painting Category</a></h2>
<p>The new Digital Painting category is for discussing painting techniques, processes, and associated tools in a digital environment using Free/Libre software. Some relevant topics might include:</p>
<ul>
<li><p>Emulating non-digital art, drawing on diverse historical and cultural genres and styles of art.</p>
</li>
<li><p>Emulating traditional “wet darkroom” photography, drawing on the rich history of photographic and printmaking techniques.</p>
</li>
<li><p>Exploring ways of making images that were difficult or impossible before the advent of new algorithms and fast computers to run them on, including averaging over large collections of images.</p>
</li>
<li><p>Discussion of topics that transcend “just photography” or “just painting”, such as composition, creating a sense of volume or distance, depicting or emphasizing light and shadow, color mixing, color management, and so forth.</p>
</li>
<li><p>Combining painting and photography: Long before digital image editing artists already used photographs as aids to and part of making paintings and illustrations, and photographers incorporated painting techniques into their photographic processing and printmaking.</p>
</li>
<li><p>An important goal is also to encourage artists to submit tutorials and videos about Digital Painting with Free Software and to also submit high-quality finished works.</p>
</li>
</ul>
<h2 id="say-hello-"><a href="#say-hello-" class="header-link-alt">Say Hello!</a></h2>
<p>Please feel free to stop into the new <a href="https://discuss.pixls.us/c/digital-painting">Digital Painting category</a>, introduce yourself, and say hello! I look forward to seeing what our fellow artists are up to.</p>
<p><small>All images not otherwise specified are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC-BY-NC-SA</a>.</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Masashi Wakui look with GIMP]]></title>
            <link>https://pixls.us/articles/a-masashi-wakui-look-with-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-masashi-wakui-look-with-gimp/</guid>
            <pubDate>Mon, 28 Nov 2016 19:25:21 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/lede_Akihabara.jpg" /><br/>
                <h1>A Masashi Wakui look with GIMP</h1> 
                <h2>A color bloom fit for night urban landscapes</h2>  
<p>This tutorial explains how to achieve an effect based on the post processing by <a href="https://www.flickr.com/photos/megane_wakui/">photographer Masashi Wakui</a>.  His primary subjects appear as urban landscape views of Japan, where he uses some pretty aggressive color toning to complement his scenes, along with a soft ‘bloom’ effect on the highlights. The results evoke a strong feeling of an almost cyberpunk or futuristic aesthetic (particularly for fans of <a href="http://www.imdb.com/title/tt0083658/">Blade Runner</a> or <a href="http://www.imdb.com/title/tt0094625">Akira</a>!).</p>
<figure>
<a href="https://www.flickr.com/photos/megane_wakui/24803565399/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c8.staticflickr.com/2/1706/24803565399_6b41ea3a17_z.jpg" width="640" height="426" alt="Untitled"></a>

<a href="https://www.flickr.com/photos/megane_wakui/24405269789/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c6.staticflickr.com/2/1464/24405269789_4a80f97545_z.jpg" width="640" height="427" alt="Untitled"></a>

<a href="https://www.flickr.com/photos/megane_wakui/22817821874/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c3.staticflickr.com/1/742/22817821874_267a642ff9_z.jpg" width="640" height="427" alt="Untitled"></a>
</figure>

<p>This tutorial started its life in the <a href="https://discuss.pixls.us/t/technique-inspired-by-masashi-wakui-post/2618" title="Technique inspired by masashi wakui post">pixls.us forum</a>, inspired by <a href="https://discuss.pixls.us/t/achieve-the-masashi-wakui-look/634" title="Achieve the Masashi Wakui look">a forum post</a> seeking assistance on replicating the color grading and overall look/feel of Masashi’s photography.</p>
<h2 id="prerequisites">Prerequisites<a href="#prerequisites" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To follow along, you will need a couple of plugins for GIMP.</p>
<p>The <a href="http://registry.gimp.org/node/28644">Luminosity Mask</a> filter will be used to target color grading to specific tones. You can find out more about <em>luminosity masks</em> in GIMP at <a href="http://blog.patdavid.net/2011/10/getting-around-in-gimp-luminosity-masks.html">Pat David’s blog post</a> and his <a href="http://blog.patdavid.net/2013/11/getting-around-in-gimp-luminosity-masks.html">follow-up blog post</a>.  If you need to install the script, directions can be found (along with the scripts) at the <a href="https://github.com/pixlsus/GIMP-Scripts#installing-gimp-scripts-scheme-scm">PIXLS.US GIMP scripts git repository</a>.</p>
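The idea behind these masks can be sketched numerically. The following is a generic illustration of one common construction, not the script’s actual code: the L mask is the image’s luminance, D is its inverse, and narrower masks such as the DD mask used later in this tutorial are commonly built by intersecting a mask with itself via a per-pixel multiply, which restricts the selection to the darkest tones.

```python
# Generic sketch of luminosity-mask construction (not the GIMP script itself).
# Pixel values are normalized to the range 0.0-1.0.

def luminosity_masks(lum):
    """Build basic masks from a list of luminance values:
    L selects the lights, D the darks, and DD narrows D to the
    darker darks by intersecting D with itself (per-pixel multiply)."""
    L = list(lum)
    D = [1.0 - v for v in L]   # invert: dark pixels get high mask values
    DD = [d * d for d in D]    # intersect D with itself: darker darks
    return {"L": L, "D": D, "DD": DD}

masks = luminosity_masks([0.1, 0.5, 0.9])
# A shadow pixel (0.1) is selected strongly by D (0.9) and DD (0.81),
# while a highlight pixel (0.9) is nearly excluded from both.
```

Applied as a layer mask, DD lets an adjustment act on the deepest shadows while leaving midtones and highlights nearly untouched, which is exactly how the shadow toning is targeted in the steps below.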
<p>You will also need the <a href="http://registry.gimp.org/node/11742">Wavelet decompose</a> plugin. The easiest way to get this plugin is to use the one available in <a href="https://gmic.eu">G’MIC</a>. As a bonus you’ll get access to many other incredible filters as well! Once you’ve installed <a href="https://gmic.eu">G’MIC</a> the filter can be found under<br><code>Details → Split details [wavelets]</code>.</p>
<p>We will do some basic toning and then apply GIMP’s wavelet decompose filter to do some magic.
Two things will be used from the wavelet decompose results:</p>
<ul>
<li>the residual</li>
<li>the coarsest wavelet scale (number 8 in this case)</li>
</ul>
<p>The basic idea is to use the residual of the wavelet decompose filter to color the image. What this does is average and blur the colors. The trick strengthens the effect of the surroundings being colored by the lights. The number of wavelet scales to use depends on the pixel size of the picture; the relative size of the coarsest wavelet scale compared to the picture is the defining parameter. The wavelet scale 8 will then produce overemphasised local contrasts, which will accentuate the lights further. This works nicely in pictures with lights, as the brightest areas will be around lights. Used on a daytime picture, this effect will also accentuate brighter areas, which will lead to a kind of “glow” effect. I tried this as well and it looks good on some pictures while on others it just looks wrong. Try it!</p>
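Conceptually, the split can be sketched with a toy one-dimensional example. This is only an illustration in the spirit of the filter, not G’MIC’s actual algorithm: each detail scale is the difference between two successive blurs, the residual is the coarsest blur, and summing all the parts reconstructs the original exactly, which is why individual scales can be blended back in selectively.

```python
# Toy multi-scale "split details" sketch (not G'MIC's actual algorithm).

def box_blur(signal, radius):
    """Simple box blur; stands in for the wavelet smoothing kernel."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_details(signal, scales):
    """Detail scale k is blur(k) - blur(k+1); the residual is the
    coarsest blur. Summing all parts gives the original back."""
    details = []
    current = list(signal)
    for _ in range(scales):
        blurred = box_blur(current, 1)
        details.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return details, current  # (detail scales, residual)

signal = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]
details, residual = split_details(signal, 3)
recon = list(residual)
for d in details:
    recon = [r + v for r, v in zip(recon, d)]
# recon equals the original signal (up to floating-point error)
```

The residual carries only the broad, averaged color and tone, which is exactly why it works for the color-averaging trick described above.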
<p>We will be applying all the following steps to this picture, taken in Akihabara, Tokyo.</p>
<figure class="big-vid">
    <a href="Akihabara_original.jpg">
      <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_base.jpg" alt="The unaltered photograph" width="960" height="590">
    </a>
    <figcaption>
    The starting image (<a href='Akihabara_original.jpg' title='Download the full resolution version to follow along'>download full resolution</a>).
    </figcaption>
</figure>

<ol>
<li><p>Apply the <em>luminosity mask</em> filter to the base picture. We will use this later.</p>
<p><span class='Cmd'>Filters → Generic → Luminosity Masks</span></p>
</li>
<li><p>Duplicate the base picture (Ctrl+Shift+D).</p>
<p><span class='Cmd'>Layer → Duplicate Layer</span></p>
</li>
<li><p>Tone the shadows of the duplicated picture using the <em>tone curve</em> by lowering the reds in the shadows. If you want your shadows to be less green, slightly raise the blues in the shadows.</p>
<p><span class='Cmd'>Colors → Curves</span></p>
<figure>
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Curves_toning.png" alt="The toning curves" width="372" height="526">
</figure>

<figure class="big-vid">
  <a href="Akihabara_tonedshadows.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_tonedshadows_sm.jpg" alt="The photograph with the toning curve applied" width="900" height="553">
  </a>
</figure>
</li>
<li><p>Apply a <em>layer mask</em> to the duplicated and toned picture. Choose the DD luminosity mask from a channel.</p>
<p><span class='Cmd'>Layer → Mask → Add Layer Mask</span></p>
<figure>
 <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Mask-DD.png" alt='Luminosity Mask Added' width='293' height='370'>
</figure>
</li>
<li><p>With both layers visible, create a new layer from what is visible. Call this layer the “blended” layer.</p>
<p><span class='Cmd'>Layer → New from Visible</span></p>
<figure class="big-vid">
  <a href="Akihabara_blended.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_blended_sm.jpg" alt="The photograph after the blended layer" width='900' height='553'>
  </a>
</figure>
</li>
<li><p>Apply the <em>wavelet decompose</em> filter to the “blended” layer and choose 9 as number of detail scales.  Set the G’MIC <em>output</em> mode to “New layer(s)” (see below).</p>
<p><span class='Cmd'>Filters → G’MIC<br>
Details → Split Details [wavelets]</span></p>
<figure class='big-vid'>
  <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/gmic-wavelet.png" alt="G'MIC Split Details Wavelet Decompose dialog" width='900' height='457'>
<figcaption>
Remember to set G’MIC to output the results on <em>New Layer(s)</em>.
</figcaption>
</figure>
</li>
<li><p>Make the <strong>blended</strong> and <strong>blended [residual]</strong> layers visible. Then set the mode of the <strong>blended [residual]</strong> layer to <em>color</em>. This will give you a picture with averaged, blurred colors.</p>
<figure class="big-vid">
  <a href="Akihabara_color_100.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_color_100_sm.jpg" alt="The fully colored photograph" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Turn the opacity of the <strong>blended [residual]</strong> down to 70%, or any other value to your taste, to bring back some color detail.</p>
<figure class="big-vid">
  <a href="Akihabara_color_70.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_color_70_sm.jpg" alt="The partially colored photograph" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Turn on the <strong>blended [scale #8]</strong> layer, set the mode to <em>grain&nbsp;merge</em>, and see how the lights start shining. Adjust opacity to taste.</p>
<figure class="big-vid">
  <a href="Akihabara_scale_8.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_scale_8_sm.jpg" alt="The augmented contrast layer" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Optional: Turn on wavelet scale 3 (or any other scale) to sharpen the picture and blend to taste.</p>
</li>
<li><p>Make sure the following layers are visible:</p>
<ul>
<li>blended</li>
<li>blended [residual]</li>
<li>blended [scale #8]</li>
<li>any other wavelet scale you want to use for sharpening</li>
</ul>
</li>
<li><p>Make a new layer from visible</p>
<p><span class='Cmd'>Layer → New from Visible</span></p>
</li>
<li><p>Raise and slightly crush the shadows using the tone curve.</p>
<figure>
   <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Curves_raiseshadows.png" alt='Raise the shadow curve' width='372' height='526'>
</figure>
</li>
<li><p>Optional: Adjust saturation to taste. If the lights are predominantly white and the
colors come mainly from other objects, the residual will be washed out, as is
the case with this picture.</p>
<p>I noticed that the reds and yellows were very dominant compared to the greens and blues, so using the <strong>Hue-Saturation</strong> dialog I raised the master saturation by <em>+70</em>, lowered the yellow saturation by <em>-50</em>, and lowered the red saturation by <em>-40</em>, all with an overlap of <em>60</em>.</p>
</li>
</ol>
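<p>The recombination above is just per-pixel arithmetic on the layers. As a rough illustration only (this uses a simple box blur and one grayscale channel, not GIMP’s or G’MIC’s actual wavelet kernel), here is a numpy sketch of a wavelet-style decomposition plus the grain-merge formula used for the detail scales:</p>

```python
import numpy as np

def blur(img, radius):
    """Separable box blur; a stand-in for the real wavelet blur kernel."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def wavelet_decompose(img, n_scales):
    """Split img into detail scales plus a blurred residual.
    The scales and the residual sum back to the original exactly."""
    scales, current = [], img
    for i in range(n_scales):
        blurred = blur(current, 2 ** i)
        scales.append(current - blurred)  # detail in this frequency band
        current = blurred
    return scales, current                # 'current' is now the residual

def grain_merge(base, layer, opacity=1.0):
    """GIMP grain merge in 0..1 float: base + layer - mid grey."""
    merged = base + layer - 0.5
    return np.clip(base + opacity * (merged - base), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                # one grayscale channel for the demo
scales, residual = wavelet_decompose(img, 4)
# Detail layers are stored offset to mid grey, so grain-merging a scale
# adds its local contrast back on top of the image.
boosted = grain_merge(img, scales[3] + 0.5, opacity=0.5)
```

Setting <code>opacity</code> below 1.0 corresponds to lowering the layer opacity in GIMP, as with the 70% residual layer above.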
<p>The final result:</p>
<figure class="big-vid">
      <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_final_sm.jpg" alt="The final image!" width="960" height="590" data-swap-src="Akihabara_base.jpg">
    <figcaption>
    The final result.  (Click to compare to original.)<br>
    <a href="Akihabara_final.jpg">Download the full size result.</a>
    </figcaption>
</figure>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Giving Thanks]]></title>
            <link>https://pixls.us/blog/2016/11/giving-thanks/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/giving-thanks/</guid>
            <pubDate>Tue, 22 Nov 2016 16:16:49 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/11/giving-thanks/Thanksgiving-Brownscombe-1123.jpg" /><br/>
                <h1>Giving Thanks</h1> 
                <h2>For an awesome community!</h2>  
                <p>Here in the U.S., we have a big holiday coming up this week: <a href="https://en.wikipedia.org/wiki/Thanksgiving_(United_States)">Thanksgiving</a>.
Serendipitously, this holiday also happens to fall when a few neat things are happening around the community, and what better time is there to recognize some folks and to give thanks of our own?  <em>No time like the present!</em></p>
<!-- more -->
<h2 id="a-special-thanks"><a href="#a-special-thanks" class="header-link-alt">A Special Thanks</a></h2>
<p>I feel a special “Thank You” should first go to a photographer and fantastic supporter of the community, <a href="https://plus.google.com/+DimitriosPsychogios">Dimitrios Psychogios</a>.  Last year for our trip to <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">Libre Graphics Meeting, London</a> he stepped up with an awesome donation to help us bring some fun folks together.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/11/giving-thanks/LGM2016-Crew.jpg" alt='LGM2016 Dinner'>
<figcaption>
Fun folks together.<br>
Mairi, the darktable nerds, a RawTherapee nerd, and a PhotoFlow nerd.<br>
(and the nerd taking the photo, patdavid)
</figcaption>
</figure>

<p>This year he was incredibly kind by offering a donation to the community (completely unsolicited) that covers our hosting and infrastructure costs for an entire year!  So on behalf of the community, <strong>Thank You for your support, Dimitrios</strong>!</p>
<p>I’ll be creating a page soon that will list our supporters as a means of showing our gratitude. Speaking of supporters and a new page on the site…</p>
<h2 id="a-support-page"><a href="#a-support-page" class="header-link-alt">A Support Page</a></h2>
<p>Someone asked in a forum post about the possibility of donating to the community.  We were <a href="https://discuss.pixls.us/t/midi-controller-for-darktable/2582">talking about providing support</a> in <a href="http://www.darktable.org">darktable</a> for using a midi controller deck, and the costs for some of the options weren’t too extravagant.  This got us thinking that enough small donations could probably cover something like this pretty easily, and if it was community hardware we could make sure it got passed around to each of the projects interested in creating support for it.</p>
<figure>
<img src="https://pixls.us/blog/2016/11/giving-thanks/nanokontrol2.jpg" alt='KORG NanoControl2'>
<figcaption>
An example midi-controller that we might get support<br>for in darktable and other projects.
</figcaption>
</figure>

<p>That conversation had me thinking about ways to allow folks to support the community.  In particular, ways to make it easy to provide support on an on-going basis if possible (in addition to simple, single donations).  There are goal-oriented options out there that folks are probably already familiar with (Kickstarter, Indiegogo and others) but the model for us is less goal-oriented and more about continuous support. </p>
<p>Patreon was an option as well (and I already had a skeleton Patreon account set up), but the fees were just too much in the end.  They wanted a flat 5% along with the regular PayPal fees.  The general consensus among the staff was that we wanted to maximize the funds getting to the community.</p>
<p>The best option in the end was to create a merchant account on PayPal and manually set up the various payment options.  I’ve set them up similar to how a service like Patreon might run, with four different <em>recurring</em> funding levels and an option for a single one-time payment of whatever a user would like.  Recurring levels are nice because they make it easier to plan ahead.</p>
<h3 id="we-re-not-asking"><a href="#we-re-not-asking" class="header-link-alt">We’re Not Asking</a></h3>
<p>Our requirements for the infrastructure of the site are modest and we haven’t actively pursued support or donations for the site before.  <em>That hasn’t changed.</em></p>
<p>We’re not asking for support now.  The <em>best</em> way that someone can help the community is by <em>being an active part of it.</em></p>
<blockquote>
<p>Engaging others, sharing what you’ve done or learned, and helping other users out wherever you can. This is the best way to support the community.</p>
</blockquote>
<p>I purposely didn’t talk about funding before because I don’t want folks to have to worry or think about it.  And before you ask: no, we are not and will not run any advertising on the site. I’d honestly rather just keep paying for things out of my pocket instead.</p>
<p>We’re not asking for support, <em>but we’ll accept it</em>.</p>
<p>With that being said, I understand that there’s still some folks that would like to contribute to the infrastructure or help us to get hardware to add support in projects and more.  So if you do want to contribute, the page for doing so can be found here:</p>
<p><a href="https://pixls.us/support">https://pixls.us/support</a></p>
<p>There are four recurring funding levels of $1, $3, $5, and $10 per month.
There is also a one-time contribution option as well.</p>
<p>We also have an <a href="https://www.amazon.com//ref=as_li_ss_tl?ref_=nav_custrec_signin&amp;&amp;linkCode=ll2&amp;tag=pixls.us-20&amp;linkId=418b8960b708accf468db7964fc2d4b5" title="Go to Amazon.com using our affiliate link">Amazon Affiliate</a> link option.  If you’re not familiar with it, you simply click the link to go to Amazon.com.  Then anything you buy for the next 24 hours will give us some small percentage of your purchase price.  It doesn’t affect the price of what you’re buying at all. So if you were going to purchase something from Amazon anyway, and don’t mind - then by all means use our link first to help out!</p>
<hr>
<h2 id="1000-users"><a href="#1000-users" class="header-link-alt">1000 Users</a></h2>
<p>This week we also finally hit 1,000 users registered on <a href="https://discuss.pixls.us">discuss</a>! Which is just bananas to me.  I am super thankful for each and every member of the community who has taken the time to participate and share; catching up on what’s been going on is one of the better parts of my day.  You all rock!</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/StTqXEQ2l-Y" frameborder="0" allowfullscreen></iframe>
</div>

<p>While we’re talking about a number “1” with a bunch of zeros after it, we recently made some neat improvements to the forums…</p>
<h2 id="100-megabytes"><a href="#100-megabytes" class="header-link-alt">100 Megabytes</a></h2>
<p>We are a photography community and it seemed stupid to have to restrict users from uploading full quality images or raw files.  Previously it was a concern because the server the forums are hosted on has limited disk space (40GB).  Luckily, <a href="http://www.discourse.org/">Discourse</a> has an option for storing all uploads to the forum on <a href="https://aws.amazon.com/s3/">Amazon S3</a> buckets.</p>
<p>I went ahead and created some S3 buckets so that any uploads to the forums will now be hosted on Amazon instead of taking up precious space on the server. The costs are quite reasonable (around $0.30/GB right now), and it also means that I’ve been able to bump the upload size to 100MB for forum posts! You can now just drag and drop full resolution raw files directly into the post editor to include the file!</p>
<figure>
<img src="https://pixls.us/blog/2016/11/giving-thanks/drag-drop-320.gif" alt='Drag and Drop files in discuss'>
<figcaption>
70MB GIMP .xcf file?  Just drag-and-drop to upload, no problem! :)
</figcaption>
</figure>


<h2 id="travis-ci-automation"><a href="#travis-ci-automation" class="header-link-alt">Travis CI Automation</a></h2>
<p>On a slightly geekier note, did you know that the code for the entire website is available on <a href="https://github.com/pixlsus/website">Github</a>?  It’s also licensed liberally (<a href="https://github.com/pixlsus/website/blob/master/LICENSE">CC-BY-SA</a>), so no reason not to come and fiddle with things with us!  One of the features of using Github is integration with <a href="https://travis-ci.org">Travis CI</a> (Continuous Integration).</p>
<p>What this basically means is that every commit to the Github repo for the website gets picked up by Travis and built to test that everything is working ok.  You can actually see the <a href="https://travis-ci.org/pixlsus/website/builds">history of the website builds</a> there.</p>
<p>I’ve now got it set up so that when a build is successful on Travis, it will automatically publish the results to the main webserver and make it live. Our build system, <a href="http://www.metalsmith.io/">Metalsmith</a>, is a static site generator.  This means that we build the entire website on our local computers when we make changes, and then publish all of those changes to the webserver.  This change automates that process for us now by handling the building and publishing if everything is ok.</p>
<p>In fact, if everything is working the way I <em>think</em> it should, this very blog post will be the first one published using the new automated system!  Hooray!</p>
<p>You can poke me or @paperdigits on discuss if you want more details or feel like playing with the website.</p>
<h2 id="mica"><a href="#mica" class="header-link-alt">Mica</a></h2>
<p>Speaking of @paperdigits, I want to close this blog post with a great big “<strong>Thank You!</strong>” to him as well. He’s the only other person insane enough to try and make sense of all the stuff I’ve done building the site so far, and he’s been extremely helpful hacking at the website code, writing articles, making good infrastructure suggestions, taking the initiative on things (t-shirts and github repos), and generally being awesome all around.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[João Almeida's darktable Presets]]></title>
            <link>https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/</guid>
            <pubDate>Mon, 14 Nov 2016 18:19:19 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/portra400_after.jpg" /><br/>
                <h1>João Almeida's darktable Presets</h1> 
                <h2>A gorgeous set of film emulation for darktable</h2>  
<p>I realize that I’m a little late to this, but photographer <a href="http://www.joaoalmeidaphotography.com/">João Almeida</a> has created a wonderful set of film emulation presets for <a href="http://www.darktable.org/">darktable</a> that he uses in his own workflow for personal and commissioned work. Even more wonderful is that he has graciously <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">released them for everyone to use</a>.</p>
<!-- more -->
<p>These film emulations started as a personal side project for João, and he adds a disclaimer to them that he did not optimize them all for each brand or model of his cameras.  His end goal was for these to be as simple as possible by using a few <a href="http://www.darktable.org/">darktable</a> modules. He describes it best on <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">his blog post about them</a>:</p>
<blockquote>
<p>The end goal of these presets is to be as simple as possible by using few Darktable modules, it works solely by manipulating Lab Tone Curves for color manipulation, black &amp; white films rely heavily on Channel Mixer. Since I what I was aiming for was the color profiles of each film, other traits related with processing, lenses and others are unlikely to be implemented, this includes: grain, vignetting, light leaks, cross-processing, etc.</p>
</blockquote>
<p>Some before/after samples from his blog post:</p>
<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/portra400_after.jpg" data-swap-src='portra400_before-1.jpg' alt='João Almeida Portra 400 sample'>
<figcaption>
João Portra 400<br>
(Click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/kodachrome64_after.jpg" data-swap-src='kodachrome64_before-1.jpg' alt='João Alemida Kodachrome 64 sample'>
<figcaption>
João Kodachrome 64<br>
(Click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/velvia50__after.jpg" data-swap-src='velvia50_before.jpg' alt='João Alemida Velvia 50 sample'>
<figcaption>
João Velvia 50<br>
(Click to compare to original)
</figcaption>
</figure>

<p>You can read more on <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">João’s website</a> and you can see many more <a href="https://www.flickr.com/photos/tags/t3mujinpack">images on Flickr with the #t3mujinpack tag</a>. The full list of film emulations included with his pack:</p>
<ul>
<li>AGFA APX 25, 100</li>
<li>Fuji Astia 100F</li>
<li>Fuji Neopan 1600, Acros 100</li>
<li>Fuji Pro 160C, 400H, 800Z</li>
<li>Fuji Provia 100F, 400F, 400X</li>
<li>Fuji Sensia 100</li>
<li>Fuji Superia 100, 200, 400, 800, 1600, HG 1600</li>
<li>Fuji Velvia 50, 100</li>
<li>Ilford Delta 100, 400, 3200</li>
<li>Ilford FP4 125</li>
<li>Ilford HP5 Plus 400</li>
<li>Ilford XP2</li>
<li>Kodak Ektachrome 100 GX, VS</li>
<li>Kodak Ektar 100</li>
<li>Kodak Elite Chrome 400</li>
<li>Kodak Kodachrome 25, 64, 200</li>
<li>Kodak Portra 160 NC, VC</li>
<li>Kodak Portra 400 NC, UC, VC</li>
<li>Kodak Portra 800</li>
<li>Kodak T-Max 3200</li>
<li>Kodak Tri-X 400</li>
</ul>
<p>If you see João around the forums stop and say hi (and maybe a thank you). Even better, if you find these useful, consider buying him a beer (donation link is on his blog post)!</p>
<h3 id="related-reading"><a href="#related-reading" class="header-link-alt">Related Reading</a></h3>
<ul>
<li><a href="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/">color manipulation with the colour checker lut module (darktable)</a></li>
<li><a href="http://gmic.eu/film_emulation/">Pat David’s film emulation LUTs (G’MIC)</a></li>
<li><a href="https://discuss.pixls.us/t/common-color-curves-portra-provia-velvia/2154">Common Color Curves (Portra, Provia, Velvia) (RawTherapee)</a></li>
<li><a href="https://github.com/pmjdebruijn/colormatch">Pascal’s colormatch</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Aligning Images with Hugin]]></title>
            <link>https://pixls.us/articles/aligning-images-with-hugin/</link>
            <guid isPermaLink="true">https://pixls.us/articles/aligning-images-with-hugin/</guid>
            <pubDate>Fri, 04 Nov 2016 19:12:04 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/aligning-images-with-hugin/hugin_lede.jpg" /><br/>
                <h1>Aligning Images with Hugin</h1> 
                <h2>Easily process your bracketed exposures</h2>  
<p><a href="http://hugin.sourceforge.net/">Hugin</a> is an excellent tool for aligning and stitching images. In this article, we’ll focus on aligning a stack of images. Aligning a stack of images can be useful for achieving several results, such as:</p>
<ul>
<li>bracketed exposures to make an HDR or fused exposure (using enfuse/enblend), or manually blending the images together in an image editor</li>
<li>photographs taken at different focal distances to extend the depth of field, which can be very useful when taking macros</li>
<li>photographs taken over a period of time to make a time-lapse movie</li>
</ul>
<p>For the example images included with this tutorial, the <em>focal length</em> is <strong>12mm</strong> and the <em>focal length multiplier</em> is <strong>1</strong>. A big thank you to <a href="https://discuss.pixls.us/users/isaac/activity">@isaac</a> for providing these images.</p>
<p>You can download a zip file of all of the sample <em>Beach Umbrellas</em> images here:</p>
<p><a href="https://s3.amazonaws.com/pixls-files/Outdoor_Beach_Umbrella.zip">Download <strong>Outdoor_Beach_Umbrella.zip</strong></a> (62MB)</p>
<p>Other sample images to try with this tutorial can be <a href="#image-files">found at the end of the post</a>.</p>
<p>These instructions were adapted from the <a href="https://discuss.pixls.us/t/only-a-small-testimony/2130/5">original forum post</a> by <a href="https://discuss.pixls.us/users/Carmelo_DrRaw/activity">@Carmelo_DrRaw</a>; many thanks to him as well.</p>
<p>We’re going to align these bracketed exposures so we can blend them:</p>
<figure class="big-vid">
    <a href="side-by-side-example.jpg">
      <img src="https://pixls.us/articles/aligning-images-with-hugin/side-by-side-example.jpg" alt='Blend Examples' width='907' height='230'>
    </a>
</figure>



<ol>
<li><p>Select <strong>Interface</strong> → <strong>Expert</strong> to set the interface to <strong>Expert</strong> mode. This will expose all of the options offered by Hugin.</p>
</li>
<li><p>Select the <strong>Add images…</strong> button to load your bracketed images. Select your images from the file chooser dialog and click <strong>Open</strong>.</p>
</li>
<li><p>Set the optimal settings for aligning images:</p>
<ul>
<li>Feature Matching Settings: Align image stack</li>
<li>Optimize Geometric: Custom parameters</li>
<li>Optimize Photometric: Low dynamic range</li>
</ul>
</li>
<li><p>Select the <strong>Optimizer</strong> tab.</p>
</li>
<li><p>In the <strong>Image Orientation</strong> section, select the following variables for each image:</p>
<ul>
<li>Roll</li>
<li>X (TrX) [horizontal translation]</li>
<li>Y (TrY) [vertical translation]</li>
</ul>
<p>You can <code>Ctrl</code> + left mouse click to enable or disable the variables.</p>
<figure class="big-vid">
 <a href="roll_x_y_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/roll_x_y_hugin.png" alt='roll x y Hugin' width='878' height='714'>
 </a>
</figure>

<p>Note that you do not need to select the parameters for the anchor image:</p>
<figure class="big-vid">
 <a href="anchor_image_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/anchor_image_hugin.png" alt='Hugin anchor image' width='882' height='742'>
 </a>
</figure>
</li>
<li><p>Select <strong>Optimize now!</strong> and wait for the software to finish the calculations. Select <strong>Yes</strong> to apply the changes.</p>
</li>
<li><p>Select the <strong>Stitcher</strong> tab.</p>
</li>
<li><p>Select the <strong>Calculate Field of View</strong> button.</p>
</li>
<li><p>Select the <strong>Calculate Optimal Size</strong> button.</p>
</li>
<li><p>Select the <strong>Fit Crop to Images</strong> button.</p>
</li>
<li><p>To have the maximum number of post-processing options, select the following image outputs:</p>
<ul>
<li>Panorama Outputs: Exposure fused from any arrangement<ul>
<li>Format: TIFF</li>
<li>Compression: LZW</li>
</ul>
</li>
<li>Panorama Outputs: High dynamic range<ul>
<li>Format: EXR</li>
</ul>
</li>
<li><p>Remapped Images: No exposure correction, low dynamic range</p>
<figure class="big-vid">
 <a href="image_export_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/image_export_hugin.png" alt='Hugin Image Export' width='840' height='928'>
 </a>
</figure>
</li>
</ul>
</li>
<li><p>Select the <strong>Stitch!</strong> button and choose a place to save the files. Since Hugin generates quite a few temporary images, save the PTO file in its own folder.</p>
</li>
</ol>
<p>Hugin will output the following images:</p>
<ul>
<li>a tif file blended by enfuse/enblend</li>
<li>an HDR image in the EXR format</li>
<li>the individual remapped images, with no exposure correction, which you can import into the GIMP as layers and blend manually.</li>
</ul>
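<p>The enfuse output works by weighting each exposure per pixel according to how well exposed it is there. A heavily simplified numpy sketch of that idea (enfuse’s real implementation also weights contrast and saturation, and blends across a multiresolution pyramid rather than per pixel):</p>

```python
import numpy as np

def exposure_fuse(stack, sigma=0.2):
    """Per-pixel weighted average of bracketed frames (values in 0..1),
    favouring values near mid grey ('well-exposedness')."""
    stack = np.asarray(stack, dtype=float)              # shape (n, h, w)
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0, keepdims=True)       # normalise per pixel
    return (weights * stack).sum(axis=0)

rng = np.random.default_rng(2)
scene = rng.random((64, 64))
under = np.clip(scene * 0.4, 0.0, 1.0)   # simulated -EV frame
over = np.clip(scene * 1.8, 0.0, 1.0)    # simulated +EV frame
fused = exposure_fuse([under, over])
```

Because the result is a convex combination of the frames, every fused pixel lands between the darkest and brightest bracketed value at that position.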
<p>You can see the result of the image blended with enblend/enfuse:</p>
  <figure class="big-vid">
    <a href="beach_umbrella_fused.jpg">
      <img src="https://pixls.us/articles/aligning-images-with-hugin/beach_umbrella_fused.jpg" alt='Beach Umbrella Fused' width='960' height='718'>
    </a>
  </figure>

<p>With the output images, you can:</p>
<ul>
<li>edit the enfuse/enblend tif file further in the GIMP or RawTherapee</li>
<li>tone map the EXR file in LuminanceHDR</li>
<li>manually blend the remapped tif files in the GIMP or PhotoFlow</li>
</ul>
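<p>For the EXR file, a tone-mapping operator compresses the unbounded HDR luminance into display range. The simplest global curve of this kind, Reinhard’s x / (1 + x) (the Reinhard operators are among those LuminanceHDR offers), assuming linear luminance input:</p>

```python
import numpy as np

def reinhard(lum):
    """Global Reinhard tone curve: maps luminance x to x / (1 + x),
    compressing highlights while leaving deep shadows nearly linear."""
    lum = np.asarray(lum, dtype=float)
    return lum / (1.0 + lum)

hdr = np.array([0.01, 0.5, 1.0, 10.0, 100.0])
ldr = reinhard(hdr)   # every value lands in [0, 1)
```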
<hr>
<h2 id="image-files">Image files<a href="#image-files" class="header-link"><i class="fa fa-link"></i></a></h2>
<ul>
<li>Camera: Olympus E-M10 mark ii</li>
<li>Lens: Samyang 12mm F2.0</li>
</ul>
<h3 id="indoor_guitars">Indoor_Guitars<a href="#indoor_guitars" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Indoor_Guitars.zip"><strong>Download Indoor_Guitars.zip</strong></a> (75MB)</p>
<ul>
<li>5 brackets</li>
<li>&plusmn;0.3 EV increments</li>
<li>f5.6</li>
<li>focus at about 1m</li>
<li>center priority metering</li>
<li>exposed for guitars, bracketed for the sky, outdoor area, and indoor area</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h3 id="outdoor_beach_umbrella">Outdoor_Beach_Umbrella<a href="#outdoor_beach_umbrella" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Outdoor_Beach_Umbrella.zip"><strong>Download Outdoor_Beach_Umbrella.zip</strong></a> (62MB)</p>
<ul>
<li>3 brackets</li>
<li>&plusmn;1 EV increments</li>
<li>f11</li>
<li>focus at infinity</li>
<li>center priority metering</li>
<li>exposed for the water, bracketed for umbrella and sky</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h3 id="outdoor_sunset_over_ocean">Outdoor_Sunset_Over_Ocean<a href="#outdoor_sunset_over_ocean" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Outdoor_Sunset_Over_Ocean.zip"><strong>Download Outdoor_Sunset_Over_Ocean.zip</strong></a> (60MB)</p>
<ul>
<li>3 brackets</li>
<li>&plusmn;1 EV increments</li>
<li>f11</li>
<li>focus at infinity</li>
<li>center priority metering</li>
<li>exposed for the darker clouds, bracketed for darker water and lighter sky areas and sun</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h4 id="licencing-information">Licensing Information<a href="#licencing-information" class="header-link"><i class="fa fa-link"></i></a></h4>
<ul>
<li>Images created by <a href="https://discuss.pixls.us/users/isaac/activity">Isaac I. Ullah</a>, 2016, and released under the <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0</a> licence (<a class='cc' href='http://creativecommons.org/licenses/by-sa/4.0/'>cba</a>).</li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[The Royal Photographic Society Journal]]></title>
            <link>https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/</guid>
            <pubDate>Wed, 02 Nov 2016 14:36:20 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/RPS_Logo_WithCrest_RGB.png" /><br/>
                <h1>The Royal Photographic Society Journal</h1> 
                <h2>Who let us in here?</h2>  
<p>The <a href="http://www.rps.org/rps-journals/about"><em>Journal of the Photographic Society</em></a> is the journal for one of the oldest photographic societies in the world: the <a href="http://www.rps.org/">Royal Photographic Society</a>. First published in 1853, the <a href="http://www.rps.org/rps-journals/about"><em>RPS Journal</em></a> is the oldest photographic periodical in the world (just edging out the <a href="http://www.bjp-online.com/about-british-journal-of-photography/"><em>British Journal of Photography</em></a> by about a year).</p>
<p>So you can imagine my doubt when confronted with an email about using some material from <a href="https://pixls.us">pixls.us</a> for their latest issue…</p>
<!-- more -->
<hr>
<p>If the name sounds familiar to anyone it may be from a recent post by <a href="http://blog.joemcnally.com/">Joe McNally</a> who is featured prominently in the September 2016 issue.  He <a href="http://blog.joemcnally.com/2016/10/13/royal-photographic-society/">was also just inducted</a> as a fellow into the society!</p>
<figure>
<img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/RPS_Journal_09_2016_COVER.jpg" alt='RPS Journal 2016-09 Cover' width='640' height='886'>
</figure>

<hr>
<p>It turns out my initial doubts were completely unfounded, and they really wanted to run a page based on one of our tutorials.
The editors liked the <a href="https://pixls.us/articles/an-open-source-portrait-mairi/">Open Source Portrait</a> tutorial.  In particular, the section on using <a href="https://pixls.us/articles/an-open-source-portrait-mairi/#skin-retouching-with-wavelet-decompose"><em>Wavelet Decompose</em></a> to touch up the skin tones:</p>
<figure>
<img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/INDEPTH_RPS_NOV16.jpg" alt='RPS Journal 2016-11 PD'>
<figcaption>
Yay Mairi!
</figcaption>
</figure>


<p>How cool is that?  I actually searched the archive and the only other mention I can find of <a href="https://www.gimp.org">GIMP</a> (or any other F/OSS) is from a <a href="http://archive.rps.org/archive/volume-149/755209?q=GIMP#page/125">“Step By Step” article written by Peter Gawthrop</a> (Vol. 149, February 2009).  I think it’s pretty awesome that we can bring a little more exposure to Free Software alternatives.  Especially in more mainstream publications and to a broader audience!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Arnold Newman Portraits]]></title>
            <link>https://pixls.us/blog/2016/10/arnold-newman-portraits/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/10/arnold-newman-portraits/</guid>
            <pubDate>Fri, 28 Oct 2016 17:39:58 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/newman-stravinsky.jpg" /><br/>
                <h1>Arnold Newman Portraits</h1> 
                <h2>The beginnings of "Environmental Portraits"</h2>  
<p>Anyone who has spent any time around me would realize that I’m particularly fond of portraits. From the wonderful works of <a href="https://www.google.com/search?q=martin+schoeller&amp;tbm=isch">Martin Schoeller</a> to the sublime <a href="https://www.google.com/search?q=dan+winters&amp;tbm=isch">Dan Winters</a>, I am simply fascinated by a well executed portrait. So I thought it would be fun to take a look at some selections from the “father” of environmental portraits - <a href="http://arnoldnewman.com/">Arnold Newman</a>.</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Newman Self Portrait.jpg" alt='Arnold Newman, Self Portrait, Baltimore MD, 1939' width='640' height='658'>
<figcaption>
<a href="http://arnoldnewman.com/">Arnold Newman</a>, Self Portrait, Baltimore MD, 1939
</figcaption>
</figure>

<p>Newman wanted to become a painter before needing to drop out of college after only two years to take a job shooting portraits in a photo studio in Philadelphia. This experience apparently taught him what he did <em>not</em> want to do with photography…</p>
<p>Luckily, it may have started defining what he <em>did</em> want to do with his photography. Namely, his approach to capturing his subjects alongside (or within) the context of the things that made them notable in some way.  This would become known as “Environmental Portraiture”. He described it best in an interview for <a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">American Photo</a> in 2000:</p>
<blockquote>
<p>I didn’t just want to make a photograph with some things in the background.  The surroundings had to add to the composition and the understanding of the person.  No matter who the subject was, it had to be an interesting photograph.  Just to simply do a portrait of a famous person doesn’t mean a thing. <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup></p>
</blockquote>
<p>Though he felt that the term might be unnecessarily restrictive (and possibly overshadowed his other pursuits, including abstractions and photojournalism), there’s no denying the impact of the results. Possibly his most famous portrait, of composer Igor Stravinsky, illustrates this wonderfully.  The overall tones are almost monotone (flat - pun intended, and likely intentional on behalf of Newman) and are dominated by the stark duality of the white wall with the black piano.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Igor Stravinsky, New York, NY, 1946.jpg" alt='Igor Stravinsky by Arnold Newman' width='640' height='332'>
<figcaption>
<em>Igor Stravinsky, New York, NY, 1946</em> by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Newman realized that the open lid of the piano <em>“…is like the shape of a musical flat symbol&mdash;strong, linear, and beautiful, just like Stravinsky’s work.”</em> <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup> The geometric construction of the image instantly captures the eye and the aggressive crop makes the final composition even more interesting. In this case the crop was a fundamental part of the original composition as shot, but it was not uncommon for him to find new life in images with different crops.</p>
<p>In a similar theme, his portraits of both <a href="https://en.wikipedia.org/wiki/Salvador_Dal%C3%AD">Salvador Dalí</a> and <a href="https://en.wikipedia.org/wiki/John_F._Kennedy">John F. Kennedy</a> show a willingness to allow the crop to bring in different defining characteristics of his subjects. In the case of Dalí it allows an abstraction to hang there, mimicking the pose of the artist himself. Kennedy is nearly the only organic form, striking a relaxed pose while dwarfed by the imposing architecture and hard lines surrounding him.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Salvador Dali, New York, NY, 1951.jpg" alt='Salvador Dali, New York, NY, 1951' width='572' height='780'>
<figcaption>
Salvador Dali, New York, NY, 1951 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/John F. Kennedy, Washington D.C., 1953.jpg" alt='John F. Kennedy, Washington D.C., 1953' width='629' height='780'>
<figcaption>
John F. Kennedy, Washington D.C., 1953 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>He brings the same deft handling of placing his subjects in the context of their work to portraits of other photographers as well. His portrait of <a href="http://anseladams.com/">Ansel Adams</a> shows the photographer just outside his studio, with the surrounding wilderness not only visible around the frame but reflected in the glass of the doors behind him (and in the photographer’s glasses). Perhaps an indication of the nature of Adams’ work: capturing natural scenes through glass?</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Ansel Adams, 1975.jpg" alt='Ansel Adams, 1975 by Arnold Newman' width='599' height='780'>
<figcaption>
Ansel Adams, 1975 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>For anyone familiar with the pioneer of another form of photography, Newman’s portrait of (the usually camera-shy) <a href="https://en.wikipedia.org/wiki/Henri_Cartier-Bresson">Henri Cartier-Bresson</a> will instantly evoke a sense of the artist’s candid street images. In it, Bresson appears to take the place of one of his subjects, caught briefly on the streets in a fleeting moment. The portrait has an almost spontaneous feeling to it, (again) mirroring the style of the work of its subject.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Henri Cartier-Bresson, New York, NY, 1947.jpg" alt='Henri Cartier-Bresson, New York, NY, 1947' width='640' height='454'>
<figcaption>
Henri Cartier-Bresson, New York, NY, 1947 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Eight years after his portrait of the surrealist painter Dalí, Newman shot another famous artist of abstraction, <a href="https://en.wikipedia.org/wiki/Pablo_Picasso">Pablo Picasso</a>. This particular portrait is much more intimate and more classically composed, framing the subject as a headshot with little of the surrounding environment seen in the earlier work. I can’t help but think that the similar placement of the hand in both images is intentional; a nod to the unconventional views both artists brought to the world.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Pablo Picasso,Vallauris, France, 1954.jpg" alt='Pablo Picasso,Vallauris, France, 1954' width='609' height='780'>
<figcaption>
Pablo Picasso,Vallauris, France, 1954 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<hr>
<p>The eloquent <a href="https://en.wikipedia.org/wiki/Gregory_Heisler">Gregory Heisler</a> had a wonderful discussion about Newman for <a href="http://www.acpinfo.org/blog/2008/09/29/gregory-heisler-on-arnold-newman-the-man-and-his-impact-wednesday-oct-1st-7pm-the-high-museum/"><em>Atlanta Celebrates Photography</em></a> at the High Museum in 2008:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/IjY8XbGXmXw" frameborder="0" allowfullscreen></iframe>
</div>

<p>Arnold Newman produced an amazing body of work that warrants some time and consideration for anyone interested in portraiture. These few examples simply do not do his <a href="http://arnoldnewman.com/content/portraits-0">collection of portraits</a> justice.  If you have a few moments to peruse some amazing images, head over to his website and have a look (I’m particularly fond of his extremely design-oriented portrait of Chinese-American architect <a href="http://arnoldnewman.com/media-gallery/detail/58/315">I.M. Pei</a>):</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/I.M. Pei, New York, NY, 1967.jpg" alt='I.M. Pei, New York, NY, 1967' width='640' height='773'>
<figcaption>
I.M. Pei, New York, NY, 1967 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Of historical interest is a look at Newman’s contact sheet for the Stravinsky image showing various compositions and approaches to his subject with the piano. (I would have easily chosen the last image in the first row as my pick.) I have seen the second image in the second row cropped as indicated, which was also a very strong choice. I adore being able to investigate contact sheets from shoots like this - it helps me to humanize these amazing photographers while simultaneously allowing me an opportunity to learn a little about their thought process and how I might incorporate it into my own photography.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Igor Stravinsky contact.jpg" alt='Igor Stravinsky contact sheet' width='960' height='694'>
</figure>

<p>To close, a quote from his interview with <em>American Photo</em> magazine back in 2000 that will likely remain relevant to photographers for a long time:</p>
<blockquote>
<p>But a lot of photographers think that if they buy a better camera they’ll be able to take better photographs.  A better camera won’t do a thing for you if you don’t have anything in your head or in your heart. <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup></p>
</blockquote>
<p><small>
<sup>1</sup> Harris, Mark. <a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">“Arnold Newman: The Stories Behind Some of the Most Famous Portraits of the 20th Century.”</a> <em>American Photo</em>, March/April 2000, pp. 36-38
</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Highlight Bloom and Photoillustration Look]]></title>
            <link>https://pixls.us/articles/highlight-bloom-and-photoillustration-look/</link>
            <guid isPermaLink="true">https://pixls.us/articles/highlight-bloom-and-photoillustration-look/</guid>
            <pubDate>Wed, 12 Oct 2016 18:47:35 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/lede-woman.jpg" /><br/>
                <h1>Highlight Bloom and Photoillustration Look</h1> 
                <h2>Replicating a 'Lucisart'/Dave Hill type illustrative look</h2>  
                <p>Over in <a href="https://discuss.pixls.us/t/heres-some-kind-lucisart-processing-using-gmic-filters/2394" title="Topic on Discuss">the forums</a> community member <a href="https://discuss.pixls.us/users/sguyader/activity" title="sguyader on discuss">Sebastien Guyader</a> (@sguyader) posted a neat workflow for emulating a photo-illustrative look popularized by photographers like <a href="http://davehillphoto.com/classics-2005-2010/">Dave Hill</a> where the resulting images often seem to have a sort of hyper-real feeling to them. Some of this feeling comes from a local-contrast boost and slight ‘blooming’ of the lighter tones in the image (though arguably most of the look is due to lighting and compositing of multiple elements).</p>
<p>To illustrate, here are a few representative samples of Dave Hill’s work that reflects this feeling:</p>
<figure>
<a href='http://davehillphoto.com/classics-2005-2010/4sj9tswggio55wowsdzl7vtflvfjm4'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/09_cliff_final.jpg" alt='Dave Hill Cliff' width='640' height='312'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/c8kqlov3w2osl8yvtqvro0ckl12q6m'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/finishline_Lotion_Guy_Hot_Girl_092d.jpg" alt='Dave Hill Finishline Lotion' width='640' height='395'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/yg988exvuge6ek4290vge1s4rarujf'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/track_6187a.jpg" alt='Dave Hill Track' width='640' height='427'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/4bt8vpcqi2vi1k8eve575sb861xk4m'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/nick_saban_6443a.jpg" alt='Dave Hill Nick Saban' width='640' height='932'>
</a>
<figcaption>
A collection of example images. &copy;<a href="http://davehillphoto.com/classics-2005-2010/">Dave Hill</a>
</figcaption>
</figure>

<p>A video of Dave presenting on how he brought together the idea and images for the series that the first image above is from:</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/zSGY_N2Z_y0" frameborder="0" allowfullscreen></iframe>
</div>

<p>This effect is also popularized in Photoshop<sup><small>®</small></sup> filters such as <a href="https://www.google.com/search?q=photoshop+lucisart&amp;rlz=1C1CHBF_enUS707US707&amp;source=lnms&amp;tbm=isch&amp;sa=X&amp;ved=0ahUKEwi-no2l_NXPAhUBYT4KHbekC9QQ_AUICCgB&amp;biw=1353&amp;bih=1073#tbm=isch&amp;q=lucisart" title="Google Image search for &#39;Lucisart&#39;">LucisArt</a> in an effort to attain what some would (<em>erroneously</em>) call an “HDR” effect.  Really what they likely mean is a not-so-subtle tone-mapping. In particular, the exaggerated local contrast is often what garners folks’ attention.</p>
<p>We had <a href="https://pixls.us/articles/freaky-details-calvin-hollywood/">previously posted</a> about a method for exaggerating fine local contrasts and details using the <a href="https://pixls.us/articles/freaky-details-calvin-hollywood/">“Freaky Details”</a> method described by Calvin Hollywood. This workflow follows a similar idea but produces different results that many might find more appealing (it’s not as <em>gritty</em> as the Freaky Details approach).</p>
<p>Sebastien produced some great looking preview images to give folks a feeling for what the process would produce:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/bmw-vehicle-ride-bike-journey-1313343.jpg" alt='BMW' width='960' height='270' />
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/ifa-f9-oldtimer-pkw-ddr-1661767.jpg" alt='IFA-F9' width='960' height='310' />
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/fashion-woman-beauty-leisure-model-1636868.jpg" alt='Fashion Woman' width='960' height='320' />
<figcaption>
Images from <a href="https://pixabay.com">pixabay</a> (<a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">CC0, public domain</a>): <a href="https://pixabay.com/en/bmw-vehicle-ride-bike-journey-1313343/">Motorcycle</a>, <a href="https://pixabay.com/en/ifa-f9-oldtimer-pkw-ddr-1661767/">car</a>, <a href="https://pixabay.com/en/fashion-woman-beauty-leisure-model-1636868/">woman</a>.
</figcaption>
</figure>

<h2 id="replicating-a-dave-hill-lucasart-effect">Replicating a “Dave Hill”/“LucisArt” effect<a href="#replicating-a-dave-hill-lucasart-effect" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Sebastien’s approach relies only on having the always useful <a href="http://gmic.eu">G’MIC</a> plugin for <a href="https://www.gimp.org">GIMP</a>. The general workflow is to do a high-pass frequency separation, and to apply some effects like local contrast enhancement and some smoothing on the residual low-pass layer.  Then recombine the high+low pass layers to get the final result.</p>
<ol>
<li>Open the image.</li>
<li>Duplicate the base layer.<br>Rename it to <em>“Lowpass”</em>.</li>
<li>With the top layer (<em>“Lowpass”</em>) active, open G’MIC.</li>
<li>Use the <em>Photocomix smoothing</em> filter:
<p><span class="Cmd">Testing → Photocomix → Photocomix smoothing</span></p>
Set the <strong>Amplitude</strong> to <strong>10</strong>. Apply.<br>This is to taste, but a good starting place might be around 1% of the image width (so for a 2000px-wide image, try an Amplitude of 20).</li>
<li>Change the <em>“Lowpass”</em> layer blend mode to <em>Grain extract</em>.</li>
<li>Right-Click on the layer and choose <em>New from visible</em>.<br>Rename this layer from “<em>Visible</em>“ to something more memorable like <em>“Highpass”</em> and set its layer mode to <em>Grain merge</em>.<br>Turn off this layer visibility for now.</li>
<li>Activate the <em>“Lowpass”</em> layer and set its layer blend mode back to <em>Normal</em>.<br>The rest of the filters are applied to this <em>“Lowpass”</em> layer.</li>
<li>Open G’MIC again.<br>Apply the <em>Simple local contrast</em> filter:
<p><span class="Cmd">Details → Simple local contrast</span></p>
Using:<ul>
<li><strong>Edge Sensitivity</strong> to <strong>25</strong></li>
<li><strong>Iterations</strong> to <strong>1</strong></li>
<li><strong>Paint effect</strong> to <strong>50</strong></li>
<li><strong>Post-gamma</strong> to <strong>1.20</strong>  </li>
</ul>
</li>
<li>Open G’MIC again.<br>Now apply the <em>Graphic novel</em> filter:
<p><span class="Cmd">Artistic → Graphic novel</span></p>
Using:<ul>
<li>check the <strong>Skip this step</strong> checkbox for <strong>Apply Local Normalization</strong></li>
<li><strong>Pencil size</strong> to <strong>1</strong></li>
<li><strong>Pencil amplitude</strong> to <strong>100-200</strong></li>
<li><strong>Pencil smoother sharpness/edge protection/smoothness</strong><br>  to <strong>0</strong></li>
<li>Boost merging options <strong>Mixer</strong> to <strong>Soft light</strong></li>
<li><strong>Painter’s touch sharpness</strong> to <strong>1.26</strong></li>
<li><strong>Painter’s edge protection flow</strong> to <strong>0.37</strong></li>
<li><strong>Painter’s smoothness</strong> to <strong>1.05</strong></li>
</ul>
</li>
<li>Finally, make the <em>“Highpass”</em> layer visible again to bring back the fine details.</li>
</ol>
<h3 id="trying-it-out-">Trying It Out!<a href="#trying-it-out-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Let’s walk through the process. Sebastien got his sample images from the website <a href="https://pixabay.com">https://pixabay.com</a>, so I thought I would follow suit and find something suitable from there also.  After some searching I found this neat image from Jerzy Gorecki licensed <a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">Creative Commons 0/Public Domain</a>.</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-01-base.jpg" alt='Model' width='640' height='815'/>
<figcaption>
The base image (<a href="https://pixabay.com/en/girl-hands-the-act-of-portrait-1527959/">link</a>).<br>From <a href="https://pixabay.com">pixabay</a>, (<a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">CC0 - Public Domain</a>): Jerzy Gorecki.
</figcaption>
</figure>

<h4 id="frequency-separation">Frequency Separation<a href="#frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first steps (1&mdash;7) are to create a High/Low pass frequency separation of the image.  If you have a different method for obtaining the separation then feel free to use it.  Sebastien uses the Photocomix smoothing filter to create his low-pass layer (other options might be Gaussian blur, bilateral smoothing, or even wavelets).</p>
<p>The basic steps to do this are to duplicate the base layer, blur it, then set the layer blend mode to <strong>Grain extract</strong> and create a new layer from visible. The new layer will be the Highpass (high-frequency) details and should have its layer blend mode set to <strong>Grain merge</strong>.  The original blurred layer is the Lowpass (low-frequency) information and should have its layer blend mode set back to <strong>Normal</strong>.</p>
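<p>The grain-extract/grain-merge arithmetic behind this separation is simple enough to check directly. Here is a small Python sketch (illustrative only, assuming GIMP’s legacy 8-bit blend modes, which offset the difference by 128) showing that merging the blurred low-pass layer back with the extracted high-pass residual restores the original pixels:</p>

```python
def grain_extract(base, blur):
    # GIMP legacy "Grain extract" blend: base - blur + 128, clipped to 0..255
    return [max(0, min(255, b - l + 128)) for b, l in zip(base, blur)]

def grain_merge(low, high):
    # GIMP legacy "Grain merge" blend: low + high - 128, clipped to 0..255
    return [max(0, min(255, l + h - 128)) for l, h in zip(low, high)]

# One row of 8-bit pixel values and a smoothed ("low-pass") version of it
base = [10, 200, 90, 128]
blur = [60, 150, 100, 120]

high = grain_extract(base, blur)        # high-frequency residual, centered on 128
assert grain_merge(blur, high) == base  # low + high reconstructs the original
```

<p>As long as the residual stays inside the 0&ndash;255 clip range, the separation is lossless, which is why the recombined layers look identical to the starting image.</p>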
<p>So, following Sebastien’s steps, duplicate the base layer and rename the layer to “lowpass”.  Then open G’MIC and apply:</p>
<p><span class="Cmd">Testing → Photocomix → Photocomix smoothing</span></p>

<p>with an amplitude of around 20. Change this to suit your own taste, but about 1% of the image width is a decent starting point.  You’ll now have the base layer and the “lowpass” layer above it that has been smoothed:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-02-photocomix-smooth.jpg" alt='Photocomix Smoothing' width='640' height='815'>
<figcaption>
“lowpass” layer after Photocomix smoothing with <strong>Amplitude</strong> set to 20.
</figcaption>
</figure>

<p>Setting the “lowpass” layer blend mode to <strong>Grain extract</strong> will reveal the high-frequency details:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-02-photocomix-smooth-HP.png" alt='Grain Extract' width='271' height='197'>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-03-photocomix-smooth-grain-extract.jpg" alt='HP' width='640' height='815'>
<figcaption>
The high-frequency details visible after setting the blurred “lowpass” layer blend mode to <strong>Grain extract</strong>.
</figcaption>
</figure>

<p>Now create a new layer from what is currently visible.  Either right-click the “lowpass” layer and choose “New from visible” or from the menus:</p>
<p><span class="Cmd">Layer → New from Visible</span></p>

<p>Rename this new layer from “Visible” to “highpass” and set its layer blend mode to <strong>Grain merge</strong>.  Select the “lowpass” layer and set its layer blend mode back to <strong>Normal</strong>.</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-03-frequency-separation.png" alt='Layers' width='271' height='237'>
</figure>

<p>The visible result should be back to what your starting image looked like.
The rest of the steps for this tutorial will operate on the “lowpass” layer.
You can leave the “highpass” layer visible during the rest of the steps to see what your results will look like.</p>
<h4 id="modifying-the-low-frequency-layer">Modifying the Low-Frequency Layer<a href="#modifying-the-low-frequency-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>These next steps will modify the underlying low-frequency image information to smooth it out and give it a bit of a contrast boost. First the “Simple local contrast” filter will separate tones and do some preliminary smoothing, while the “Graphic novel” filter will provide a nice boost to light tones along with further smoothing.</p>
<h4 id="simple-local-contrast">Simple Local Contrast<a href="#simple-local-contrast" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>On the “lowpass” layer, open <a href="http://gmic.eu">G’MIC</a> and find the “Simple local contrast” filter:</p>
<p><span class="Cmd">Details → Simple local contrast</span></p>

<p>Change the following settings:</p>
<ul>
<li><strong>Edge Sensitivity</strong> to <strong>25</strong></li>
<li><strong>Iterations</strong> to <strong>1</strong></li>
<li><strong>Paint effect</strong> to <strong>50</strong></li>
<li><strong>Post-gamma</strong> to <strong>1.20</strong>  </li>
</ul>
<p>This will smooth out overall tones while simultaneously providing a nice local contrast boost. This is the step that causes small lighting details to “pop”:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-04-simple-local-contrast.jpg" alt='Simple Local Contrast' data-swap-src='tut-01-base.jpg' width='640' height='815' >
<figcaption>
After applying the “Simple local contrast” filter.<br>(Click to compare to the original image)
</figcaption>
</figure>

<p>The contrast increase provides a nice visual punch to the image. The addition of the “Graphic novel” filter will push the overall image much closer to a feeling of a photo-illustration.</p>
<h4 id="graphic-novel">Graphic Novel<a href="#graphic-novel" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Still on the “lowpass” layer, re-open <a href="http://gmic.eu">G’MIC</a> and open the “Graphic Novel” filter:</p>
<p><span class="Cmd">Artistic → Graphic novel</span></p>

<p>Change the following settings:</p>
<ul>
<li>check the <strong>Skip this step</strong> checkbox for <strong>Apply Local Normalization</strong></li>
<li><strong>Pencil size</strong> to <strong>1</strong></li>
<li><strong>Pencil amplitude</strong> to <strong>100-200</strong></li>
<li><strong>Pencil smoother sharpness/edge protection/smoothness</strong><br>  to <strong>0</strong></li>
<li>Boost merging options <strong>Mixer</strong> to <strong>Soft light</strong></li>
<li><strong>Painter’s touch sharpness</strong> to <strong>1.26</strong></li>
<li><strong>Painter’s edge protection flow</strong> to <strong>0.37</strong></li>
<li><strong>Painter’s smoothness</strong> to <strong>1.05</strong></li>
</ul>
<p>The intent with this filter is to further smooth the overall tones, simplify details, and to give a nice boost to the light tones of the image:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-05-graphic-novel.jpg" alt='Graphic Novel' data-swap-src='tut-04-simple-local-contrast.jpg' width='640' height='815'>
<figcaption>
After applying the “Graphic novel” filter.<br>(Click to compare to the local contrast result)
</figcaption>
</figure>

<p>The effect at 100% opacity can be a little strong.  If so, simply adjust the opacity of the “lowpass” layer to taste. In some cases it would probably be desirable to mask areas you don’t want the effect applied to.</p>
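<p>If it helps to see what the opacity slider is doing, GIMP’s Normal-mode layer opacity is just a per-pixel linear mix between the layer below and the effect layer. A tiny Python sketch (illustrative only; the pixel values are hypothetical):</p>

```python
def blend_opacity(original, effect, opacity):
    # Normal-mode layer opacity: opacity * effect + (1 - opacity) * original
    return [round((1 - opacity) * o + opacity * e)
            for o, e in zip(original, effect)]

# Dialing the "lowpass" effect layer back to 60% opacity
print(blend_opacity([100, 200, 50], [140, 180, 90], 0.6))  # → [124, 188, 74]
```

<p>So at 60% opacity, each pixel moves 60% of the way from the original toward the filtered result.</p>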
<p>I’ve included the GIMP .xcf.bz2 file of this image while I was working on it for this article.  You can <a href="girl-hands-the-act-of-portrait-1527959-full.xcf.bz2"><strong>download the file here</strong></a> (34.9MB). I did each step on a new layer so if you want to see the results of each effect step-by-step, simply turn that layer on/off:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-04-xcf-sample.png" alt='Sample layers' width='271' height='320'>
<figcaption>
Example XCF layers
</figcaption>
</figure>

<p>Finally, a great big <strong>Thank You!</strong> to Sebastien Guyader (@sguyader) for <a href="https://discuss.pixls.us/t/heres-some-kind-lucisart-processing-using-gmic-filters/">sharing this with everyone</a> in the community!</p>
<h4 id="a-g-mic-command">A G’MIC Command<a href="#a-g-mic-command" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Of course, this wouldn’t be complete if someone didn’t come along with the direct <a href="http://gmic.eu">G’MIC</a> commands to get a similar result!  And we can thank Iain Fergusson (@Iain) for coming up with the commands:</p>
<pre><code>--gimp_anisotropic_smoothing[0] 10,0.16,0.63,0.6,2.35,0.8,30,2,0,1,1,0,1

-sub[0] [1]

-simplelocalcontrast_p[1] 25,1,50,1,1,1.2,1,1,1,1,1,1
-gimp_graphic_novelfxl[1] 1,2,6,5,20,0,1,100,0,1,0,0.78,1.92,0,0,2,1,1,1,1.26,0.37,1.05
-add
-c 0,255
</code></pre>  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[From the Community Vol. 1]]></title>
            <link>https://pixls.us/blog/2016/09/from-the-community-vol-1/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/09/from-the-community-vol-1/</guid>
            <pubDate>Sun, 04 Sep 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/photography-tile.png" /><br/>
                <h1>From the Community Vol. 1</h1> 
                  
                <p>Welcome to the first installment of <em>From the Community</em>, a (hopefully) quarterly blog post to highlight a few of the things our community members have been doing!</p>
<!-- more -->
<h2 id="rapid-photo-downloader-process-model"><a href="#rapid-photo-downloader-process-model" class="header-link-alt">Rapid Photo Downloader Process Model</a></h2>
<p><a href="https://discuss.pixls.us/t/the-rapid-photo-downloader-0-9-process-model/2114">@damonlynch has a great write up of Rapid Photo Download’s process model</a>. Rapid Photo Downloader is built using <a href="https://www.python.org/">Python</a>, so if you’re looking for a good way to add threads to your Python program, this write up has some good information for you, check it out!</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/rpd-process-model.png" alt='rpd process model'>
</figure>

<h2 id="community-built-software-downloads-page"><a href="#community-built-software-downloads-page" class="header-link-alt">Community-built Software downloads page</a></h2>
<p>Free Software development tends to move at a pretty good pace, so there is always something new to try out! Not all of the new things warrant a new release, but our community steps up and builds the software so that others can use and test it! Instead of random links to dropboxes and such, we’ve created a <a href="https://discuss.pixls.us/t/community-built-software/2137">Community-built Software page</a> to centralize things and make it easy for users to find and download the freshest builds from our great community members. Keep in mind that these builds are considered testing and support may be limited, so quality may vary; but if you covet the newest, shiniest things, this is the place for you!</p>
<h2 id="glitch-art-filters-coming-to-g-mic"><a href="#glitch-art-filters-coming-to-g-mic" class="header-link-alt">Glitch art filters coming to G’MIC</a></h2>
<p><a href="https://discuss.pixls.us/t/on-the-road-to-1-7-6/2167">G’MIC will be getting some cool glitch art filters in 1.7.6</a>. <a href="https://discuss.pixls.us/users/thething">@thething</a> is interested in <a href="https://en.wikipedia.org/wiki/Glitch_art">glitch art</a> and <a href="https://discuss.pixls.us/t/glitch-art-filters/2159">requested some new filters in G’MIC</a>, and <a href="https://discuss.pixls.us/users/david_tschumperle">@David_Tschumperle</a> delivered very quickly!</p>
<p>You can flip blocks:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/gmic-block-flipping.png" alt='GMIC block flipping'>
</figure>

<p>and warp your images:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/gmic-warp.png" alt='GMIC image warping'>
</figure>

<h2 id="an-alternative-to-watermarking"><a href="#an-alternative-to-watermarking" class="header-link-alt">An Alternative to Watermarking</a></h2>
<p>Watermarking is ugly and takes focus away from your image. <a href="https://discuss.pixls.us/t/annotation-with-imagemagick-watermark-ish/1813">Why not try and add an attribution bar to your images?</a> In this post, <a href="https://discuss.pixls.us/users/patdavid">@patdavid</a> lays out how to add a bar underneath your image with your name, the image title, and a little logo. <a href="https://discuss.pixls.us/users/david_tschumperle">@David_Tschumperle</a> followed that effort up with an alternate implementation using G’MIC instead of ImageMagick. Lastly, <a href="https://discuss.pixls.us/users/vato">@vato</a> rolled the ImageMagick version into a <a href="https://discuss.pixls.us/t/annotation-with-imagemagick-watermark-ish/1813/6">bash script</a> with the necessary parameters exposed as variables at the beginning of the script.</p>
<p>Here is an example image by <a href="https://discuss.pixls.us/users/morgan_hardwood">@Morgan_Hardwood</a>:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/attrib-bar.jpg" alt='attribution bar example'>
</figure>

<h2 id="help-author-a-tutorial-for-beginners"><a href="#help-author-a-tutorial-for-beginners" class="header-link-alt">Help Author a Tutorial for Beginners</a></h2>
<p>Finally, <a href="https://discuss.pixls.us/t/article-idea-beginners-intro-to-free-software-photography/931">we’re still working on our beginner article</a> to help new users navigate the myriad of free software photography tools out there. If you have ideas, or better yet, want to author a bit of content with our community, please join and help out! The post is a community wiki and has complete revision control, so don’t be afraid to jump in and contribute!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Chiaroscuro Portrait]]></title>
            <link>https://pixls.us/articles/a-chiaroscuro-portrait/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-chiaroscuro-portrait/</guid>
            <pubDate>Wed, 27 Jul 2016 18:16:07 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lede.jpg" /><br/>
                <h1>A Chiaroscuro Portrait</h1> 
                <h2>Following the Old Masters</h2>  
                <h2 id="introduction-concept-theory-">Introduction (Concept/Theory)<a href="#introduction-concept-theory-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The term <a href="https://en.wikipedia.org/wiki/Chiaroscuro"><em>Chiaroscuro</em></a> is derived from the Italian <em>chiaro</em> meaning ‘clear, bright’ and <em>oscuro</em> meaning ‘dark, obscure’.  In art the term has come to refer to the use of bold contrasts between light and shadow, particularly across an entire composition, where they are a prominent feature of the work.</p>
<p>This interplay of shadow and light is particularly important in allowing the viewer to extrapolate volume from a flat image.  The use of a single light source helps to accentuate the perception of volume as well as adding drama and dynamics to the scene.</p>
<p>Historically the use of chiaroscuro can often be associated with the works of old masters such as <a href="https://en.wikipedia.org/wiki/Rembrandt">Rembrandt</a> and <a href="https://en.wikipedia.org/wiki/Caravaggio">Caravaggio</a>.  The use of such extreme lighting immediately evokes a sense of shape and volume, while focusing the attention of the viewer.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/rembrandt-self.jpg" alt='Rembrandt Self Portrait' width='391' height='480'>
<figcaption>
<a href='https://commons.wikimedia.org/wiki/File:Rembrandt_van_Rijn_184.jpg'><em>Self Portrait with Gorget</em></a> by <a href="https://en.wikipedia.org/wiki/Rembrandt">Rembrandt</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/pearl_earring.jpg" alt='Girl with a Pearl Earring' width='410' height='480'>
<figcaption>
<a href="https://en.wikipedia.org/wiki/Girl_with_a_Pearl_Earring"><em>Girl with a Pearl Earring</em></a> by <a href="https://en.wikipedia.org/wiki/Johannes_Vermeer">Johannes Vermeer</a>
</figcaption>
</figure>

<p>The aim of this tutorial will be to emulate the lighting characteristics of chiaroscuro in producing a portrait to evoke the feeling of an old master painting.</p>
<h3 id="equipment">Equipment<a href="#equipment" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In examining chiaroscuro portraiture, it becomes apparent that a strong characteristic of these images is the use of a single light source on the scene.  So this tutorial will focus on using a single source to illuminate the portrait.</p>
<p>Getting the keylight off the camera is essential.  The closer the keylight is to the axis of the camera, the more the shadows are reduced.  This is counter to the intention of this workflow: shadows are an essential component in producing this look, and on-camera lighting simply will not work.</p>
<p>The reason to choose a softbox versus the myriad of other light modifiers available is simple: control.  Umbrellas can soften the light, but due to their open nature have a tendency to spill light everywhere while doing so.  A softbox allows the light to be softened while also retaining a higher level of spill control.</p>
<p>Light spill can still occur with a softbox, so the best option is to bring the light in as close as possible to the subject.  Due to the inverse square nature of light attenuation, this will help to drop the background very dark (or black) when exposing properly for the subject.</p>
<figure class='big-vid'>
<a href='three-dots.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/three-dots.jpg" alt='Inverse Square Light Fall Off' width='960' height='320'>
</a>
</figure>

<p><strong>Left</strong><br>For example, in the sample images above, a 20 inch softbox was initially located about 18 inches away from the subject, while the rear wall was approximately 48 inches away, well over twice the light-to-subject distance.  Thus, with a proper exposure for the subject, the background receives around 3 stops less light.  This is why the background in the first image has dropped to a dark gray.</p>
<p><strong>Middle</strong><br>When the light distance to the subject is doubled and the light distance to the rear wall stays the same, the ratio between them is not as extreme.  The light distance from the subject is now 36 inches, while the light distance to the rear wall is still 48 inches.  When properly exposing for the subject, the rear wall is now only about 1 stop lower in light.</p>
<p><strong>Right</strong><br>In the final example, the distances from the light to the subject and to the rear wall are nearly equal.  As such, a proper exposure for the subject brings the wall almost to a middle exposure.</p>
<p>What this example provides is a good visual guide for how to position the subject and light relative to the surroundings to create the desired look.  To accentuate the ratio between dark and light in the image it would be best to move the light as close to the subject as possible.</p>
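<p>The falloff arithmetic above follows directly from the inverse square law.  A minimal sketch of the calculation (assuming, as the examples do, that both distances are measured from the light):</p>

```python
import math

def stops_darker(d_subject, d_background):
    """Stops of light lost at the background relative to the subject.

    Inverse square law: intensity ~ 1/d^2, so the difference in stops is
    log2((d_background / d_subject) ** 2) = 2 * log2(d_background / d_subject).
    """
    return 2 * math.log2(d_background / d_subject)

# Distances in inches, matching the setups described above.
print(round(stops_darker(18, 48), 1))  # left:   ~2.8 stops (about 3)
print(round(stops_darker(36, 48), 1))  # middle: ~0.8 stops (about 1)
```

<p>This is why pushing the light in close to the subject is the simplest way to drop the background toward black.</p>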
<p>If there is nothing to reflect light on the shadow side of the subject, then the shadows would fall to very dark or black.  Usually, there are at least walls and ceilings in a space that will reflect some light, and the amount falling on the shadow side can be attenuated by either moving the subject nearer to a wall on that side, or using a bounce/reflector as desired.</p>
<h2 id="shooting">Shooting<a href="#shooting" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="planning">Planning<a href="#planning" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The setup for the shot would be to push the key light in very close to the model, while still allowing some bounce to slightly fill the shadows.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/light-setup.png" alt='Mairi Light Setup' width='640' height='905' style='max-height:100vh;'>
</figure>

<p>As noted previously, having the key light close to the model allows the rest of the scene to become much darker.  The softbox is arranged so that its face is almost completely vertical and its bottom edge is just above the model’s eyes.  This feathers the lower edge of the light falloff along the front of the model.</p>
<p>There are two main adjustments that can be made to fine-tune the image result with this setup.</p>
<p>The first is the key light distance/orientation to the subject.  This will dictate the proper exposure for the subject.  For this image the intention is to push the key light in as close as possible without being in frame.  There is also the option of angling the key light relative to the subject.  In the diagram above, the softbox is actually angled away from the subject.  The intention here was to feather the edge of the light in order to control spill onto the rest of the model (putting more emphasis on her face).</p>
<p>The second adjustment, once the key light is in a good location, is the distance from the key light and subject together, to the surrounding walls (or a reflector if one is being used).  Moving both subject and keylight closer to the side wall will increase the amount of reflected light being bounced into the shadows.</p>
<h4 id="mood-board">Mood Board<a href="#mood-board" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If possible, it can be extremely helpful to both the model and photographer to have a Mood Board available.  This is usually just a collection or collage of images that help to convey the desired feeling or desired result from the session.  For help in directing the model, the images do not necessarily need the same lighting setup.  The intention is to help the model understand what your vision is for the pose and facial expressions.</p>
<h3 id="the-shoot">The Shoot<a href="#the-shoot" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The lighting is set up and the model understands what type of look is desired, so all that’s left is to shoot the image!</p>
<figure class='big-vid'>
<a href='mairi-contact.jpg'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-contact.jpg" alt='Mairi Contact Sheet' width='960' height='685'>
</a>
</figure>

<p>In the end, I favored the last image in the sequence for the combination of the model’s head position and body language and the slight smile she has.</p>
<h2 id="postprocessing">Postprocessing<a href="#postprocessing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Having chosen the final image from the contact sheet, it’s now time to proceed with developing the image and retouching as needed.</p>
<p>If you’d like to follow along you can download the raw .ORF file: </p>
<p><a href="Mairi_Troisieme.ORF"><strong>Mairi_Troisieme.ORF</strong></a> (13MB)</p>
<p>This file is licensed <a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons By-Attribution Non-Commercial Share-Alike"><img src="https://pixls.us/articles/a-chiaroscuro-portrait/cc-by-nc-sa.png" height='15' style='display: inline; margin: 0; width: initial;'></a>
(<a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons By-Attribution Non-Commercial Share-Alike">Creative Commons, By-Attribution, Non-Commercial, Share-Alike</a>), and is the same image that I shared with everyone on the forums for a PlayRaw processing practice.  You can see how other folks approached processing this image <a href="https://discuss.pixls.us/t/playraw-mairi-troisieme/967">in the topic on discuss</a>.  If you decide to try this out for yourself, come share your results with us!</p>
<h3 id="raw-development">Raw Development<a href="#raw-development" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There are various <a href="https://pixls.us/software">Free raw processing tools</a> available and for this tutorial I will be using the wonderful <a href="http://www.darktable.org">darktable</a>.</p>
<figure>
<a href='http://www.darktable.org' title='darktable website'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/dtbg_logo.png" alt='darktable logo'>
</a>
</figure>

<h4 id="base-curve">Base Curve<a href="#base-curve" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Not surprisingly, the initial image loaded without any modifications is a bit dark and rather flat looking.  By default, darktable should have recognized that the file is from Olympus and attempted to apply a sane base curve to the linear raw data.  If it doesn’t, you can choose the preset “olympus like alternate”.</p>
<p>I found that the preset tended to crush the darkest tones a bit too much, and instead opted for a simple curve with a single point as seen here:</p>
<figure class='big-vid'>
<a href='darktable_0001.jpg'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0001.jpg" alt='darktable base curve' width='960' height='526'>
</a>
</figure>

<p>Resist the temptation to try to adjust overall exposure and contrast with the base curve.  These parameters will be adjusted shortly in the appropriate modules.  The base curve is only intended to transform the linear raw RGB to something that looks good on your output device.  The base curve will affect how the contrasts, colors, and saturation all relate in the final output.  For the purposes of this tutorial, it is enough to simply choose a preset.</p>
<p>The next series of steps focus on adjusting various exposure parameters for the image.  Conceptually they start with the most broad adjustment, exposure, then to slightly more targeted adjustments such as contrast, brightness, and saturation, then finish with targeted tonal adjustments in tone curves.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#base_curve">darktable manual: base curve</a></p>
<h4 id="exposure">Exposure<a href="#exposure" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Once the base curve is set, the next module to adjust would be the overall exposure of the image (and the black point).  This is done in the “exposure” module (below the base curve).</p>
<figure class='big-vid'>
<a href='darktable_0002.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0002.jpg" alt='darktable exposure' width='960' height='526'>
</a>
</figure>

<p>The important area to watch while adjusting the exposure is the histogram.  The image was exposed a little dark, so increase the overall exposure.  In the histogram, avoid clipping any channels by keeping them inside the visible range.  In this case, the desire is to give the model’s face a nice mid-level brightness.  The exposure can be raised until the channels begin to clip on the far right of the histogram, then brought back down a bit to leave some headroom.</p>
<p>The darkest areas of the histogram on the left are clipped a bit, so raising the black level brings back detail in the darkest shadows.  When in doubt, let the histogram guide you with data from the image, particularly around the highest and lowest values (avoid clipping if possible).</p>
<p>An easy way to think of the exposure module is that it allows the entire image exposure to be shifted along with compressing/expanding the overall range by modifying the black point.</p>
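<p>As a rough mental model (a sketch of the idea only, not darktable’s exact implementation), the module applies a black-point offset and an exposure scale to the linear values:</p>

```python
import numpy as np

def exposure_module(pixels, ev=0.0, black=0.0):
    """Rough model of an exposure adjustment on linear RGB data.

    ev:    exposure shift in stops (each +1 EV doubles the values)
    black: black point subtracted before scaling, which compresses
           or expands the overall range
    """
    return np.clip((pixels - black) * 2.0 ** ev, 0.0, None)

# A mid-gray linear value raised by one stop doubles:
print(exposure_module(np.array([0.18]), ev=1.0))  # → [0.36]
```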
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#exposure">darktable manual: exposure</a></p>
<h4 id="contrast-brightness-saturation">Contrast Brightness Saturation<a href="#contrast-brightness-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Where the Exposure module shifts the overall image values from a global perspective, modules such as the “contrast brightness saturation” allow finer tuning of the image within the range of the exposure.</p>
<p>To emphasize the model’s face, while also strengthening the interplay of shadow and light in the image, drop the brightness down to taste.  I brought the brightness down quite a bit (-0.31) to push almost all of the image below medium brightness.</p>
<figure class='big-vid'>
<a href='darktable_0003.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0003.jpg" alt='darktable contrast brightness saturation' width='960' height='526'>
</a>
</figure>

<p>Overall this helps to emphasize the model’s face over the rest of the image.  While the rest of the image is composed of various dark, neutral tones, the model’s face is not.  Pushing the saturation down as well removes much of the color from the scene and face.  This brings the skin tones back down to something slightly more natural looking, while also muting some of those tones.</p>
<figure class='big-vid'>
<a href='darktable_0004.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0004.jpg" alt='darktable contrast brightness saturation' width='960' height='526'>
</a>
</figure>

<p>The skin now looks a bit more natural but muted.  The background tones have become more neutral as well.  A very slight bump in contrast to taste finishes out this module.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#contrast_brightness_saturation">darktable manual: contrast brightness saturation</a></p>
<h4 id="tone-curve">Tone Curve<a href="#tone-curve" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>A final modification to the exposure of the image is through a tone curve adjustment.  This gives us the ability to make some slight changes to particular tonal ranges.  In this case pushing the darker tones down a bit more while boosting the upper mid and high tones.</p>
<figure class='big-vid'>
<a href='darktable_0005.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0005.jpg" alt='darktable tone curve' width='960' height='526'>
</a>
</figure>

<p>This is actually a type of contrast increase, but targeted at specific tones based on the curve.  The darkest darks (bottom of the curve) get pushed a little bit darker, which will include most of the sweater, background, and shadow side of the model’s face.  The very slight rolling boost to the lighter tones primarily allows the face to brighten up against the background even more.</p>
<p>The changes are very slight and to taste.  The tone curve is very sensitive to changes, and often only very small modifications are required to achieve a given result.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s02.html.php#tone_curve">darktable manual: tone curve</a></p>
<h4 id="sharpen">Sharpen<a href="#sharpen" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>By default the sharpen module applies a small amount of sharpening to the image.  The module uses an unsharp mask for sharpening, so the radius parameter is the blur radius fed into the unsharp mask.  I wanted to lightly sharpen very fine details, so I set the radius to ~1, with an amount around 0.9 and no threshold.  This produced results that are very hard to distinguish from the default settings, but appears to sharpen smaller structures just slightly more.</p>
<figure class='big-vid'>
<a href='darktable_0006.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0006.jpg" alt='darktable exposure' width='960' height='526'>
</a>
</figure>

<p>I personally get a final sharpening step as a side effect of using wavelet decompose for skin retouching later in the process with <a href="https://www.gimp.org">GIMP</a>.  As such, I am not usually as concerned about sharpening here.  If I were, the equalizer module offers better wavelet-based control over sharpening.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s04.html.php#sharpen">darktable manual: sharpen</a></p>
<h4 id="denoise-profiled-">Denoise (profiled)<a href="#denoise-profiled-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The darktable team and its users have profiled the noise of many different cameras at various ISOs, building a statistical model of noise versus brightness across the three color channels.  Using these profiles, darktable can do a better job of efficiently denoising images.  In the case of my camera (Olympus OM-D E-M5), a profile had already been captured for ISO200.</p>
<figure class='big-vid'>
<a href='darktable_0007.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0007.jpg" alt='darktable denoise profiled' width='960' height='526'>
</a>
</figure>

<p>In this case, the chroma noise wasn’t too bad, and a very slight reduction in luma noise would be sufficient for the image.  As such, I used a non-local means with a large patch size (to retain sharpness) and a low strength.  This was all applied uniformly against the HSV lightness option.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s04.html.php#denoise_profiled">darktable manual: denoise - profiled</a></p>
<h4 id="export">Export<a href="#export" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Finally!  The image tones and exposure are in a desirable state, so export the results to a new file.  I tend to use either TIF or PNG at 16 bit, in case I want to work in a full 16 bit workflow with the latest <a href="https://www.gimp.org">GIMP</a>, now or in the future.</p>
<h3 id="gimp">GIMP<a href="#gimp" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>When there are still some pixel-level modifications that need to be done to the image, the go-to software is <a href="https://www.gimp.org">GIMP</a>.</p>
<ul>
<li>Skin retouching</li>
<li>Spot healing/touchups</li>
<li>Background rebuild</li>
</ul>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wilber-big.png" alt='GIMP - GNU Image Manipulation Program <3' width='300' height='224'>
</figure>


<h4 id="skin-retouching-with-wavelet-decompose">Skin Retouching with Wavelet Decompose<a href="#skin-retouching-with-wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This step is not always needed, but who doesn’t want their skin to look a little nicer if possible?</p>
<p>The ability to modify an image based on detail scales isolated on their own layers is a very powerful tool.  The approach is similar to frequency separation, but has the advantage of providing multiple frequencies, at progressively larger detail scales, that can be modified independently.  This offers a large amount of flexibility and an easier workflow than frequency separation (you can work on any detail scale simply by switching to a different layer).</p>
<p>I used to use the wonderful <a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a> plugin from marcor on the GIMP plugin registry.  I have since switched to using the same result from <a href="http://gmic.eu">G’MIC</a> once David Tschumperlé added it in for me.  It can be found in G’MIC under:</p>
<p class='Cmd'>Details &rarr; Split details [wavelets]</p>

<p>Running <strong>Split details [wavelets]</strong> against the image to produce 5 wavelet scales and a residual layer yields (cropped):</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wavelets-example.jpg" alt='Wavelet scales example decompose' width='640' height='960'>
</figure>

<p>The plugin (or script) will produce 5 layers of isolated details plus a residual layer of low-frequency color information, seen here in ascending order of detail scale.  The details in the finest scales (1 &amp; 2) are quite fine and can be hard to discern.</p>
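<p>Conceptually, each detail layer is the difference between two successively stronger blurs of the image, and the residual is the final blur.  A minimal sketch of the idea, using Gaussian blurs as a stand-in for the plugin’s actual wavelet kernel:</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_details(img, scales=5):
    """Split an image into detail layers plus a low-frequency residual.

    Each detail layer is the difference between two successive blurs,
    so summing all the layers and the residual reconstructs the image.
    """
    layers, current = [], img.astype(float)
    for i in range(scales):
        blurred = gaussian_filter(current, sigma=2.0 ** i)
        layers.append(current - blurred)  # detail at this scale
        current = blurred
    return layers, current  # (details, residual)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
details, residual = split_details(img)
print(np.allclose(sum(details) + residual, img))  # → True
```

<p>The reconstruction property is what makes the technique non-destructive: editing one detail layer and flattening recombines everything else untouched.</p>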
<p>To help visualize what the different scale levels look like, here is a view of the same levels, normalized:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wavelets-example-normalized.jpg" alt='Wavelet scales normalized' width='640' height='960'>
</figure>

<p>The normalized view shows clearly the various types of detail scales on each layer.</p>
<p>There are various types of changes that can be made to the final image from these detail scales.  In this image, we are going to focus on evening out the skin tones overall.  The scales with the biggest impact on even skin tones for this image are 4 and 5.</p>
<p>A good workflow when smoothing overall skin tones with wavelet scales is to start at the largest detail scales and work down to the finer ones.  Usually, a pleasing amount of tonal smoothing can be accomplished in the first couple of coarse detail scales.</p>
<h4 id="skin-retouching-zones">Skin Retouching Zones<a href="#skin-retouching-zones" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Different portions of a face will often require different levels of smoothing.  Below is a rough map of facial contours to consider when retouching.  Not all faces will require the exact same regions, but it is a good starting point to consider when approaching a new image.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/skin-zones.jpg" alt='Skin retouching by zones' width='640' height='742'>
</figure>

<p>The selections are made with the Free Select Tool with the “Feather edges” option on and set to roughly 30px.</p>
<h4 id="smoothing">Smoothing<a href="#smoothing" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>A good starting point to consider is the forehead on the largest detail scale (5).  The basic workflow is to select a region of interest and a layer of detail, then to suppress the features on that detail level.  The method of suppressing features is a matter of personal taste but is usually done across the entire selection using a blur filter of some sort.</p>
<p>A good first choice would be to use a gaussian blur (or Selective Gaussian Blur) to smooth the selection.  A better choice, if G’MIC is installed, is to use a bilateral blur for its edge-preserving properties.  The rest of these examples will use the bilateral blur for smoothing.</p>
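<p>The bilateral blur is a weighted average in which the weights fall off with both spatial distance and difference in pixel value, which is what preserves edges.  A naive single-channel sketch (assuming the filter’s “variance” parameters are Gaussian variances; G’MIC’s real implementation is far more optimized):</p>

```python
import numpy as np

def bilateral_blur(img, spatial_var=10.0, value_var=7.0, iterations=2):
    """Naive bilateral filter on a 2-D float array.

    Pixels are averaged with their neighbors, weighted by spatial
    closeness (Gaussian, variance spatial_var) and by similarity in
    value (Gaussian, variance value_var), so strong edges survive.
    """
    r = int(3 * spatial_var ** 0.5)  # neighborhood radius, ~3 sigma
    out = img.astype(float)
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, r, mode="edge")
        acc = np.zeros_like(out)
        norm = np.zeros_like(out)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                shifted = padded[r + dy : r + dy + h, r + dx : r + dx + w]
                weight = np.exp(-(dy * dy + dx * dx) / (2 * spatial_var)
                                - (shifted - out) ** 2 / (2 * value_var))
                acc += weight * shifted
                norm += weight
        out = acc / norm
    return out
```

<p>When a feathered selection is active, GIMP applies the filter only within that selection, which is how the per-zone workflow below stays local.</p>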
<p>Considering the forehead region:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/forehead-orig-scale5-4.jpg" alt='Sking retouching wavelet scales forehead' width='640' height='1397'>
</figure>

<p>The first image is the original.  The second image is after running a bilateral blur (in G’MIC: Smooth [bilateral]), with the default parameter values:</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<p>These values were chosen from experience using this filter for the same purpose across many, many images.  The results of running a single blur on the largest wavelet scale are immediately obvious.  The unevenness of the skin and tones overall is smoothed in a pleasing way, while still retaining the finer details that allow the eye to see a realistic skin texture.</p>
<p>The last image is the result of working on the next detail scale layer down (Wavelet scale 4), with much softer blur parameters:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>This pass does a good job of finishing off the skin tones globally.  The overall impression of the skin is much smoother than the original, but crucial fine details are all left intact (wrinkles, pores) to keep it looking realistic.</p>
<p>This same process is repeated for each of the facial regions described.  In some cases, the result of running the first bilateral blur on the largest scale level is enough to even out the tones (the cheeks and upper lip, for example).  The chin got the same treatment as the forehead.  The process is entirely subjective, and the parameters will vary from person to person.  Experimentation is encouraged here.</p>
<p>More importantly, the key word to consider while working on skin tones is <strong><em>moderation</em></strong>.  It is also important to check your results zoomed out, as this will give you an impression of the image as seen when scaled to something more web-sized.  A good rule of thumb might be: </p>
<blockquote>
<p>“If it looks good to you, go back and reduce the effect more”.</p>
</blockquote>
<p>The original vs. results after wavelet smoothing:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/face-wavelet.jpg" alt='Mairi Face Wavelet' data-swap-src='face-original.jpg' width='640' height='741'>
<figcaption>
Wavelet Smoothed.<br>
Click to compare original
</figcaption>
</figure>

<noscript>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/face-original.jpg" alt='Mairi Face Original' width='640' height='741'>
<figcaption>
Original
</figcaption>
</figure>
</noscript>

<p>When the work is finished on the wavelet scales, a new layer from all of the visible layers can be created to continue touching up spot areas that may need it.</p>
<p class='Cmd'>Layer → New from Visible</p>


<h4 id="spot-touchups">Spot Touchups<a href="#spot-touchups" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The use of wavelets is good for smoothing large selection areas, but a different set of tools is required for spot touchups where needed.  For example, there is a stray hair that runs across the model’s forehead that can be removed using the Heal tool.</p>
<p>For best results when using the Heal tool, use a hard-edged brush.  Soft edges can sometimes produce an undesirable smearing in the feathered edge of the brush.  Due to the way the heal algorithm samples, it is also advisable to avoid healing across hard, contrasty edges.</p>
<p>This is also a good tool to use for small blemishes that might have been tedious to repair across all of the wavelet scales from the previous section.  This is also a good time to repair hot-spots, fly-away hairs, or other small details.</p>
<h4 id="sweater-enhancement">Sweater Enhancement<a href="#sweater-enhancement" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The model is wearing a nicely textured sweater, but the details and texture are slightly muted.  A small increase in contrast and local details will help to enhance the textures and tones.  One method of enhancing local details is to use the Unsharp Mask filter with a high radius and low amount (“HiRaLoAm” is an acronym some might use for this).</p>
<p>Create a duplicate of the “Spot Healing” layer that was worked on in the previous step, and apply an Unsharp Mask to the layer using HiRaLoAm values.</p>
<p>For example, a good starting point for parameters might be:</p>
<ul>
<li>Radius: 200</li>
<li>Amount: 0.25</li>
</ul>
<p>With these parameters the sharpen function will instead tend to increase local contrast more, providing more “presence” or “pop” to the sweater texture.</p>
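<p>An unsharp mask simply adds back a scaled difference between the image and a blurred copy of it.  With a high radius, that difference captures broad tonal structure rather than fine edges, which is why a low amount reads as local contrast.  A minimal sketch:</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=200.0, amount=0.25):
    """Unsharp mask: add back the detail lost to a Gaussian blur.

    With a large radius the 'detail' is broad tonal structure, so a
    small amount boosts local contrast rather than edge sharpness.
    """
    blurred = gaussian_filter(img.astype(float), sigma=radius)
    return img + amount * (img - blurred)
```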
<h4 id="background-rebuild">Background Rebuild<a href="#background-rebuild" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The background of the image is a little too uniformly dark and could benefit from some lightening and variation.  A nice lighter background gradient will enhance the subject a little.</p>
<p>Normally this could be obtained through the use of a second strobe (probably gridded or with a snoot) firing at the background.  In our case we will have to fake the same result through some masking.</p>
<p>First, a crop is chosen to focus the composition more strongly on the subject.  I placed the center of the model’s face along the right-side golden section vertical and tried to keep things near the center of the frame:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-cropped.jpg" alt='Mairi cropped' width='640' height='800'>
</figure>

<p>The slightly centered crop is meant to emulate the type of crop that might be expected from a classical painting (thereby further strengthening the overall theme of the portrait).</p>
<h4 id="subject-isolation">Subject Isolation<a href="#subject-isolation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are a few different methods to approach the background modification.  The method I describe here is simply one of them.</p>
<p>The image at this point is duplicated, and the duplicate has its levels raised to brighten it up considerably.  This way, a simple layer mask can control the brightness and where it occurs in the image.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation.jpg" alt='Mairi isolation' width='640' height='799'>
</figure>

<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-layers.png" alt='Mairi isolation layers' width='259' height='286'>
</figure>

<p>This is what will give our background a gradient of light.  Getting our subject back to dark requires masking her out on a layer mask.  A quick way to get a mask to work from is to add a layer mask to the “Over” layer, letting the background show through but turning the subject opaque.</p>
<p>Add a layer mask to the “Over” layer as a “Grayscale copy of layer”, and check the “Invert mask” option:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-add-layer-mask.png" alt='Mairi isolation add layer mask' width='297' height='383'>
</figure>

<p>With an initial mask in place, a quick use of the tool:</p>
<p class='Cmd'>Colors &rarr; Threshold</p>

<p>will allow you to modify the mask to define the shoulder of the model as a good transition.  The mask will be quite narrow.  Adjust the threshold until the lighter background is speckle-free and there is a good definition of the edge of the sweater against the background.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-threshold.jpg" alt='Mairi threshold' width='640' height='311'>
</figure>

<p>Once the initial mask is in place, it can be cleaned up further by making the subject entirely opaque (white on the mask) and the background fully transparent (black on the mask).  This can be done easily with the paint tools.  With not much work, a decent mask and result can be had:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-final.jpg" alt='Mairi isolation final' width='640' height='799'>
</figure>

<p>This provides a nice contrast: the background is lighter behind the darker portions of the model, and darker behind the subject’s lighter face.</p>
<h4 id="lighten-face-highlights">Lighten Face Highlights<a href="#lighten-face-highlights" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Speaking of the subject’s face, there’s a nice simple method for applying a small accent to the highlighted portions of the model’s face in order to draw more attention to her.</p>
<p>Duplicate the lightened layer that was used to create the background gradient, move it to the top of the layer stack, and remove the layer mask from it.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lighten-layers.png" alt='Mairi Lighten Face Layers' width='258' height='282'>
</figure>

<p>Set the layer mode of the copied layer to “Lighten only”.</p>
<p>As before, add a new layer mask to it, “Grayscale copy of layer”, but don’t check the “Invert mask” option.  This time use the Levels tool:</p>
<p class='Cmd'>Colors → Levels</p>

<p>to raise the blacks of the mask up to about mid-way or more.  This isolates the lightening to the brightest tones in the image, which happen to correspond to the model’s face.  You should see your adjustments modify the mask on-canvas in real time.  When you are happy with the highlights, apply.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lighten.jpg" alt='Mairi Lighten Highlights' width='640' height='799'>
</figure>
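<p>The layer stack just described amounts to a per-pixel “lighten only” composite gated by a levels-raised mask.  A rough sketch (assuming single-channel float images in [0, 1]; GIMP’s actual compositing handles color and precision differently):</p>

```python
import numpy as np

def lighten_highlights(base, lightened, black_level=0.5):
    """Composite a lightened copy over base in 'Lighten only' mode,
    masked by a levels-raised grayscale copy of the layer itself.
    """
    # Levels: raise the black point so only the brightest tones
    # (the lit side of the face) keep any opacity in the mask.
    mask = np.clip((lightened - black_level) / (1.0 - black_level), 0.0, 1.0)
    lighten_only = np.maximum(base, lightened)  # 'Lighten only' blend
    return base * (1.0 - mask) + lighten_only * mask
```

<p>Dark pixels get a zero mask and pass through untouched, which is why the accent lands only on the highlights.</p>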


<h4 id="last-sharpening-pass-grain">Last Sharpening Pass + Grain<a href="#last-sharpening-pass-grain" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Finally, I like to apply a last pass of sharpening to the image, and to overlay some grain from a grain field I keep, to help add some structure to the image as well as mask any gradient issues from rebuilding the background.  For this particular image the grain step isn’t really needed, as there’s already sufficient luma noise to provide its own structure.</p>
<p>Usually, I will use the smallest of the wavelet scales from the prior steps and sometimes the next largest scale as well (Wavelet scale 1 &amp; 2).  I’ll leave Wavelet scale 1 at 100% opacity, and scale 2 usually around 50% opacity (to taste, of course).</p>
<figure class='big-vid'>
<a href='mairi-final.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-final_960.jpg" alt='Mairi Final' style='max-height: 100vh;' width='862' height='1077'>
</a>
</figure>

<p>Minor touchups that could still be done might include darkening the chair in the bottom right corner, darkening the gradient in the bottom left corner, and possibly adding a slight white overlay to the eyes to subtly give them a small pop.</p>
<p>As it stands now I think the image is a decent representation of a chiaroscuro portrait that mimics the style of a classical composition and interplay between light and shadows across the subject.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[HD Photo Slideshow with Blender]]></title>
            <link>https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/</guid>
            <pubDate>Tue, 12 Jul 2016 13:36:55 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/beck-roses.jpg" /><br/>
                <h1>HD Photo Slideshow with Blender</h1> 
                <h2>Because who doesn't love a challenge?</h2>  
                <p>While I was out at <a href="http://2016.texaslinuxfest.org/">Texas Linux Fest</a> this past weekend I got to watch a fun presentation from the one and only <a href="https://twitter.com/designbybeck">Brian Beck</a>.  He walked through an introduction to <a href="http://www.blender.org">Blender</a>, including an overview of creating his great <em>The Lady in the Roses</em> image that was a part of the <a href="http://librecal2015.libreart.info/en/">2015 Libre Calendar</a> project.</p>
<p>Coincidentally, during my trip home community member <a href="https://discuss.pixls.us/users/Fotonut/">@Fotonut</a> asked about software to create an HD slideshow with images.  The first answer that jumped into my mind was to consider using <a href="http://www.blender.org">Blender</a> (a very close second was <a href="http://www.openshot.org/">OpenShot</a> because I had just spent some time talking with Jon Thomas about it).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/beck-roses.jpg" alt='Brian Beck Roses' width='640' height='453'>
<figcaption>
<em>The Lady in the Roses</em> by Brian Beck <a class='cc' href='https://creativecommons.org/licenses/by/4.0/' title='Creative Commons By-Attribution 4.0'>cba</a>
</figcaption>
</figure>

<p>I figured this much Blender being talked about deserved at least a post to answer <a href="https://discuss.pixls.us/users/Fotonut/">@Fotonut</a>’s question in greater detail.  I know that many community members likely abuse Blender in various ways as well &ndash; so please let me know if I get something way off!</p>
<h2 id="enter-blender"><a href="#enter-blender" class="header-link-alt">Enter Blender</a></h2>
<p>The reason that Blender was the first thing that popped into many folks’ minds when the question was posed is likely because it has been a go-to Swiss Army knife of image and video creation for a long, long time.  For some it was the only viable video editing application for heavy use (not that there weren’t other projects out there as well).  This is partly due to the fact that it integrates so much capability into a single project.</p>
<p>The part that we’re interested in for the context of Fotonut’s original question is the <a href="https://www.blender.org/manual/de/editors/sequencer/">Video Sequence Editor</a> (VSE).  This is a very powerful (though often neglected) part of Blender that lets you arrange audio and video (and image!) assets along a timeline for rendering and some simple effects.  Which is actually perfect for creating a simple HD slideshow of images, as we’ll see.</p>
<h3 id="the-plan"><a href="#the-plan" class="header-link-alt">The Plan</a></h3>
<p>Blender’s interface is likely to take some getting used to for newcomers (right-click!), but we’ll be focusing on a <em>very</em> small subset of the overall program&mdash;so hopefully nobody gets lost.  The overall plan will be:</p>
<ol>
<li>Setup the environment for video sequence editing</li>
<li>Include assets (images) and how to manipulate them on the timeline</li>
<li>Add effects such as cross-fades between images</li>
<li>Setup exporting options</li>
</ol>
<p>There’s also an option of using a very helpful add-on for automatically resizing images to the correct size to maintain their aspect ratios. Luckily, Blender’s add-on system makes it trivially easy to set up.</p>
<h3 id="setup"><a href="#setup" class="header-link-alt">Setup</a></h3>
<p>On opening Blender for the first time we’re presented with the comforting view of the default cube in 3D space.  Don’t get too cozy, though.  We’re about to switch up to a different screen layout that’s already been created for us by default for Video Editing.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/main-window.jpg" alt='Blender default main window' width='960' height='540'>
<figcaption>
The main Blender default view.
</figcaption>
</figure>

<p>The developers were nice enough to include various default “Screen Layout” options for different tasks, and one of them happens to be for <em>Video Editing</em>.  We can click on the screen layout option on the top menu bar and choose the one we want from the list (<em>Video Editing</em>):</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/screen-layout.jpg" alt='Blender screen layout options' width='960' height='540'>
<figcaption>
Choosing a new Screen Layout option.
</figcaption>
</figure>

<p>Our screen will then change to the new layout where the top left pane is the F-curve window, the top right is the video preview, the large center section is the sequencer, and the very bottom is a timeline.  Blender will let you arrange, combine, and collapse all the various panes into just about any layout that you might want, including changing what each of them is showing.  For our example we will <em>mostly</em> leave it all as-is with the exception of the F-curve pane, which we won’t be using and don’t need.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/video-editing-layout.jpg" alt='Blender video editing layout' width='960' height='540'>
<figcaption>
The Video Editing default layout.
</figcaption>
</figure>

<p>What we can do now is to define what the resolution and framerate of our project should be.  This is done in the <strong>Properties</strong> pane, which isn’t shown right now.  So we will change the <strong>F-Curve</strong> pane into the <strong>Properties</strong> pane by clicking on the button shown in red above to change the panel type.  We want to choose <strong>Properties</strong> from the options in the list:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/change-to-properties.jpg" alt='Blender change pane to properties' width='601' height='528'>
</figure>

<p>Which will turn the old F-Curve pane into the <strong>Properties</strong> pane:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/properties.jpg" alt='Blender properties' width='569' height='373'>
</figure>


<p>You’ll want to set the appropriate X and Y resolution for your intended output (don’t forget to raise the scaling from the default 50% to 100% now too), as well as your intended framerate.  Common rates might be 23.976 (23.98), 25, 30, or even 60 frames per second.  If your intended target is something like YouTube or an HD television you can probably safely use 30 or 60 (just remember that a higher frame rate means a longer render time!).</p>
<p>For our example I’m going to set the output resolution to 1920&nbsp;&times;&nbsp;1080 at 30fps.</p>
<h4 id="one-extra-thing"><a href="#one-extra-thing" class="header-link-alt">One Extra Thing</a></h4>
<p>Blender does need a little bit of help when it comes to using images on the sequence editor.  It has a habit of scaling images to whatever the output resolution is set to (ignoring the original aspect ratios). This can be fixed by simply applying a transform to the images but normally requires us to manually compute and enter the correct scaling factors to get the images back to their original aspect ratios.</p>
<p>I did find a nice small add-on <a href="http://blenderartists.org/forum/showthread.php?280731-VSE-Transform-tool">on this thread</a> at <a href="http://blenderartists.org">blenderartists.org</a> that binds some handy shortcuts onto the VSE for us. The author kgeogeo has the add-on <a href="https://github.com/kgeogeo/VSE_Transform_Tools">hosted on Github</a>, and you can download the <a href="http://www.python.org">Python</a> file directly from here: <a href="https://raw.githubusercontent.com/kgeogeo/VSE_Transform_Tools/master/VSE_Transform_Tool.py">VSE Transform Tool</a> (you can <strong>Right-Click</strong> and save the link).  Save the .py file somewhere easy to find.</p>
<p>To load the add-on manually we’re going to change the <strong>Properties</strong> panel to <strong>User Preferences</strong>:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/change-to-pref.jpg" alt='Blender change to preferences' width='568' height='538'>
</figure>

<p>Click on the <strong>Add-ons</strong> tab to open that window and at the bottom of the panel is an option to “Install from File…”.  Click that and navigate to the <code>VSE_Transform_Tool.py</code> file that you downloaded previously.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-ons.jpg" alt='Blender add-ons' width='570' height='423'>
</figure>

<p>Once loaded, you’ll still need to <em>Activate</em> the plugin by clicking on the box:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-addon.jpg" alt='Blender adding add-ons' width='570' height='398'>
</figure>

<p>That’s it!  You’re now all set up to begin adding images and creating a slideshow.  You can set the <strong>User Preferences</strong> pane back to <strong>Properties</strong> if you want to.</p>
<h3 id="adding-images"><a href="#adding-images" class="header-link-alt">Adding Images</a></h3>
<p>Let’s have a look at adding images onto the sequencer.</p>
<p>You can add images by either choosing <strong>Add &rarr; Image</strong> from the VSE menu and navigating to your images location, choosing them:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-image.jpg" alt='Blender VSE add image' width='585' height='276'>
</figure>

<p>Or by drag-and-dropping your images onto the sequencer timeline from Nautilus, Finder, Explorer, etc…</p>
<p>When you do, you’ll find that a strip now appears on the VSE window (purple in my case) that represents your image.  You should also see a preview of your video in the top-right preview window (sorry for the subject).</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-first-image.jpg" alt='Blender VSE add image' width='960' height='540'>
</figure>

<p>At this point we can use the handy add-on we installed previously by <strong>Right-Clicking</strong> on the purple strip to make sure it’s activated and then hitting the “T” key on the keyboard.  This will automatically add a transform to the image that scales it to the correct aspect ratio for you.  A small green <em>Transform</em> strip will appear above your purple image strip now:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-transform.jpg" alt='Blender VSE add transform strip' width='327' height='276'>
</figure>

<p>Your image should now also be scaled to fit at the correct aspect ratio.</p>
<h4 id="adjusting-the-image"><a href="#adjusting-the-image" class="header-link-alt">Adjusting the Image</a></h4>
<p>If you scroll your mouse wheel in the VSE window, you will zoom in and out along time (the x-axis in the sequencer window). You’ll notice that the timeline compresses or expands as you scroll the mouse wheel.</p>
<p>The middle-mouse button will let you pan around the sequencer.</p>
<p>The right-mouse button will select things.  You can try this now by extending how long your image is displayed in the video. <strong>Right-Click</strong> on the small arrow on the end of the purple strip to activate it.  A small number will appear above it indicating which frame it is currently on (26 in my example):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/select-right.jpg" alt='Blender VSE' width='468' height='203'>
</figure>

<p>With the right handle active you can now either press “G” on the keyboard and drag the mouse to re-position the end of the strip, or <strong>Right-Click</strong> and drag to do the same thing. The timeline in seconds is shown along the bottom of the window for reference.  If we wanted to let the image be visible for 5 seconds total, we could drag the end to the 5+00 mark on the sequencer window.</p>
<p>Since I set the framerate to 30 frames per second, I can also drag the end to frame 150 (30fps * 5s = 150 frames).</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/five-seconds.jpg" alt='Blender VSE five seconds' width='582' height='170'>
</figure>

<p>When you drag the image strip, the transform strip will automatically adjust to fit (so you don’t have to worry about it).</p>
<p>If you had selected the center of the image strip instead of the handle on one end and tried to move it, you would find that you can move the entire strip around instead of one end.  This is how you can re-position image strips, which you may want to do when you add a second image to your sequencer.</p>
<p>Add a new image to your sequencer now following the same steps as above.</p>
<p>When I do, it adds a new strip back at the beginning of the timeline (basically where the current time is set):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image.jpg" alt='Blender VSE second image' width='624' height='211'>
</figure>

<p>I want to move this new strip so that it overlaps my first image by about half a second (or 15 frames).  Then I will pull the right handle to resize the display time to about 5 seconds also.</p>
<p>Click on the new strip (center, not the ends), and press the “G” key to move it.  Drag it right until the left side overlaps the previous image strip by a little bit:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image-drag.jpg" alt='Blender VSE drag strip' width='560' height='196'>
</figure>

<p>When you click on the strip’s right handle to modify its length, notice the window on the far right of the VSE.  The <strong>Edit Strip</strong> window should also show the strip “Length” parameter in case you want to change it by manually inputting a value (like 150):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image-edit.jpg" alt='Blender VSE adjust strip' width='600' height='250'>
</figure>

<p>I forgot to use the add-on to automatically fix the aspect ratio.  With the strip selected I can press “T” at any time to invoke the add-on and fix the aspect ratio.</p>
<h3 id="adding-a-transition-effect"><a href="#adding-a-transition-effect" class="header-link-alt">Adding a Transition Effect</a></h3>
<p>With the two image strips slightly overlapping, we now want to define a simple cross fade between the two images as a transition effect.  This is actually something already built into the Blender VSE for us, and is easy to add.  We <em>do</em> need to be careful to select the right things to get the transition working correctly, though.</p>
<p>Once you’ve added a transform effect to a strip, you’ll need to make sure that subsequent operations use the <em>transform</em> strip as opposed to the original image strip.</p>
<p>For instance, to add a cross fade transition between these two images, click the first image’s transform strip (green), then <strong>Shift-Click</strong> on the second image’s transform strip (green). Now they are both selected, so add a <em>Gamma Cross</em> by using the <strong>Add</strong> menu in the VSE (Add &rarr; Effect Strip… &rarr; Gamma Cross):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-gamma-cross.jpg" alt='Blender VSE add gamma cross' width='600' height='531'>
</figure>

<p>This will add a <em>Gamma Cross</em> effect as a new strip that is locked to the two images overlap.  It will do a cross-fade between the two images for the duration of the overlap.  You can <strong>Left-Click</strong> now and scrub over the cross-fade strip to see it rendered in the preview window if you’d like:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/gamma-cross-applied.jpg" alt='Blender Gamma Cross' width='500' height='442'>
</figure>

<p>At any time you can also use the hotkey “Alt-A” to view a render preview.  This may run slow if your machine is not super-fast, but it should run enough to give you a general sense of what you’ll get.</p>
<p>If you want to modify the transition effect by changing its length, you can just increase the overlap between the strips as desired (using the original image strip &mdash; if you try to drag the transform strip you’ll find it locked to the original image strip and won’t move).</p>
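<p>My understanding of the “Gamma” part (an assumption worth checking against the Blender source) is that the fade is blended in linear light rather than directly on the gamma-encoded pixel values, which avoids the apparent dimming mid-fade that a naive crossfade can produce. A sketch for a single pixel value:</p>

```python
def gamma_cross(a, b, t, gamma=2.2):
    """Cross-fade between pixel values a and b (both in [0, 1]) at
    mix factor t, blending in linear light: decode, mix, re-encode.
    An approximation of the idea, not Blender's exact math."""
    decode = lambda v: v ** gamma          # to linear light
    encode = lambda v: v ** (1.0 / gamma)  # back to encoded values
    return encode((1.0 - t) * decode(a) + t * decode(b))

# Halfway through the fade the result stays brighter than the
# naive gamma-space midpoint of 0.5:
print(round(gamma_cross(0.2, 0.8, 0.5), 3))  # 0.596
```

<p>At t=0 and t=1 it reproduces the two inputs exactly, so only the in-between frames differ from a plain crossfade.</p>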
<h4 id="repeat-repeat"><a href="#repeat-repeat" class="header-link-alt">Repeat Repeat</a></h4>
<p>You can basically follow these same steps for as many images as you’d like to include.</p>
<h3 id="exporting"><a href="#exporting" class="header-link-alt">Exporting</a></h3>
<p>To generate your output you’ll still need to change a couple of things to get what you want…</p>
<h4 id="render-length"><a href="#render-length" class="header-link-alt">Render Length</a></h4>
<p>You may notice on the VSE that there are vertical lines outside of which things will appear slightly grayed out.  This is a visual indicator of the total start/end of the output.  This is controlled via the <strong>Start</strong> and <strong>End</strong> frame settings on the timeline (bottom pane):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/start-end.jpg" alt='Blender VSE start and end' width='640' height='201'>
</figure>

<p>You’ll need to set the <strong>End</strong> value to match your last output frame from your video sequence.  You can find this value by selecting the last strip in your sequence and pressing the “G” key: the start/end frame numbers of that last strip will be visible (you’ll want the last frame value, of course).</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/last-frame.jpg" alt='Blender VSE end frame' width='509' height='299'>
<figcaption>
Current last frame of my video is 284
</figcaption>
</figure>

<p>In my example above, my anticipated last frame should be 284, but the last render frame is currently set to 250.  I would need to update that <strong>End</strong> frame to match my video to get output as expected.</p>
<h4 id="render-format"><a href="#render-format" class="header-link-alt">Render Format</a></h4>
<p>Back on the <strong>Properties</strong> panel (assuming you set the top-left panel back to <strong>Properties</strong> earlier&mdash;if not do so now), if we scroll down a bit we should see a section dedicated to <em>Output</em>.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/output-options.jpg" alt='Blender Properties Output Options' width='570' height='374'>
</figure>

<p>You can change the various output options here to do frame-by-frame dumps or to encode everything into a video container of some sort. You can set the output directory to be something different if you don’t want it rendered into /tmp here.</p>
<p>For my example I will encode the video with <a href="https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC">H.264</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/output-h264.jpg" alt='Blender output h264' width='585' height='347'>
</figure>

<p>By choosing this option, Blender will then expose a new section of the <strong>Properties</strong> panel for setting the <em>Encoding</em> options:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/encoding-panel.jpg" alt='Blender output encoding options' width='570' height='347'>
</figure>

<p>I will often use the H264 preset and will enable the <em>Lossless Output</em> checkbox option. If I don’t have the disk space to spare I can also set different options to shrink the resulting filesize down further.  The <em>Bitrate</em> option will have the largest effect on final file size and image quality.</p>
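<p>A back-of-the-envelope calculation shows why bitrate dominates: the video stream’s size is essentially bitrate times duration (illustrative arithmetic only, ignoring audio and container overhead):</p>

```python
def estimate_size_mb(bitrate_kbps, seconds):
    """Rough video stream size in megabytes: bitrate (kilobits per
    second) times duration, divided by 8 bits per byte."""
    return bitrate_kbps * 1000 * seconds / 8 / 1e6

# A one-minute slideshow at a 6000 kbps bitrate:
print(estimate_size_mb(6000, 60))  # 45.0
```

<p>Halving the bitrate halves the file, which is why it is the first knob to reach for when disk space matters.</p>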
<p>When everything is ready (or you just want to test it out), you can render your output by scrolling back to the top of the <strong>Properties</strong> window and pressing the <em>Animation</em> button, or by hitting <strong>Ctrl-F12</strong>.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/render-button.jpg" alt='Blender Render Button' width='570' height='374'>
</figure>


<h3 id="the-results"><a href="#the-results" class="header-link-alt">The Results</a></h3>
<p>After adding portraits of all of the GIMP team from LGM London and adding gamma cross fade transitions, here are my results:</p>
<div class='big-vid'>
<iframe width="853" height="480" src="https://www.youtube-nocookie.com/embed/i56iRHp9mkk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br></p>
<h2 id="in-summary"><a href="#in-summary" class="header-link-alt">In Summary</a></h2>
<p>This may seem overly complicated, but in reality much of what I covered here is the setup to get started and the settings for output.  Once you’ve done this successfully it becomes pretty quick to use.  One thing you can do is set up the environment the way you like it and then save the .blend file to use as a template for further work like this in the future.  The next time you need to generate a slideshow you’ll have everything all ready to go and will only need to start adding images to the editor.</p>
<p>While looking for information on some VSE shortcuts I <em>did</em> run across a really interesting looking set of functions that I want to try out: <a href="http://blendervelvets.org/">the Blender Velvets</a>. I’m going to go off and give it a good look when I get a chance as there’s quite a few interesting additions available. </p>
<p>For Blender users: did I miss anything?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Texas Linux Fest 2016]]></title>
            <link>https://pixls.us/blog/2016/07/texas-linux-fest-2016/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/07/texas-linux-fest-2016/</guid>
            <pubDate>Mon, 04 Jul 2016 11:48:16 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/07/texas-linux-fest-2016/txlf-lede.png" /><br/>
                <h1>Texas Linux Fest 2016</h1> 
                <h2>Everything's Bigger in Texas!</h2>  
                <p>While in London this past April I got a chance to hang out a bit with <a href="https://lwn.net/">LWN.net</a> editor and fellow countryman, <a href="https://plus.google.com/110044519468273778141">Nathan Willis</a>.  (It sounds like the setup for a bad joke: <em>“An Alabamian and Texan meet in a London pub…”</em>). Which was awesome because even though we were both at LGM2014, we never got a chance to sit down and chat.</p>
<!-- more -->
<p>So it was super-exciting for me to hear from Nate about possibly doing a photowalk and Free Software photo workshop at the <a href="http://2016.texaslinuxfest.org/">2016 Texas Linux Fest</a>, and as soon as I cleared it with my boss, I agreed!</p>
<figure>
<img src="https://pixls.us/blog/2016/07/texas-linux-fest-2016/dot-eyes-open.jpg" alt='Dot at LGM 2014'>
<figcaption>
My Boss</figcaption>
</figure>

<p><em><strong>So…</strong> mosey on down</em> to Austin, Texas on July 8-9 for <a href="http://2016.texaslinuxfest.org/">Texas Linux Fest</a> and join <a href="http://www.shallowsky.com/">Akkana Peck</a> and myself for a photowalk first thing of the morning on Friday (July 8) to be immediately followed by workshops from both of us.  I’ll be talking about Free Software photography workflows and projects and Akkana will be focusing on a GIMP workshop.</p>
<p>This is part of a larger “Open Graphics” track on the entire first day that also includes <a href="http://gould.cx/ted/">Ted Gould</a> creating technical diagrams using <a href="https://inkscape.org/">Inkscape</a>, <a href="http://2016.texaslinuxfest.org/node/103">Brian Beck</a> doing a <a href="http://www.blender.org">Blender</a> tutorial, and <a href="http://2016.texaslinuxfest.org/node/55">Jonathon Thomas</a> showing off <a href="http://www.openshot.org/">OpenShot 2.0</a>.  You can find the <a href="http://2016.texaslinuxfest.org/content/schedule">full schedule on their website</a>.</p>
<p>I hope to see some of you there!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Color Manipulation with the Colour Checker LUT Module]]></title>
            <link>https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/</guid>
            <pubDate>Wed, 29 Jun 2016 13:44:08 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-lede.jpg" /><br/>
                <h1>Color Manipulation with the Colour Checker LUT Module</h1> 
                <h2>hanatos tinkering in darktable again...</h2>  
                <p>I was lucky to get to spend some time in London with the darktable crew.
Being the wonderful nerds they are, they were constantly working on <em>something</em> while we were there.
One of the things that Johannes was working on was the colour checker module for darktable.</p>
<p>Having recently acquired a Fuji camera, he was working on matching color styles from the built-in rendering on the camera.
Here he presents some of the results of what he was working on.</p>
<p><em>This was originally published on the <a href="http://www.darktable.org/2016/05/colour-manipulation-with-the-colour-checker-lut-module/">darktable blog</a>, and is being republished here with permission.</em> &mdash;Pat</p>
<!-- more -->
<hr>
<h2 id="motivation"><a href="#motivation" class="header-link-alt">motivation</a></h2>
<p>for raw photography there exist great presets for nice colour rendition:</p>
<ul>
<li>in-camera colour processing such as canon picture styles</li>
<li>fuji film-emulation-like presets (provia velvia astia classic-chrome)</li>
<li><a title="pat david's film emulation luts" href="http://gmic.eu/film_emulation/">pat david’s film emulation luts</a></li>
</ul>
<p>unfortunately these are eat-it-or-die canned styles or icc lut profiles. you
have to apply them and be happy or tweak them with other tools. but can we
extract meaning from these presets? can we have understandable and tweakable
styles like these?</p>
<p>in a first attempt, i used a non-linear optimiser to control the parameters of
the modules in darktable’s processing pipeline and try to match the output of
such styles. while this worked reasonably well for some of pat’s film luts, it
failed completely on canon’s picture styles. it was very hard to reproduce
generic colour-mapping styles in darktable without parametric blending.</p>
<p>that is, we require a generic colour to colour mapping function. it should be
as powerful as colour look up tables, but enable us to inspect it and
change small aspects of it (for instance only the way blue tones are treated).</p>
<h2 id="overview"><a href="#overview" class="header-link-alt">overview</a></h2>
<p>in git master, there is a new module to implement generic colour mappings: the
colour checker lut module (lut: look up table). the following will be a
description how it works internally, how you can use it, and what this is good
for.</p>
<p>in short, it is a colour lut that remains understandable and editable. that is,
it is not a black-box look up table, but you get to see what it actually does
and change the bits that you don’t like about it.</p>
<p>the main use cases are precise control over source colour to target colour
mapping, as well as matching in-camera styles that process raws to jpg in a
certain way to achieve a particular look. an example of this are the fuji film
emulation modes. to this end, we will fit a colour checker lut to achieve their
colour rendition, as well as a tone curve to achieve the tonal contrast.</p>
<figure>
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/target.jpg" alt="target" width="560" height="416" />
</figure>

<p>to create the colour lut, it is currently necessary to take a picture of an
<a title="wolf faust's it8 charts" href="http://targets.coloraid.de">it8 target</a> (well, technically we support any similar target, but
didn’t try them yet so i won’t really comment on it). this gives us a raw
picture with colour values for a few colour patches, as well as an in-camera jpg
reference (in the raw thumbnail..), and measured reference values (what we know
it <strong>should</strong> look like).</p>
<p>to map all the other colours (that fell in between the patches on the chart) to
meaningful output colours, too, we will need to interpolate this measured
mapping.</p>
<h2 id="theory"><a href="#theory" class="header-link-alt">theory</a></h2>
<p>we want to express a smooth mapping from input colours \(\mathbf{s}\) to target
colours \(\mathbf{t}\), defined by a couple of sample points (which will in our
case be the 288 patches of an it8 chart).</p>
<p>the following is a quick summary of what we implemented and much better
described in JP’s siggraph course <a href="#ref0">[0]</a>.</p>
<h3 id="radial-basis-functions"><a href="#radial-basis-functions" class="header-link-alt">radial basis functions</a></h3>
<p>radial basis functions are a means of interpolating between sample points
via</p>
<p>$$f(x) = \sum_i c_i\cdot\phi(| x - s_i|),$$</p>
<p>with some appropriate kernel \(\phi(r)\) (we’ll get to that later) and a set of
coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at
and in between the source colour positions \(s_i\). now to make
sure the function actually passes through the target colours, i.e. \(f(s_i) =
t_i\), we need to solve a linear system. because we want the function to take
on a simple form for simple problems, we also add a polynomial part to it. this
makes sure that black and white profiles turn out to be black and white and
don’t oscillate around zero saturation colours wildly. the system is</p>
<p>$$ \left(\begin{array}{cc}A &amp;P\\P^t &amp; 0\end{array}\right) \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) = \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$</p>
<p>where</p>
<p>$$ A=\left(\begin{array}{ccc}
\phi(r_{00})&amp; \phi(r_{10})&amp; \cdots \\
\phi(r_{01})&amp; \phi(r_{11})&amp; \cdots \\
\phi(r_{02})&amp; \phi(r_{12})&amp; \cdots \\
\cdots &amp; &amp; \cdots
\end{array}\right),$$</p>
<p>and \(r_{ij} = | s_i - s_j |\) is the distance (CIE 76 \(\Delta\)E,
\(\sqrt{(L_s - L_t)^2 + (a_s - a_t)^2 + (b_s - b_t)^2}\) for two colours \(s\) and \(t\)) between
source colours \(s_i\) and \(s_j\), in our case</p>
<p>$$P=\left(\begin{array}{cccc}
L_{s_0}&amp; a_{s_0}&amp; b_{s_0}&amp; 1\\
L_{s_1}&amp; a_{s_1}&amp; b_{s_1}&amp; 1\\
\cdots
\end{array}\right)$$</p>
<p>is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial
part. these are here so we can, for instance, easily reproduce the identity \(t = s\):
set \(\mathbf{c} = 0\) and put a 1 in the row of \(\mathbf{d}\) corresponding to
the respective output channel. we will need to solve this
system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).</p>
<p>many options will do the trick to solve this system. we use singular value
decomposition in our implementation. one advantage is that it is robust against
singular input matrices (for instance when the same source colour is accidentally
mapped to two different target colours).</p>
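<p>as a concrete illustration, the assembly and svd-based solve of the augmented system can be sketched in numpy as follows (a minimal sketch, not darktable’s actual code; <code>fit_rbf</code> is an illustrative name, and the kernel \(\phi\) is passed in as a parameter):</p>

```python
import numpy as np

def fit_rbf(phi, S, T):
    """fit f(x) = sum_i c_i phi(|x - s_i|) + (L, a, b, 1) . d  with f(s_i) = t_i.

    S: (n, 3) source Lab colours, T: (n, 3) target Lab colours,
    phi: radial kernel applied to an array of distances.
    returns c: (n, 3) and d: (4, 3), one column per output channel."""
    n = S.shape[0]
    # pairwise CIE 76 distances between the source colours
    r = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    A = phi(r)                              # (n, n)
    P = np.hstack([S, np.ones((n, 1))])     # (n, 4) polynomial part
    # augmented symmetric system [[A, P], [P^T, 0]] as in the text
    M = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([T, np.zeros((4, 3))])
    # pseudo-inverse via svd: robust against singular input matrices
    coef = np.linalg.pinv(M) @ rhs
    return coef[:n], coef[n:]
```

<p>evaluating \(f\) at a new colour then amounts to computing \(\phi\) of its distances to all \(s_i\), plus the polynomial term.</p>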
<h3 id="thin-plate-splines"><a href="#thin-plate-splines" class="header-link-alt">thin plate splines</a></h3>
<p>we didn’t yet define the radial basis function kernel. it turns out so-called
thin plate splines have very good behaviour in terms of low oscillation/low curvature
of the resulting function. the associated kernel is</p>
<p>$$\phi(r) = r^2 \log r.$$</p>
<p>note that there is a similar functionality in gimp as a gegl colour mapping
operation (which i believe is using a shepard-interpolation-like scheme).</p>
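<p>when implementing the kernel, the limit \(\phi(0) = 0\) has to be taken explicitly, since \(\log 0\) is undefined. a minimal numpy version (illustrative, not darktable’s actual code):</p>

```python
import numpy as np

def tps_kernel(r):
    # phi(r) = r^2 log r, with phi(0) = 0 (the limit as r -> 0)
    r = np.asarray(r, dtype=float)
    # clamp before taking the log: np.where evaluates both branches
    return np.where(r > 0.0,
                    r * r * np.log(np.maximum(r, np.finfo(float).tiny)),
                    0.0)
```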
<h3 id="creating-a-sparse-solution"><a href="#creating-a-sparse-solution" class="header-link-alt">creating a sparse solution</a></h3>
<p>we will feed this system with the 288 patches of an it8 colour chart. that means,
with the four added polynomial coefficients, we have a total of 292
coefficients to manage here. apart from performance issues when
executing the interpolation, we didn’t want that to show up in the gui like
this, so we looked for a way to reduce this number without introducing large error.</p>
<p>indeed this is possible, and literature provides a nice algorithm to do so, which
is called <strong>orthogonal matching pursuit</strong> <a href="#ref1">[1]</a>.</p>
<p>this algorithm selects the most important handful of coefficients \(\in
\mathbf{c},\mathbf{d}\) to keep the overall error low. in practice we run it up
to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make
best use of gui real estate.</p>
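<p>the greedy idea behind orthogonal matching pursuit can be sketched as follows (a toy version under simplifying assumptions; <code>omp_select</code>, <code>M</code> and <code>rhs</code> are illustrative names, not the actual implementation):</p>

```python
import numpy as np

def omp_select(M, rhs, k):
    """greedily pick k columns of M that best explain rhs,
    re-fitting an orthogonal least-squares solution after each pick."""
    residual = rhs.copy()
    selected = []
    for _ in range(k):
        # correlation of each column with the current residual
        corr = np.abs(M.T @ residual).sum(axis=1)
        corr[selected] = -np.inf            # never pick a column twice
        selected.append(int(np.argmax(corr)))
        # orthogonal projection: least-squares refit on the chosen columns
        sub, *_ = np.linalg.lstsq(M[:, selected], rhs, rcond=None)
        residual = rhs - M[:, selected] @ sub
    coef = np.zeros((M.shape[1], rhs.shape[1]))
    coef[selected] = sub
    return selected, coef
```

<p>in the module, the columns correspond to patch coefficients and \(k\) is the predefined number of patches (24 or 49).</p>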
<h2 id="the-colour-checker-lut-module"><a href="#the-colour-checker-lut-module" class="header-link-alt">the colour checker lut module</a></h2>
<figure>
<img  src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/clut-iop.png" alt="clut-iop" width="522" height="592"  />
</figure>


<h3 id="gui-elements"><a href="#gui-elements" class="header-link-alt">gui elements</a></h3>
<p>when you select the module in darkroom mode, it should look something like the
image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid
instead). by default, it will load the 24 patches of a colour checker classic
and initialise the mapping to identity (no change to the image).</p>
<ul>
<li>the grid shows a list of coloured patches. the colours of the patches are
the source points \(\mathbf{s}\).</li>
<li>the target colour \(t_i\) of the selected patch \(i\) is shown as an
offset controlled by sliders in the ui under the grid of patches.</li>
<li>an outline is drawn around patches that have been altered, i.e. the source
and target colours differ.</li>
<li>the selected patch is marked with a white square, and its number is shown
in the combo box below.</li>
</ul>
<h3 id="interaction"><a href="#interaction" class="header-link-alt">interaction</a></h3>
<p>to interact with the colour mapping, you can change both source and target
colours. the main use case, however, is to change the target colours, starting
from an appropriate <strong>palette</strong> (see the presets menu, or download a style
somewhere).</p>
<ul>
<li>you can change lightness (L), green-red (a), blue-yellow (b), or saturation
(C) of the target colour via sliders.</li>
<li>select a patch by left-clicking on it, using the combo box, or using the
colour picker.</li>
<li>to change source colour, select a new colour from your image by using the
colour picker, and shift-left-click on the patch you want to replace.</li>
<li>to reset a patch, double-click it.</li>
<li>right-click a patch to delete it.</li>
<li>shift-left-click on empty space to add a new patch (with the currently
picked colour as source colour).</li>
</ul>
<hr>
<h2 id="example-use-cases"><a href="#example-use-cases" class="header-link-alt">example use cases</a></h2>
<h3 id="example-1-dodging-and-burning-with-the-skin-tones-preset"><a href="#example-1-dodging-and-burning-with-the-skin-tones-preset" class="header-link-alt">example 1: dodging and burning with the skin tones preset</a></h3>
<p>to process the following image i took of pat in the overground, i started with
the <strong>skin tones</strong> preset in the colour checker module (right click on nothing in
the gui or click on the icon with the three horizontal lines in the header and
select the preset).</p>
<p>then, i used the colour picker (little icon to the right of the patch# combo
box) to select two skin tones: very bright highlights and dark shadow tones.
i dragged the former down a bit and brightened the latter up a bit, both via
the lightness (L) slider. this is the result:</p>
<figure>
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/pat_crop_02.png" alt="original" width='250' height='375' style='width:250px; display: inline; margin-right: 0.5rem;' />
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/pat_crop_03_flat.png" alt="dialed down contrast in skin tones"  width='250' height='375' style='width:250px; display: inline;' />
</figure>



<h3 id="example-2-skin-tones-and-eyes"><a href="#example-2-skin-tones-and-eyes" class="header-link-alt">example 2: skin tones and eyes</a></h3>
<p>in this image, i started with the fuji classic chrome-like style (see below for
a download link), to achieve the subdued look in the skin tones. then, i
picked the iris colour and saturated this tone via the saturation slider.</p>
<p>as a side note, the flash didn’t fire in this image (iso 800), so i needed to
stop it up by 2.5ev; the rest is all natural lighting.</p>
<figure>
<a href='mairi_crop_01.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_01.jpg" alt="original" width="300" height="449" style='width: 300px;' /></a>
</figure>


<figure>
<a href='mairi_crop_02.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_02.jpg" alt="+2.5ev classic chrome" width="300" height="449" style='width:300px; display:inline;' /></a>
<a href='mairi_crop_03.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_03.jpg" alt="saturated eyes" width="300" height="449" style='width:300px; display:inline;'/></a>
</figure>



<h2 id="use-darktable-chart-to-create-a-style"><a href="#use-darktable-chart-to-create-a-style" class="header-link-alt">use <code>darktable-chart</code> to create a style</a></h2>
<p>as a starting point, i matched a colour checker lut interpolation function to
the in-camera processing of fuji cameras. these are named after old film stocks and
generally do a good job at creating pleasant colours. this was done using the
<code>darktable-chart</code> utility, by matching raw colours to the jpg output (both in Lab space in the darktable pipeline).</p>
<p>here is the <a href="https://jo.dreggn.org/blog/darktable-fuji-styles.tar.xz" title="fuji-like styles">link to the fuji styles</a>, and <a href="https://www.darktable.org/usermanual/ch02s03s08.html.php" title="darktable user manual on styles">how to use them</a>.
i should be doing pat’s film emulation presets with this, too, and maybe
styles from other cameras (canon picture styles?). <code>darktable-chart</code> will
output a dtstyle file, with the mapping split into tone curve and colour
checker module. this allows us to tweak the contrast (tone curve) in isolation
from the colours (lut module).</p>
<p>these styles were created with the X100T model, and reportedly they work so-so
with other camera models. the idea was to create a Lab-space mapping that
holds for all cameras, but apparently there are sufficient
differences between the output of different cameras even after applying their colour
matrices (after all, these matrices are only an approximation of the real camera
to XYZ mapping).</p>
<p>so if you’re really after maximum precision, you may have to create the styles
yourself for your camera model. here’s how:</p>
<h3 id="step-by-step-tutorial-to-match-the-in-camera-jpg-engine"><a href="#step-by-step-tutorial-to-match-the-in-camera-jpg-engine" class="header-link-alt">step-by-step tutorial to match the in-camera jpg engine</a></h3>
<p>note that this is essentially similar to <a href="https://github.com/pmjdebruijn/colormatch">pascal’s colormatch script</a>, but will result in an editable style for darktable instead of a fixed icc lut.</p>
<ul>
<li><p>you need an it8 chart (sorry; maybe we could lift that requirement, similar to what we do for <a title="fit basecurves for darktable" href="http://www.darktable.org/2013/10/about-basecurves/">basecurve fitting</a>)</p>
</li>
<li><p>shoot the chart with your camera:</p>
<ul>
<li>shoot raw + jpg</li>
<li>avoid glare, shadows, and extreme angles; potentially avoid the rims of your image altogether</li>
<li>shoot a lot of exposures, try to match L=92 for G00 (or look that up in
  your it8 description)</li>
</ul>
</li>
<li><p>develop the images in darktable:</p>
<ul>
<li>lens and vignetting correction needed on both or on neither of raw + jpg</li>
<li>(i calibrated for vignetting, see <a title="calibrate vignetting for lensfun" href="http://wilson.bronger.org/lens_calibration_tutorial/#id3">lensfun</a>)</li>
<li>output colour space to Lab (set the secret option in <code>darktablerc</code>:
<code>allow_lab_output=true</code>)</li>
<li>standard input matrix and camera white balance for the raw, srgb for jpg.</li>
<li>no gamut clipping, no basecurve, no anything else.</li>
<li>maybe do <a title="perspective correction in darktable" href="http://www.darktable.org/2016/03/a-new-module-for-automatic-perspective-correction/">perspective correction</a> and crop the chart</li>
<li>export as float pfm</li>
</ul>
</li>
<li><p><code>darktable-chart</code></p>
<ul>
<li>load the pfm for the raw image and the jpg target in the second tab</li>
<li>drag the corners to make the mask match the patches in the image</li>
<li>maybe adjust the security margin using the slider in the top right, to
avoid stray colours being blurred into the patch readout</li>
<li>you need to select the gray ramp in the combo box (not auto-detected)</li>
<li>export csv</li>
</ul>
</li>
</ul>
<figure>
<a href='darktable-lut-tool-crop-01.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-01.jpg" alt="darktable-lut-tool-crop-01" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-02.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-02.jpg" alt="darktable-lut-tool-crop-02" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-03.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-03.jpg" alt="darktable-lut-tool-crop-03" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-04.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-04.jpg" alt="darktable-lut-tool-crop-04" width="640" height="655"   /></a>
</figure>

<p>edit the csv in a text editor and manually add two fixed fake patches <code>HDR00</code>
and <code>HDR01</code>:</p>
<pre><code>name;fuji classic chrome-like
description;fuji classic chrome-like colorchecker
num_gray;24
patch;L_source;a_source;b_source;L_reference;a_reference;b_reference
A01;22.22;13.18;0.61;21.65;17.48;3.62
A02;23.00;24.16;4.18;26.92;32.39;11.96
...
HDR00;100;0;0;100;0;0
HDR01;200;0;0;200;0;0
...
</code></pre><p>this is to make sure we can process high-dynamic-range images without destroying
the bright spots with the lut. it is needed because the it8 does not deliver
any information outside the reflective gamut or for very bright input. to handle
wide-gamut input, it may be necessary to enable gamut clipping in the input colour
profile module when applying the resulting style to an image with highly
saturated colours. <code>darktable-chart</code> does that automatically in the style it
writes.</p>
<ul>
<li>fix up style description in csv if you want</li>
<li>run <code>darktable-chart --csv</code></li>
<li>outputs a <code>.dtstyle</code> with everything properly switched off, and two modules on: colour checker + tonecurve in Lab</li>
</ul>
<h3 id="fitting-error"><a href="#fitting-error" class="header-link-alt">fitting error</a></h3>
<p>when processing the list of colour pairs into a set of coefficients for the
thin plate spline, the program will output the approximation error, indicated
by the average and maximum CIE 76 \(\Delta\)E over the input patches (the it8 in the
examples here). of course we don’t know anything about colours which aren’t
represented in the patches. the hope is that the sampling is dense enough
for all intents and purposes (and nothing is holding us back from using a
target with even more patches).</p>
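<p>the reported numbers are plain CIE 76 distances, i.e. euclidean distances in Lab, reduced to their mean and maximum over the patches. a minimal sketch (illustrative names, not the actual tool code):</p>

```python
import numpy as np

def delta_e_76(lab1, lab2):
    # CIE 76 delta E: euclidean distance in Lab space
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float), axis=-1)

def fit_report(fitted, target):
    # average and maximum error over all patches
    de = delta_e_76(fitted, target)
    return de.mean(), de.max()
```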
<p>for the fuji styles, these errors are typically in the range of mean \(\Delta
E\approx 2\) and max \(\Delta E \approx 10\) for 24 patches, and a bit less for 49.
unfortunately the error does not decrease very fast with the number of patches
(though it will of course drop to zero when using all the patches of the input chart).</p>
<pre><code>provia 24:rank 28/24 avg DE 2.42189 max DE 7.57084
provia 49:rank 53/49 avg DE 1.44376 max DE 5.39751

astia-24:rank 27/24 avg DE 2.12006 max DE 10.0213
astia-49:rank 52/49 avg DE 1.34278 max DE 7.05165

velvia-24:rank 27/24 avg DE 2.87005 max DE 16.7967
velvia-49:rank 53/49 avg DE 1.62934 max DE 6.84697

classic chrome-24:rank 28/24 avg DE 1.99688 max DE 8.76036
classic chrome-49:rank 53/49 avg DE 1.13703 max DE 6.3298

mono-24:rank 27/24 avg DE 0.547846 max DE 3.42563
mono-49:rank 52/49 avg DE 0.339011 max DE 2.08548

</code></pre><h3 id="future-work"><a href="#future-work" class="header-link-alt">future work</a></h3>
<p>it is possible to match the reference values of the it8 instead of a reference
jpg output, to calibrate the camera more precisely than the colour matrix
would.</p>
<ul>
<li>there is a button for this in the <code>darktable-chart</code> tool</li>
<li>needs careful shooting, to match brightness of reference value closely.</li>
<li>at this point it’s not clear to me how white balance should best be handled here.</li>
<li>need reference reflectances of the it8 (wolf faust ships some for a few illuminants).</li>
</ul>
<p>another next step we would like to take with this is to match real film footage
(portra etc.). both reference and film matching will require some global exposure
calibration, though.</p>
<h2 id="references"><a href="#references" class="header-link-alt">references</a></h2>
<ul>
<li><a name="ref0"></a>[0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, “Scattered data interpolation for computer graphics” in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. <a href="http://scribblethink.org/Courses/ScatteredInterpolation/scatteredinterpcoursenotes.pdf">pdf</a></li>
<li><a name="ref1"></a>[1] J. A. Tropp and A. C. Gilbert, “Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit”, in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.</li>
</ul>
<h2 id="links"><a href="#links" class="header-link-alt">links</a></h2>
<ul>
<li><a title="pat david's film emulation luts" href="http://gmic.eu/film_emulation/">pat david’s film emulation luts</a></li>
<li><a title="fuji-like styles" href="darktable-fuji-styles.tar.xz">download fuji styles</a></li>
<li><a title="darktable user manual on styles" href="https://www.darktable.org/usermanual/ch02s03s08.html.php">darktable’s user manual on styles</a></li>
<li><a title="wolf faust's it8 charts" href="http://targets.coloraid.de">it8 target</a></li>
<li><a title="colormatch" href="https://github.com/pmjdebruijn/colormatch">pascal’s colormatch</a></li>
<li><a title="calibrate vignetting for lensfun" href="http://wilson.bronger.org/lens_calibration_tutorial/#id3">lensfun calibration</a></li>
<li><a title="perspective correction in darktable" href="http://www.darktable.org/2016/03/a-new-module-for-automatic-perspective-correction/">perspective correction in darktable</a></li>
<li><a title="fit basecurves for darktable" href="http://www.darktable.org/2013/10/about-basecurves/">fit basecurves for darktable</a></li>
</ul>
<script type='text/javascript' src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default&ver=1.2.1'></script>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Sharing is Caring]]></title>
            <link>https://pixls.us/blog/2016/06/sharing-is-caring/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/sharing-is-caring/</guid>
            <pubDate>Wed, 22 Jun 2016 15:10:14 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/06/sharing-is-caring/SHARING.jpg" /><br/>
                <h1>Sharing is Caring</h1> 
                <h2>Letting it all hang out</h2>  
                <p>It was always my intention to make the entire PIXLS.US website available under a permissive license.  The content is already all licensed <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons, By Attribution, Share-Alike</a> (unless otherwise noted).  I just hadn’t gotten around to actually posting the site source.</p>
<p>Until now (<em>ish</em>).  I say “<em>ish</em>” because I apparently released the code back in April and am just now getting around to talking about it.</p>
<p>Also, we finally have a category specifically for all those <a href="http://www.darktable.org">darktable</a> weenies on <a href="https://discuss.pixls.us">discuss</a>!</p>
<!-- more -->
<h2 id="don-t-laugh"><a href="#don-t-laugh" class="header-link-alt">Don’t Laugh</a></h2>
<p>I finally got around to pushing my code for this site up to <a href="https://github.com/pixlsus/">Github</a> on April 27 (I’m basing this off git logs because my memory is likely suspect).  It took a while, but better late than never?  I think part of the delay was a bit of minor embarrassment on my part for being so sloppy with the site code.  In fact, I’m still embarrassed - so don’t laugh at me too hard (and if you do, at least don’t point while laughing too).</p>
<figure>
<img src="https://pixls.us/blog/2016/06/sharing-is-caring/carrie-laugh-at-u.jpg" alt='Carrie White'>
<figcaption>
Brian De Palma’s <a href="http://www.imdb.com/title/tt0074285/">interpretation of my fears…</a></figcaption>
</figure>

<p>So really this post is just a reminder to anyone that was interested that this site is available on Github:  </p>
<p><a href="https://github.com/pixlsus/">https://github.com/pixlsus/</a></p>
<p>In fact, we’ve got a couple of other repositories under the <a href="https://github.com/pixlsus">Github Organization PIXLS.US</a> including this website, presentation assets, lighting diagram SVGs, and more. If you’ve got a Github account or want to join in with hacking at things, by all means send me a note and we’ll get you added to the organization asap.</p>
<p><em>Note</em>: you don’t need to do anything special if you just want to grab the site code.  You can do this quickly and easily with:</p>
<p><code>git clone https://github.com/pixlsus/website.git</code></p>
<p>You actually don’t even need a Github account to clone the repo, but you will need one if you want to fork it on Github itself, or to send pull-requests.  You can also feel free to simply email/post patches to us as well:</p>
<p><code>git format-patch testing --stdout &gt; your_awesome_work.patch</code></p>
<p>Being on Github means that we also now have <a href="https://github.com/pixlsus/website/issues">an issue tracker</a> to report any bugs or enhancements you’d like to see for the site.</p>
<p>So no more excuses - if you’d like to lend a hand just dive right in!  We’re all here to help! :)</p>
<h3 id="speaking-of-helping"><a href="#speaking-of-helping" class="header-link-alt">Speaking of Helping</a></h3>
<p>Speaking of which, I wanted to give a special shout-out to community member <a href="https://discuss.pixls.us/users/paperdigits/activity">@paperdigits</a> (<a href="http://silentumbrella.com/">Mica</a>), who has been active in sharing presentation materials in the <a href="https://github.com/pixlsus/Presentations">Presentations repo</a> and has been actively hacking at the website. Mica’s recommendations and pull requests are helping to make the site code cleaner and better for everyone, and I really appreciate all the help (even if I <em>am</em> scared of change).</p>
<p><em>Thank you, Mica!</em>  You <strong>rock</strong>!</p>
<h2 id="those-stinky-darktable-people"><a href="#those-stinky-darktable-people" class="header-link-alt">Those Stinky darktable People</a></h2>
<p>Yes, after member Claes <a href="https://discuss.pixls.us/t/why-no-darktable-section/1575">asked the question on discuss</a> about why we didn’t have a <a href="http://www.darktable.org">darktable</a> category on the forums, I relented and <a href="https://discuss.pixls.us/c/software/darktable">created one</a>.  Normally I want to make sure that any category is going to have active people to maintain and monitor the topics there.  I feel like having an empty forum can sometimes be detrimental to the perception of a project/community.</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/2X/b/b2076a2e18c4126bf25c6a852424ce3a3333b480.png' alt='darktable logo'>
</figure>

<p>In this case, any topics in the <a href="https://discuss.pixls.us/c/software/darktable">darktable category</a> will <em>also</em> show up in the more general <a href="https://discuss.pixls.us/c/software/">Software</a> category as well.  This way the visibility and interactions are still there, but with the added benefit that we can now choose to see <em>only</em> darktable posts, ignore them, or let all those <a href="https://discuss.pixls.us/t/why-no-darktable-section/1575/4">stinky users</a> do what they want in there.</p>
<p>Besides, now we can say that we’ve sufficiently appeased <a href="https://discuss.pixls.us/users/morgan_hardwood/activity">Morgan Hardwood</a>‘s organizational needs…</p>
<p>So, come on by and say hello in the brand new <a href="https://discuss.pixls.us/c/software/darktable"><strong>darktable category</strong></a>!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Sharing Galore]]></title>
            <link>https://pixls.us/blog/2016/06/sharing-galore/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/sharing-galore/</guid>
            <pubDate>Tue, 21 Jun 2016 18:30:29 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/06/sharing-galore/2016-06-16_oak.jpg" /><br/>
                <h1>Sharing Galore</h1> 
                <h2>or, Why This Community is Awesome</h2>  
<p>Community member and <a href="http://www.rawtherapee.com">RawTherapee</a> hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the <a href="https://en.wikipedia.org/wiki/S%C3%B6der%C3%A5sen_National_Park">Söderåsen National Park</a> (Sweden!). <a href="https://discuss.pixls.us/users/ofnuts/activity">Ofnuts</a> is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject.  After some bugging from me, <a href="http://opensource.graphics/">David Tschumperlé</a> managed to find a neat solution to generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.</p>
<p>So much neat content being shared for everyone to play with and learn from!  Come see what everyone is doing!</p>
<!-- more -->
<h2 id="old-oak-a-tutorial"><a href="#old-oak-a-tutorial" class="header-link-alt">Old Oak - A Tutorial</a></h2>
<p>Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap.  Just as Morgan Hardwood <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627">did on the forums</a> a few days ago!</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/06/sharing-galore/2016-06-16_oak.jpg" alt='Old Oak by Morgan Hardwood'>
<figcaption>
<em>Old Oak by Morgan Hardwood</em> <a href='https://creativecommons.org/licenses/by-sa/4.0/' class='cc'>cbsa</a>
</figcaption>
</figure>

<p>He introduces the image and post:</p>
<blockquote>
<p>There is an old oak by the southern entrance to the <a href="https://en.wikipedia.org/wiki/S%C3%B6der%C3%A5sen_National_Park">Söderåsen National Park</a>. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley <a href="http://lotr.wikia.com/wiki/Rhosgobel_Rabbits">rabbits</a> sure love it.</p>
</blockquote>
<p>The image itself is a treat.  I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image.  The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.</p>
<p>Of course, Morgan doesn’t stop there.  You should absolutely <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627">go read his entire post</a>.  He not only walks through his entire thought process and workflow starting at his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the <a href="http://50.87.144.65/~rt/w/index.php?title=Flat_Field">flat-field</a>, a shot of his color target + <a href="http://www.ludd.ltu.se/~torger/dcamprof.html">DCP</a>, and finally his RawTherapee .PP3 file with all of his settings!  Whew!</p>
<p>If you’re interested I urge you to go check out (and participate!) in his topic on the forums: <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627"><strong>Old Oak - A Tutorial</strong></a>.</p>
<h2 id="i-will-burn-this-place-to-the-ground"><a href="#i-will-burn-this-place-to-the-ground" class="header-link-alt">I Will Burn This Place to the Ground</a></h2>
<p>Speaking of sharing material, <a href="https://discuss.pixls.us/users/ofnuts/activity">Ofnuts</a> has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity.  Why do I say this?</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/optimized/2X/4/436f016f25eb0a0f857c2cb182bb1ae55ca623ca_1_690x620.jpg' alt='Kill It With Fire!'>
<figcaption>
Kill it with fire!
</figcaption>
</figure>

<p>Because he started a topic appropriately entitled: <a href="https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644"><em>“NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”</em></a>, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:</p>
<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/3'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/6/6001e6f45f51c2933f7bdbdcc67e39a740bc94d4_1_690x488.jpg' alt='CarVac Version'></a>
<figcaption>
A version by CarVac
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/4'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/d/d1aa2d2f753a9f318e1ff417f97d2e94f2ba7fc4_1_690x492.jpg' alt='MLC Morgin Version'></a>
<figcaption>
By MLC/Morgin
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/9'><img src='https://discuss.pixls.us/uploads/default/original/2X/8/80a4c80facb6d7c677d8bf9a721eb93282c6c1c0.jpg' alt='By Jonas Wagner'></a>
<figcaption>
By Jonas Wagner
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/18'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/3/3ae66bbae7d97c36c153437782225feae10b1411_1_690x565.jpg' alt='iarga'></a>
<figcaption>
By iarga
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/19'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/6/6d27b1ec6e8cb5a8acc64d41039ef3e90a5d2f7b_1_690x460.jpg' alt='by PkmX'></a>
<figcaption>
By PkmX
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/22'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/9/93942c9bb786532c39a4bd47e0832dbb72c5fbbd_1_690x388.jpg' alt='by Kees Guequierre'></a>
<figcaption>
By Kees Guequierre
</figcaption>
</figure>

<p>Of course, I had a chance to try processing it as well.  Here’s what I ended up with:</p>
<figure>
<img src="https://pixls.us/blog/2016/06/sharing-galore/640px-Bonfire_Flames.JPG" alt='Flames'></figure>

<p>Ahhhh, just writing this post is a giant bag of <strong>NOPE</strong><sup>*</sup>. If you’d like to join in on the fun(?) and share your processing as well - go <a href="https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644">check out the topic</a>! </p>
<p>Now let’s move on to something more cute and fuzzy, like an ALOT…</p>
<p><small><sup>*</sup> I kid, I’m not really an arachnophobe (<em>within reason</em>), but I can totally see why someone would be.</small></p>
<h2 id="median-blending-alot-of-images-with-g-mic"><a href="#median-blending-alot-of-images-with-g-mic" class="header-link-alt">Median Blending ALOT of Images with G’MIC</a></h2>
<figure>
<a href='http://hyperboleandahalf.blogspot.com/2010/04/alot-is-better-than-you-at-everything.html'><img src="https://pixls.us/blog/2016/06/sharing-galore/ALOT.png" alt='Hyperbole and a Half ALOT'></a>
<figcaption>
The ALOT. Borrowed from <a href='http://hyperboleandahalf.blogspot.com/2010/04/alot-is-better-than-you-at-everything.html'>Allie Brosh</a> and here because I really wanted an excuse to include it.
</figcaption>
</figure>

<p>I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post).  One of those friends is <a href="http://gmic.eu">G’MIC</a> creator and community member <a href="http://opensource.graphics">David Tschumperlé</a>.</p>
<p>A few years back he helped me with some artwork I was generating with <a href="http://www.imagemagick.org">imagemagick</a> at the time.  I was averaging images together to see what an amalgamation would look like.  For instance, here is what all of the <a href="http://www.si.com/sports-illustrated/photo/2016/02/13/every-cover-si-swimsuit-edition">Sports Illustrated swimsuit edition</a> <small>(NSFW)</small> covers (through 2000) look like, all at once:</p>
<p><a href="https://www.flickr.com/photos/patdavid/9018489869/in/album-72157630890087884/" title="Sport Illustrated Swimsuit Covers Through 2000"><img src="https://c6.staticflickr.com/4/3767/9018489869_77875a6cc1_c.jpg" width="605" height="800" alt="Sport Illustrated Swimsuit Covers Through 2000"></a></p>
<p>A natural progression of this idea was to consider a median blend instead of a mean.  The problem is that a mean is very easy and fast to calculate as you advance through the image stack, but the median is not.  This became relevant when I began to look at these for videos (in particular music videos), where the image stack easily reaches 5,000+ frames per video (that is ALOT of frames!).</p>
<p>It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted.  This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.</p>
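<p>To make the contrast concrete, here’s a minimal Python sketch (illustrative only — not the original imagemagick or G’MIC pipeline). Each number stands in for one pixel’s value across the frame stack: the running mean folds values in one at a time with constant memory, while an exact median needs the whole stack held and sorted.</p>

```python
# Illustrative sketch: why a running mean is cheap but an exact median is not.

def running_mean(values):
    """Fold values in one at a time: O(1) extra memory."""
    total, n = 0.0, 0
    for v in values:
        total += v
        n += 1
    return total / n

def exact_median(values):
    """The whole stack must be held and sorted: O(n) memory."""
    stack = sorted(values)
    mid = len(stack) // 2
    if len(stack) % 2:
        return stack[mid]
    return (stack[mid - 1] + stack[mid]) / 2

pixel_over_frames = [3, 1, 4, 1, 5, 9, 2]
print(running_mean(pixel_over_frames))  # 3.5714...
print(exact_median(pixel_over_frames))  # 3
```

<p>With 5,000+ HD frames, that O(n) per-pixel stack is what makes the naive median prohibitive.</p>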
<p>So it’s awesome that, yet again, David has found a solution to the problem!  He explains it in greater detail on his topic:</p>
<p><a href="https://discuss.pixls.us/t/a-guide-about-computing-the-temporal-average-median-of-video-frames-with-gmic/1566">A guide about computing the temporal average/median of video frames with G’MIC</a></p>
<p>He basically chops up the image frame into regions, then computes the pixel-median value for those regions.  Here’s an example of his result:</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/2X/e/e5116c80eecb0554b5616f4b73443c40618d198c.jpg' alt='P!nk Try Mean/Median'>
<figcaption>
Mean/Median samples from P!nk - Try music video.
</figcaption>
</figure>
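<p>The region-wise approach described above can be sketched in a few lines of Python (toy 1-D “frames” and a hypothetical <code>read_tile</code> helper — David’s actual implementation is a G’MIC script): only one tile’s worth of the stack is ever in memory at once, yet every pixel still gets an exact median.</p>

```python
# Sketch of tile-wise median blending: process one region of the frame
# at a time across the whole stack, instead of loading every full frame.
from statistics import median

def read_tile(frame, x0, x1):
    """Stand-in for decoding only columns x0..x1 of one stored frame."""
    return frame[x0:x1]

def tiled_median(frames, tile_width):
    """Per-pixel median over a stack, computed one tile at a time."""
    width = len(frames[0])
    out = []
    for x0 in range(0, width, tile_width):
        x1 = min(x0 + tile_width, width)
        tiles = [read_tile(f, x0, x1) for f in frames]  # stack of tiles
        for i in range(x1 - x0):                        # each pixel in tile
            out.append(median(t[i] for t in tiles))
    return out

# Three tiny 1-D "frames", four pixels wide:
stack = [[10, 0, 5, 7],
         [20, 0, 6, 7],
         [30, 9, 4, 1]]
print(tiled_median(stack, tile_width=2))  # [20, 0, 5, 7]
```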

<p>Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Display Color Profiling on Linux]]></title>
            <link>https://pixls.us/articles/display-color-profiling-on-linux/</link>
            <guid isPermaLink="true">https://pixls.us/articles/display-color-profiling-on-linux/</guid>
            <pubDate>Thu, 09 Jun 2016 22:50:08 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/display-color-profiling-on-linux/pixels.jpg" /><br/>
                <h1>Display Color Profiling on Linux</h1> 
                <h2>A work in progress</h2>  
                <p><small style='color:#aaa;'><em>This article by <a href="https://encrypted.pcode.nl/">Pascal de Bruijn</a> was originally <a href="https://encrypted.pcode.nl/blog/2013/11/24/display-color-profiling-on-linux/">published on his site</a> and is reproduced here with permission. &nbsp;&mdash;Pat</em></small></p>
<hr>
<p><strong>Attention:</strong> This article is a work in progress, based on my own practical experience up until the time of writing, so you may want to check back periodically to see if it has been updated.</p>
<p>This article outlines how you can calibrate and profile your display on Linux, assuming you have the right <a href="http://argyllcms.com/doc/instruments.html">equipment</a> (either a colorimeter, such as the i1 Display Pro, or a spectrophotometer, such as the ColorMunki Photo). For a general overview of what color management is and details about some of its parlance, you may want to read <a href="https://encrypted.pcode.nl/blog/2012/01/29/color-management-on-linux/">this</a> before continuing.</p>
<!-- more -->
<h2 id="a-fresh-start">A Fresh Start<a href="#a-fresh-start" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First you may want to check if any kind of color management is already active on your machine. If you see the following, then you’re fine:</p>
<pre><code>$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE: no such atom on any window.
</code></pre><p>However if you see something like this, then there is already another color management system active:</p>
<pre><code>$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE(CARDINAL) = 0, 0, 72, 212, 108, 99, 109, 115, 2, 32, 0, 0, 109, 110
</code></pre><p>If this is the case, you need to figure out what is active and why… For GNOME/Unity based desktops this is fairly typical, since they extract a simple profile from the display hardware itself via <a href="https://encrypted.pcode.nl/blog/2013/04/14/display-profiles-generated-from-edid/">EDID</a> and use that by default. I’m guessing KDE users may want to look into <a href="http://dantti.wordpress.com/2013/05/01/colord-kde-0-3-0-released/">this</a> before proceeding. I can’t give much advice about other desktop environments, as I’m not particularly familiar with them. That said, I tested most of the examples in this article with XFCE 4.10 on <a href="http://xubuntu.org/">Xubuntu</a> 14.04 “Trusty”.</p>
<h2 id="display-types">Display Types<a href="#display-types" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For the purposes of our discussion, modern flat panel displays consist of two major components: the backlight and the panel itself. There are various types of backlights: White <a href="https://en.wikipedia.org/wiki/Light-emitting_diode">LED</a> (most common nowadays), <a href="https://en.wikipedia.org/wiki/Cold_cathode">CCFL</a> (most common a few years ago), RGB LED and Wide Gamut CCFL, the latter two of which you’d typically find on higher end displays. The backlight primarily defines a display’s <a href="https://en.wikipedia.org/wiki/Gamut">gamut</a> and maximum brightness. The panel on the other hand primarily defines the maximum contrast and acceptable viewing angles. The most common types are variants of <a href="https://en.wikipedia.org/wiki/Liquid-crystal_display#In-plane_switching_.28IPS.29">IPS</a> (usually good contrast and viewing angles) and <a href="https://en.wikipedia.org/wiki/Liquid-crystal_display#Twisted_nematic_.28TN.29">TN</a> (typically mediocre contrast and poor viewing angles).</p>
<h2 id="display-setup">Display Setup<a href="#display-setup" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are two main cases: laptop displays, which usually allow for little configuration, and regular desktop displays. For regular displays there are a few steps to prepare the display to be profiled. First, reset the display to its factory defaults, and leave the contrast at its default value. If your display has a feature called dynamic contrast you need to disable it; this is <em>critical</em>, and if you’re unlucky enough to have a display for which it cannot be disabled, there is no use in proceeding any further. Then set the color temperature setting to custom and set the R/G/B values to equal values (often 100/100/100 or 255/255/255). As for the brightness, set it to a level which is comfortable for prolonged viewing; typically this means reducing the brightness from its default setting, often to somewhere around 25&ndash;50 on a 0&ndash;100 scale. Laptops are a different story: you’ll often be fighting different lighting conditions, so you may want to consider profiling your laptop at its full brightness. We’ll get back to the brightness setting later on.</p>
<p>Before continuing any further, let the display settle for at least half an hour (as its color rendition may change while the backlight is warming up) and make sure the display doesn’t go into power saving mode during this time.</p>
<p>Another point worth considering is cleaning the display before starting the calibration and profiling process. Do keep in mind that displays often have relatively fragile coatings, which may be damaged by traditional cleaning products or easily scratched by regular cleaning cloths. There are specialist products <a href="https://www.klearscreen.com/iKlear.aspx">available</a> for safely cleaning computer displays.</p>
<p>You may also want to consider dimming the ambient lighting while running the calibration and profiling procedure to prevent (potential) glare from being an issue.</p>
<h2 id="software">Software<a href="#software" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you’re in a GNOME or Unity environment it’s highly recommended to use <a href="https://projects.gnome.org/gnome-color-manager/">GNOME Color Manager</a> (with <a href="http://www.freedesktop.org/software/colord/">colord</a> and <a href="http://argyllcms.com/">argyll</a>). If you have recent versions (3.8.3, 1.0.5 and 1.6.2 respectively), you can profile and set up your display completely graphically via the Color applet in System Settings. It’s fully wizard driven and couldn’t be much easier in most cases. This is what I personally use and recommend. The rest of this article focuses on the case where you are not using it.</p>
<p>Xubuntu users in particular can get experimental packages for the latest <a href="http://argyllcms.com/">argyll</a> and optionally <a href="https://github.com/agalakhov/xiccd">xiccd</a> from my <a href="https://launchpad.net/~pmjdebruijn/+archive/xiccd-testing">xiccd-testing</a> PPAs. If you’re using a different distribution you’ll need to source help from its respective community.</p>
<h2 id="report-on-the-uncalibrated-display">Report On The Uncalibrated Display<a href="#report-on-the-uncalibrated-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To get an idea of the display’s uncalibrated capabilities we use argyll’s <a href="http://www.argyllcms.com/doc/dispcal.html">dispcal</a>:</p>
<pre><code>$ dispcal -H -y l -R
Uncalibrated response:
Black level = 0.4179 cd/m^2
50%   level = 42.93 cd/m^2
White level = 189.08 cd/m^2
Aprox. gamma = 2.14
Contrast ratio = 452:1
White     Visual Daylight Temperature = 7465K, DE 2K to locus =  3.2
</code></pre><p>Here we see the display has a fairly high uncalibrated native whitepoint of almost 7500<a href="https://en.wikipedia.org/wiki/Color_temperature#Categorizing_different_lighting">K</a>, which means the display is bluer than it should be. When we’re done you’ll notice the display becoming more yellow. If your display’s uncalibrated native whitepoint is below <a href="https://en.wikipedia.org/wiki/Illuminant_D65">6500K</a>, you’ll notice it becoming more blue when loading the profile.</p>
<p>Another point to note is the fairly high white level (brightness) of almost 190 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a>. It’s fairly typical to target 120 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a> for the final calibration, keeping in mind that we’ll lose 10 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a> or so to the calibration itself. So if your display reports a brightness significantly higher than 130 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a>, you may want to consider turning down the brightness another notch.</p>
<h2 id="calibrating-and-profiling-your-display">Calibrating And Profiling Your Display<a href="#calibrating-and-profiling-your-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First we’ll use argyll’s <a href="http://argyllcms.com/doc/dispcal.html">dispcal</a> to measure and adjust (calibrate) the display, compensating for the display’s <a href="https://en.wikipedia.org/wiki/White_point">whitepoint</a> (targeting <a href="https://en.wikipedia.org/wiki/CIE_Standard_Illuminant_D65">6500K</a>) and <a href="https://en.wikipedia.org/wiki/Gamma_correction">gamma</a> (targeting the industry standard 2.2; more info on gamma <a href="http://argyllcms.com/doc/gamma.html">here</a>):</p>
<pre><code>$ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 asus_eee_pc_1215p
</code></pre><p>Next we’ll use argyll’s <a href="http://argyllcms.com/doc/targen.html">targen</a> to generate measurement patches to determine its <a href="https://en.wikipedia.org/wiki/Gamut">gamut</a>:</p>
<pre><code>$ targen -v -d 3 -G -f 128 asus_eee_pc_1215p
</code></pre><p>Then we’ll use argyll’s <a href="http://argyllcms.com/doc/dispread.html">dispread</a> to apply the calibration file generated by <a href="http://argyllcms.com/doc/dispcal.html">dispcal</a>, and measure (profile) the display’s gamut using the patches generated by <a href="http://argyllcms.com/doc/targen.html">targen</a>:</p>
<pre><code>$ dispread -v -N -H -y l -k asus_eee_pc_1215p.cal asus_eee_pc_1215p
</code></pre><p>Finally we’ll use argyll’s <a href="http://argyllcms.com/doc/colprof.html">colprof</a> to generate a standardized ICC (version 2) color profile:</p>
<pre><code>$ colprof -v -D &quot;Asus Eee PC 1215P&quot; -C &quot;Copyright 2013 Pascal de Bruijn&quot; \
          -q m -a G -n c asus_eee_pc_1215p
Profile check complete, peak err = 9.771535, avg err = 3.383640, RMS = 4.094142
</code></pre><p>The parameters used to generate the ICC color profile are fairly conservative and should be robust. They will likely provide good results for most use cases. If you’re after better accuracy you may want to try replacing -a G with -a S or even -a s, but I very strongly recommend starting out with -a G.</p>
<p>You can inspect the contents of a standardized ICC (version 2 only) color profile using argyll’s <a href="http://argyllcms.com/doc/iccdump.html">iccdump</a>:</p>
<pre><code>$ iccdump -v 3 asus_eee_pc_1215p.icc
</code></pre><p>To try the color profile we just generated we can quickly load it using argyll’s <a href="http://argyllcms.com/doc/dispwin.html">dispwin</a>:</p>
<pre><code>$ dispwin -I asus_eee_pc_1215p.icc
</code></pre><p>Now you’ll likely see a color shift toward the yellow side. For some possibly aged displays you may notice it shifting toward the blue side.</p>
<p>If you’ve used a colorimeter (as opposed to a spectrophotometer) to profile your display and if you feel the profile might be off, you may want to consider reading <a href="http://argyllcms.com/doc/WideGamutColmters.html">this</a> and <a href="http://argyllcms.com/doc/CrushedDisplyBlacks.html">this</a>.</p>
<h2 id="report-on-the-calibrated-display">Report On The Calibrated Display<a href="#report-on-the-calibrated-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Next we can use argyll’s <a href="http://www.argyllcms.com/doc/dispcal.html">dispcal</a> again to check our newly calibrated display:</p>
<pre><code>$ dispcal -H -y l -r
Current calibration response:
Black level = 0.3432 cd/m^2
50%   level = 40.44 cd/m^2
White level = 179.63 cd/m^2
Aprox. gamma = 2.15
Contrast ratio = 523:1
White     Visual Daylight Temperature = 6420K, DE 2K to locus =  1.9
</code></pre><p>Here we see the calibrated display’s whitepoint sitting nicely around 6500K, as it should be.</p>
<h2 id="loading-the-profile-in-your-user-session">Loading The Profile In Your User Session<a href="#loading-the-profile-in-your-user-session" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If your desktop environment is XDG <a href="http://standards.freedesktop.org/autostart-spec/autostart-spec-latest.html">autostart</a> compliant, you may want to consider creating a .desktop file which will load the ICC color profile at session login for all users:</p>
<pre><code>$ cat /etc/xdg/autostart/dispwin.desktop
[Desktop Entry]
Encoding=UTF-8
Name=Argyll dispwin load color profile
Exec=dispwin -I /usr/share/color/icc/asus_eee_pc_1215p.icc
Terminal=false
Type=Application
Categories=
</code></pre><p>Alternatively you could use <a href="http://www.freedesktop.org/software/colord/">colord</a> and <a href="https://github.com/agalakhov/xiccd">xiccd</a> for a more sophisticated setup. If you do, make sure you have recent versions of both, particularly of <a href="https://github.com/agalakhov/xiccd">xiccd</a>, as it’s still a fairly young project.</p>
<p>First we’ll need to start <a href="https://github.com/agalakhov/xiccd">xiccd</a> (in the background), which detects your connected displays and adds them to <a href="http://www.freedesktop.org/software/colord/">colord</a>’s device inventory:</p>
<pre><code>$ nohup xiccd &amp;
</code></pre><p>Then we can query <a href="http://www.freedesktop.org/software/colord/">colord</a> for its list of available devices:</p>
<pre><code>$ colormgr get-devices
</code></pre><p>Next we need to query <a href="http://www.freedesktop.org/software/colord/">colord</a> for its list of available profiles (or alternatively search by a profile’s full filename):</p>
<pre><code>$ colormgr get-profiles
$ colormgr find-profile-by-filename /usr/share/color/icc/asus_eee_pc_1215p.icc
</code></pre><p>Next we’ll need to assign our profile’s object path to our display’s object path:</p>
<pre><code>$ colormgr device-add-profile \
   /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000 \
   /org/freedesktop/ColorManager/profiles/icc_e7fc40cb41ddd25c8d79f1c8d453ec3f
</code></pre><p>You should notice your display’s color shift within a second or so (<a href="https://github.com/agalakhov/xiccd">xiccd</a> applies it asynchronously), assuming you haven’t already applied it via <a href="http://www.argyllcms.com/doc/dispwin.html">dispwin</a> earlier (in which case you’ll notice no change).</p>
<p>If you suspect <a href="https://github.com/agalakhov/xiccd">xiccd</a> isn’t properly working, you may be able to debug the issue by stopping all <a href="https://github.com/agalakhov/xiccd">xiccd</a> background processes, and starting it in debug mode in the foreground:</p>
<pre><code>$ killall xiccd
$ G_MESSAGES_DEBUG=all xiccd
</code></pre><p>Also, in <a href="https://github.com/agalakhov/xiccd">xiccd</a>’s case you’ll need to create a .desktop file to load <a href="https://github.com/agalakhov/xiccd">xiccd</a> at session login for all users:</p>
<pre><code>$ cat /etc/xdg/autostart/xiccd.desktop
[Desktop Entry]
Encoding=UTF-8
Name=xiccd
GenericName=X11 ICC Daemon
Comment=Applies color management profiles to your session
Exec=xiccd
Terminal=false
Type=Application
Categories=
OnlyShowIn=XFCE;
</code></pre><p>You’ll note that <a href="https://github.com/agalakhov/xiccd">xiccd</a> does not need any parameters, since it queries <a href="http://www.freedesktop.org/software/colord/">colord</a>’s database for which profile to load.</p>
<p>If your desktop environment is not XDG autostart compliant, you’ll need to ask its community how to start custom commands (<a href="http://www.argyllcms.com/doc/dispwin.html">dispwin</a> or <a href="https://github.com/agalakhov/xiccd">xiccd</a> respectively) during session login.</p>
<h2 id="dual-screen-caveats">Dual Screen Caveats<a href="#dual-screen-caveats" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Currently having a dual screen color managed setup is complicated at best. Most programs use the <a href="http://www.burtonini.com/computing/x-icc-profiles-spec-0.1.html">_ICC_PROFILE</a> atom to get the system display profile, and there’s only one such atom. To resolve this issue <a href="http://www.oyranos.org/wiki/index.php?title=ICC_Profiles_in_X_Specification_0.4">new atoms</a> were defined to support multiple displays, but not all applications actually honor them. So with a dual screen setup there is always a risk of applications applying the profile for your first display to your second display or vice versa.</p>
<p>So practically speaking, if you need a <em>reliable</em> color managed setup, you should probably avoid dual screen setups altogether.</p>
<p>That said, most of argyll’s commands support a -d parameter for selecting which display to work with during calibration and profiling, but I have no personal experience with it whatsoever, since I purposefully don’t have a dual screen setup.</p>
<h2 id="application-support-caveats">Application Support Caveats<a href="#application-support-caveats" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As my other <a href="https://encrypted.pcode.nl/blog/2012/01/29/color-management-on-linux/">article</a> explains, display color profiles consist of two parts. One part (whitepoint &amp; gamma correction) is applied via X11 and thus benefits all applications. The second part (gamut correction), however, needs to be applied by the application itself, and application support for both input and display color management varies wildly. Many consumer grade applications have no color management awareness whatsoever.</p>
<p>Firefox can do color management and it’s half-enabled by default; read <a href="https://encrypted.pcode.nl/blog/2013/12/17/firefox-and-color-management/">this</a> to properly configure it.</p>
<p>GIMP, for example, has display color management disabled by default; you need to enable it via its preferences.</p>
<p>Eye of GNOME has display color management enabled by default, but it has nasty corner case behaviors, for example when a file has no metadata no color management is done at all (instead of assuming sRGB input). Some of these issues seem to have been resolved on Ubuntu Trusty (<a href="https://bugs.launchpad.net/ubuntu/+source/eog/+bug/272584">LP #272584</a>).</p>
<p>Darktable has display color management enabled by default and is one of the few applications which directly support <a href="http://www.freedesktop.org/software/colord/">colord</a> and the display specific atoms as well as the generic _ICC_PROFILE atom as fallback. There are however a few caveats for darktable as well, documented <a href="http://www.darktable.org/2013/05/display-color-management-in-darktable/">here</a>.</p>
<hr>
<p><small style='color:#aaa;'><em>This article by <a href="https://encrypted.pcode.nl/">Pascal de Bruijn</a> was originally <a href="https://encrypted.pcode.nl/blog/2013/11/24/display-color-profiling-on-linux/">published on his site</a> and is reproduced here with permission.</em></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[New Rapid Photo Downloader]]></title>
            <link>https://pixls.us/blog/2016/05/new-rapid-photo-downloader/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/05/new-rapid-photo-downloader/</guid>
            <pubDate>Sun, 22 May 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/about.jpg" /><br/>
                <h1>New Rapid Photo Downloader</h1> 
                <h2>Damon Lynch brings us a new release!</h2>  
<p>Community member <a href="http://www.damonlynch.net">Damon Lynch</a> happens to make an awesome program called <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a> in his “spare” time.  In fact, you may have heard mention of it as part of <a href="http://www.rileybrandt.com/">Riley Brandt’s</a> <a href="http://www.rileybrandt.com/lessons/"><em>“The Open Source Photography Course”</em></a><sup>*</sup>.  It is a program that specializes in downloading photos and videos from media as efficiently as possible, while extending the process with extra functionality.</p>
<p><small><sup>*</sup> Riley donates a portion of the proceeds from his course to various projects, and Rapid Photo Downloader is one of them!</small></p>
<!-- more -->
<h2 id="work-smart-not-dumb"><a href="#work-smart-not-dumb" class="header-link-alt">Work Smart, not Dumb</a></h2>
<p>The main features of Rapid Photo Downloader are listed on the website:</p>
<ol>
<li>Generates meaningful, user configurable <a href="http://www.damonlynch.net/rapid/features.html#generate">file and folder names</a></li>
<li>Downloads photos and videos from multiple devices <a href="http://www.damonlynch.net/rapid/features.html#download">simultaneously</a></li>
<li><a href="http://www.damonlynch.net/rapid/features.html#backup">Backs up</a> photos and videos as they are downloaded</li>
<li>Is carefully optimized to download and back up at <a href="http://www.damonlynch.net/rapid/features.html#download">high speed</a></li>
<li><a href="http://www.damonlynch.net/rapid/features.html#easy">Easy</a> to configure and use</li>
<li><a href="http://www.damonlynch.net/rapid/features.html#gnomekde">Runs</a> under Unity, Gnome, KDE and other Linux desktops</li>
<li>Available in <a href="http://www.damonlynch.net/rapid/features.html#languages">thirty</a> languages</li>
<li>Program configuration and use is <a href="http://www.damonlynch.net/rapid/documentation">fully documented</a></li>
</ol>
<p>Damon <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a1-is-now-released/1416">announced his 0.9.0a1 release on the forums</a>, and Riley Brandt even recorded a short overview of the new features:</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/7D0Fz6H3R34?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>(Shortly after announcing the 0.9.0a1 release, he <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">followed it up with a 0.9.0a2 release</a> with some bug fixes).</p>
<p>Some of the neat new features include being able to preview the download subfolder and storage space of devices <em>before</em> you download:</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/mainwindow.png" alt='Rapid Photo Downloader Main Window'>
</figure>

<p>Also being able to download from multiple devices in parallel, including from all cameras supported by <a href="http://gphoto.sourceforge.net/">gphoto2</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/downloading.png" alt='Rapid Photo Downloader Downloading'>
</figure>

<p>There is much, much more in this release.  Damon goes into much further detail on <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a1-is-now-released/1416">his post in the forum</a>, copied here:</p>
<hr>
<p>How about its <strong>Timeline</strong>, which groups photos and videos based on how much time elapsed between consecutive shots? Use it to identify photos and videos taken at different periods in a single day or over consecutive days.</p>
<p>You can adjust the time elapsed between consecutive shots that is used to build the Timeline to match your shooting sessions.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/timeline.png" alt='Rapid Photo Downloader timeline'>
</figure>
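<p>The gap-based grouping behind the Timeline can be sketched in a few lines of Python (an illustrative reconstruction, not Rapid Photo Downloader’s actual code): shots are split into sessions wherever the gap between consecutive timestamps exceeds the adjustable threshold.</p>

```python
# Sketch of gap-based session grouping, as the Timeline is described:
# a new group starts whenever two consecutive shots are further apart
# than max_gap_seconds.

def group_by_gap(timestamps, max_gap_seconds):
    """Split shot times (seconds) into sessions by elapsed-time gaps."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= max_gap_seconds:
            sessions[-1].append(t)   # close enough: same session
        else:
            sessions.append([t])     # gap too large: start a new session
    return sessions

shots = [0, 30, 70, 4000, 4100, 9000]   # seconds, for illustration
print(group_by_gap(shots, max_gap_seconds=3600))
# [[0, 30, 70], [4000, 4100], [9000]]
```

<p>Adjusting the slider simply changes <code>max_gap_seconds</code>, re-partitioning the same shots into coarser or finer sessions.</p>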

<p>How about a modern look?</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/about.png" alt='Rapid Photo Downloader about'>
</figure>

<p>Download instructions: <a href="http://damonlynch.net/rapid/download.html">http://damonlynch.net/rapid/download.html</a></p>
<p>For those who’ve used the older version, I’m copying and pasting from the ChangeLog, which covers most but not all changes:</p>
<ul>
<li><p>New features compared to the previous release, version 0.4.11:</p>
<ul>
<li><p>Every aspect of the user interface has been revised and modernized.</p>
</li>
<li><p>Files can be downloaded from all cameras supported by gPhoto2,
including smartphones. Unfortunately the previous version could download
from only some cameras.</p>
</li>
<li><p>Files that have already been downloaded are remembered. You can still select
previously downloaded files to download again, but they are unchecked by
default, and their thumbnails are dimmed so you can differentiate them
from files that are yet to be downloaded.</p>
</li>
<li><p>The thumbnails for previously downloaded files can be hidden.</p>
</li>
<li><p>Unique to Rapid Photo Downloader is its Timeline, which groups photos and
videos based on how much time elapsed between consecutive shots. Use it
to identify photos and videos taken at different periods in a single day
or over consecutive days. A slider adjusts the time elapsed between
consecutive shots that is used to build the Timeline. Time periods can be
selected to filter which thumbnails are displayed.</p>
</li>
<li><p>Thumbnails are bigger, and different file types are easier to
distinguish.</p>
</li>
<li><p>Thumbnails can be sorted using a variety of criteria, including by device
and file type.</p>
</li>
<li><p>Destination folders are previewed before a download starts, showing which
subfolders photos and videos will be downloaded to. Newly created folders
have their names italicized.</p>
</li>
<li><p>The storage space used by photos, videos, and other files on the devices
being downloaded from is displayed for each device. The projected storage
space on the computer to be used by photos and videos about to be
downloaded is also displayed.</p>
</li>
<li><p>Downloading is disabled when the projected storage space required is more
than the capacity of the download destination.</p>
</li>
<li><p>When downloading from more than one device, thumbnails for a particular
device are briefly highlighted when the mouse is moved over the device.</p>
</li>
<li><p>The order in which thumbnails are generated prioritizes representative
samples, based on time, which is useful for those who download very large
numbers of files at a time.</p>
</li>
<li><p>Thumbnails are generated asynchronously and in parallel, using a load
balancer to assign work to processes utilizing up to 4 CPU cores.
Thumbnail generation is faster than the 0.4 series of program
releases, especially when reading from fast memory cards or SSDs.
(Unfortunately generating thumbnails for a smartphone’s photos is painfully
slow. Unlike photos produced by cameras, smartphone photos do not contain
embedded preview images, which means the entire photo must be downloaded
and cached for its thumbnail to be generated. Although Rapid Photo Downloader
does this for you, nothing can be done to speed it up).</p>
</li>
<li><p>Thumbnails generated when a device is scanned are cached, making thumbnail
generation quicker on subsequent scans.</p>
</li>
<li><p>Libraw is used to render RAW images from which a preview cannot be extracted,
which is the case with Android DNG files, for instance.</p>
</li>
<li><p><a href="https://www.freedesktop.org/wiki/">Freedesktop.org</a> thumbnails for RAW and TIFF photos are generated once they
have been downloaded, which means they will have thumbnails in programs like
Gnome Files, Nemo, Caja, Thunar, PCManFM and Dolphin. If the path files are being
downloaded to contains symbolic links, a thumbnail will be created for the
path with and without the links. While generating these thumbnails does slow the
download process a little, it’s a worthwhile tradeoff because Linux desktops
typically do not generate thumbnails for RAW images, and thumbnails only for
small TIFFs.</p>
</li>
<li><p>The program can now handle hundreds of thousands of files at a time.</p>
</li>
<li><p>Tooltips display information about the file including name, modification
time, shot taken time, and file size.</p>
</li>
<li><p>Right click on thumbnails to open the file in a file browser or copy the
path.</p>
</li>
<li><p>When downloading from a camera with dual memory cards, an emblem beneath the
thumbnail indicates which memory cards the photo or video is on.</p>
</li>
<li><p>Audio files that accompany photos on professional cameras like the Canon
EOS-1D series of cameras are now also downloaded. XMP files associated with
a photo or video on any device are also downloaded.</p>
</li>
<li><p>Comprehensive log files are generated that allow easier diagnosis of
program problems in bug reports. Messages optionally logged to a
terminal window are displayed in color.</p>
</li>
<li><p>When running under <a href="http://www.ubuntu.com/">Ubuntu</a>’s Unity desktop, a progress bar and a count of files
available for download are displayed on the program’s launcher.</p>
</li>
<li><p>Status bar messages have been significantly revamped.</p>
</li>
<li><p>Determining a video’s correct creation date and time has been improved, using a
combination of the tools <a href="https://mediaarea.net/en/MediaInfo">MediaInfo</a> and <a href="http://www.sno.phy.queensu.ca/~phil/exiftool/">ExifTool</a>. Getting the right date and time
is trickier than it might appear. Depending on the video file and the camera that
produced it, neither MediaInfo nor ExifTool always give the correct result.
Moreover some cameras always use the UTC time zone when recording the creation
date and time in the video’s metadata, whereas other cameras use the time zone
the video was created in, while others ignore time zones altogether.</p>
</li>
<li><p>The time remaining until a download is complete (which is shown in the status
bar) is more stable and more accurate. The algorithm is modelled on that
used by Mozilla Firefox.</p>
</li>
<li><p>The installer has been totally rewritten to take advantage of <a href="https://www.python.org/">Python</a>‘s
tool pip, which installs Python packages. Rapid Photo Downloader can now
be easily installed and uninstalled. On <a href="http://www.ubuntu.com/">Ubuntu</a>, <a href="https://www.debian.org/">Debian</a> and <a href="https://getfedora.org/">Fedora</a>-like
Linux distributions, the installation of all dependencies is automated.
On other Linux distributions, dependency installation is partially
automated.</p>
</li>
<li><p>When choosing a Job Code, whether to remember the choice or not can be
specified.</p>
</li>
</ul>
</li>
<li><p>Removed feature:</p>
<ul>
<li>Rotate JPEG images - to apply lossless rotation, this feature requires the
program jpegtran. Some users reported that jpegtran corrupted their JPEGs’
metadata – which is bad under any circumstances, but terrible when applied
to the only copy of a file. To preserve file integrity under all circumstances,
the rotate JPEG option has therefore been removed.</li>
</ul>
</li>
<li><p>Under the hood, the code now uses:</p>
<ul>
<li><p>PyQt 5.4 +</p>
</li>
<li><p>gPhoto2 to download from cameras</p>
</li>
<li><p>Python 3.4 +</p>
</li>
<li><p>ZeroMQ for interprocess communication</p>
</li>
<li><p>GExiv2 for photo metadata</p>
</li>
<li><p>Exiftool for video metadata</p>
</li>
<li><p>Gstreamer for video thumbnail generation</p>
</li>
</ul>
</li>
<li><p>Please note if you use a system monitor that displays network activity,
don’t be alarmed if it shows increased local network activity while the
program is running. The program uses ZeroMQ over TCP/IP for its
interprocess messaging. Rapid Photo Downloader’s network traffic is
strictly between its own processes, all running solely on your computer.</p>
</li>
<li><p>Missing features, which will be implemented in future releases:</p>
<ul>
<li><p>Components of the user interface that are used to configure file
renaming, download subfolder generation, backups, and miscellaneous
other program preferences. While they can be configured by manually
editing the program’s configuration file, that’s far from easy and is
error prone. Meanwhile, some options can be configured using the command
line.</p>
</li>
<li><p>There are no full size photo and video previews.</p>
</li>
<li><p>There is no error log window.</p>
</li>
<li><p>Some main menu items do nothing.</p>
</li>
<li><p>Files can only be copied, not moved.</p>
</li>
</ul>
</li>
</ul>
<hr>
<p>Of course, Damon doesn’t sit still.  He quickly followed up the 0.9.0a1 announcement by <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">announcing 0.9.0a2</a> which included a few bug fixes from the previous release:</p>
<ul>
<li><p>Added command line option to import preferences from an old program
version (0.4.11 or earlier).</p>
</li>
<li><p>Implemented auto unmount using GIO (which is used on most Linux desktops) and
UDisks2 (all those desktops that don’t use GIO, e.g. KDE). </p>
</li>
<li><p>Fixed bug while logging processes being forcefully terminated.</p>
</li>
<li><p>Fixed bug where stored sequence number was not being correctly used when
renaming files.</p>
</li>
<li><p>Fixed bug where download would crash on Python 3.4 systems due to use of Python
3.5 only math.inf</p>
</li>
</ul>
<hr>
<p>If you’ve been considering optimizing your workflow for photo import and initial sorting, now is as good a time as any - particularly with all of the great new features that have been packed into this release!  Head on over to the <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a> website to have a look and see the instructions for getting a copy:</p>
<p><a href="http://damonlynch.net/rapid/download.html">http://damonlynch.net/rapid/download.html</a></p>
<p>Remember, this is <em>Alpha</em> software still (though most of the functionality is in place).  If you do run into any problems, please drop in and let Damon know in <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">the forums</a>!</p>
<style>
ol { max-width: 32rem; margin:0 auto; }
</style>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[G'MIC 1.7.1]]></title>
            <link>https://pixls.us/blog/2016/05/g-mic-1-7-1/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/05/g-mic-1-7-1/</guid>
            <pubDate>Wed, 18 May 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/then_we_shall_all_burn_together.jpg" /><br/>
                <h1>G'MIC 1.7.1</h1> 
                <h2>When the flowers are blooming, image filters abound!</h2>  
                <p>A new version <strong>1.7.1</strong> &ldquo;<em>Spring 2016</em>&rdquo; of <a href="http://gmic.eu"><em>G’MIC</em></a> (<em>GREYC’s Magic for Image Computing</em>),
the open-source framework for image processing, has been released recently (<em>26 April 2016</em>).
This is a great opportunity to summarize some of the latest advances and features over the last 5 months.</p>
<!-- more -->
<h2 id="g-mic-a-brief-overview"><a href="#g-mic-a-brief-overview" class="header-link-alt">G’MIC: A brief overview</a></h2>
<p><a href="http://gmic.eu"><em>G’MIC</em></a> is an open-source project started in <em>August 2008</em>. It has been developed in the
<a href="https://www.greyc.fr/image"><em>IMAGE</em> team</a> of the <a href="https://www.greyc.fr/fr/node/6"><em>GREYC</em></a> laboratory
from the <a href="http://www.cnrs.fr"><em>CNRS</em></a> (one of the major French public research institutes).
This team is made up of researchers and teachers specializing in the algorithms and mathematics of image processing.
<em>G’MIC</em> is released under the free software licence <a href="http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.html"><em>CeCILL</em></a>
(<em>GPL</em>-compatible) for various platforms (<em>Linux, Mac and Windows</em>). It provides a set of various user interfaces
for the manipulation of <em>generic</em> image data, that is images or image sequences of
<a href="https://en.wikipedia.org/wiki/Hyperspectral_imaging">multispectral data</a> in <em>2D</em> or <em>3D</em>, and with high-bit precision
(up to 32-bit floats per channel). Of course, it manages “classical” color images as well.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/logo_gmic.png" alt='logo_gmic' width='639' height='211'>
<figcaption>
Logo and (new) mascot of the G’MIC project, the open-source framework for image processing.
</figcaption>
</figure>

<p>Note that the project just got a redesign of its mascot <em>Gmicky</em>, drawn by <a href="http://www.davidrevoy.com/static6/about-me"><em>David Revoy</em></a>, a French illustrator well known to free graphics lovers for being the author of the great libre webcomic <a href="http://www.peppercarrot.com/"><em>Pepper&amp;Carrot</em></a>.</p>
<p><em>G’MIC</em> is probably best known for its <a href="http://www.gimp.org"><em>GIMP</em></a> <a href="http://gmic.eu/gimp.shtml">plug-in</a>,
first released in <em>2009</em>. Today, this popular <em>GIMP</em> extension offers more than <em>460</em> customizable filters and effects
to apply to your images.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_gimp171_s.png" alt='gmic_gimp171_s' width='640' height='377'>
<figcaption>
Overview of the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>But <em>G’MIC</em> is not only a plug-in for GIMP. It also offers a <a href="http://gmic.eu/reference.shtml">command-line interface</a>, which can
be used alongside the <em>CLI</em> tools from <a href="http://www.imagemagick.org/"><em>ImageMagick</em></a> or
<a href="http://www.graphicsmagick.org"><em>GraphicsMagick</em></a>
(this is undoubtedly the most powerful and flexible interface of the framework).
<em>G’MIC</em> also has a web service <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a> to apply effects on your images
directly from a web browser. Other <em>G’MIC</em>-based interfaces also exist (<a href="https://www.youtube.com/watch?v=k1l3RdvwHeM"><em>ZArt</em></a>,
a plug-in for <a href="http://www.krita.org"><em>Krita</em></a>, filters for <a href="http://photoflowblog.blogspot.fr/"><em>Photoflow</em></a>…).
All these interfaces are based on the generic <em>C++</em> libraries <a href="http://cimg.eu"><em>CImg</em></a> and
<a href="http://gmic.eu/libgmic.shtml"><em>libgmic</em></a> which are portable, thread-safe and multi-threaded
(through the use of <a href="http://openmp.org/"><em>OpenMP</em></a>).
Today, <em>G’MIC</em> has more than <a href="http://gmic.eu/reference.shtml"><em>900</em> functions</a> to process images, all being
fully configurable, for a library of only approximately <em>150 kloc</em> of source code.
Its features cover a wide spectrum of the image processing field, with algorithms for
geometric and color manipulations, image filtering (denoising/sharpening with spectral, variational or
patch-based approaches…), motion estimation and registration, drawing of graphic primitives (up to 3d vector objects),
edge detection, object segmentation, artistic rendering, etc.
This is a <em>versatile</em> tool, useful to visualize and explore complex image data,
as well as to build custom image processing pipelines (see these
<a href="http://issuu.com/dtschump/docs/gmic_slides">slides</a> to get more information about
the motivations and goals of the <em>G’MIC</em> project).</p>
<h2 id="a-selection-of-some-new-filters-and-effects"><a href="#a-selection-of-some-new-filters-and-effects" class="header-link-alt">A selection of some new filters and effects</a></h2>
<p>Here we look at the descriptions of some of the most significant filters recently added. We illustrate their usage
from the <em>G’MIC</em> plug-in for <em>GIMP</em>. All of these filters are of course available from other interfaces as well
(in particular within the <em>CLI</em> tool <a href="http://gmic.eu/reference.shtml"><code>gmic</code></a>).</p>
<h3 id="painterly-rendering-of-photographs"><a href="#painterly-rendering-of-photographs" class="header-link-alt">Painterly rendering of photographs</a></h3>
<p>The filter <strong>Artistic / Brushify</strong> tries to transform an image into a <em>painting</em>.
Here, the idea is to simulate the process of painting with brushes on a white canvas. One provides a template image,
and the algorithm first analyzes the image geometry (local contrasts and orientations of the contours), then
attempts to reproduce the image with a single <em>brush</em> that is locally rotated and scaled according to the
contour geometry.
By simulating enough brushstrokes, one gets a “painted” version of the template image, more or less close to the original,
depending on the brush shape, its size, the number of allowed orientations, etc.
All these settings are exposed to the user as parameters of the algorithm,
so the filter can render a wide variety of painting effects.</p>
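<p>To make the process concrete, here is a minimal Python sketch of this kind of stroke-based rendering (using NumPy; the brush model, the 4-orientation quantization and the name <code>brushify_sketch</code> are illustrative inventions, not G’MIC’s actual implementation):</p>

```python
import numpy as np

def brushify_sketch(image, brush, n_strokes=5000, rng=None):
    """Toy stroke-based renderer: stamp a brush aligned with local contours.
    `image` is a 2D greyscale array, `brush` a small 2D mask (nonzero = bristle)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    # Contour orientation = perpendicular to the local intensity gradient.
    gy, gx = np.gradient(image.astype(float))
    angle = np.arctan2(gy, gx) + np.pi / 2
    canvas = np.ones((h, w), dtype=float)   # start from a white canvas
    m = max(brush.shape)                    # safety margin for rotated stamps
    for _ in range(n_strokes):
        y = int(rng.integers(m, h - m))
        x = int(rng.integers(m, w - m))
        # Quantize the orientation to 4 steps and rotate the brush accordingly.
        k = int(round(angle[y, x] / (np.pi / 2))) % 4
        stamp = np.rot90(brush, k)
        sy, sx = stamp.shape
        y0, x0 = y - sy // 2, x - sx // 2
        region = canvas[y0:y0 + sy, x0:x0 + sx]
        # Each stroke paints the local image intensity where the brush covers.
        region[stamp > 0] = float(image[y, x])
    return canvas
```

<p>G’MIC’s real filter supports arbitrary rotation angles, brush scaling and opacity; this sketch only keeps the core idea of orienting each stamp along the local contours.</p>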
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_brushify.jpg" alt='gmic_brushify' width='640' height='399'>
<figcaption>
Overview of the filter “Brushify” in the G’MIC plug-in for GIMP. The brush used by the algorithm is visible on the top left.
</figcaption>
</figure>

<p>The animation below illustrates the diversity of results one can get with this filter, applied on the same
input picture of a lion. Various brush shapes and geometries have been supplied to the algorithm.
<em>Brushify</em> is computationally expensive, so its implementation is parallelized (each core paints several brushstrokes simultaneously).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/brushify2.gif" alt='brushify2' width='640' height='512'>
<figcaption>
A few examples of renderings obtained with “Brushify” from the same template image, but with different brushes and parameters.
</figcaption>
</figure>

<p>Note that it’s particularly fun to invoke this filter from the command line interface (using the option <code>-brushify</code>
available in <code>gmic</code>) to process a sequence of video frames
(<a href="https://www.youtube.com/watch?v=tf_fMzS3UyQ&amp;feature=youtu.be">see this example of a “brushified” video</a>):</p>
<div class='fluid-vid'>
<iframe width="640" height="480" src="https://www.youtube-nocookie.com/embed/tf_fMzS3UyQ?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br></p>
<h3 id="reconstructing-missing-data-from-sparse-samples"><a href="#reconstructing-missing-data-from-sparse-samples" class="header-link-alt">Reconstructing missing data from sparse samples</a></h3>
<p><em>G’MIC</em> gets a new algorithm to reconstruct missing data in images. This is a classical problem in image processing,
often named “<a href="https://en.wikipedia.org/wiki/Inpainting">Image Inpainting</a>“, and <em>G’MIC</em> already had a lot of
useful filters to solve this problem.
Here, the newly added interpolation method assumes only a sparse set of image data is known, for instance a few scattered pixels
over the image (instead of continuous chunks of image data). The analysis and reconstruction of the global
image geometry is then particularly tough.</p>
<p>The new option <code>-solidify</code> in <em>G’MIC</em> allows the reconstruction of dense image data from such a sparse sampling,
based on a multi-scale <a href="https://en.wikipedia.org/wiki/Diffusion_equation">diffusion PDE</a>-based technique.
The figure below illustrates the ability of the algorithm with an example of image reconstruction. We start from
an input <a href="https://www.flickr.com/photos/jfrogg/5810936597/in/photolist-9Ruz12-oHDr6x-8VW83C-iM2cR1-oXCyji-nTGYXY-oavqFt-5emqwQ-8Qx6Nx-pkREpT-nYhS8D-najxb9-a3XHVZ-jUq3Aw-qGTeCo-r2yj33-pvci15-p7WnqP-ajPFM1-7SquY5-6busU-7B5iLy-9Av8Kr-4jZ6zq-b2anbD-c2LF73-aiQ5Ta-cdTWpb-ob7FJx-aohzY1-razwT3-p5rXdc-fCvsV3-4N8vKM-4Nhy4z-4HVUCr-eMUCnQ-bqJnaX-6CuzQd-qCYpsk-NzLkj-hYUtqE-oVbqnh-4H1DkM-r4ArWu-drpZHp-pHbCDL-8Zr8K1-xxf3Q9-e8dK5N">image of a waterdrop</a>,
and we keep only 2.7% of the image data (a very small amount of data!). The algorithm is able to reconstruct
a whole image that looks like the input, even if all the small details have not been
fully reconstructed (of course!). The more samples we have, the finer the details we can recover.</p>
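<p>A much simplified, single-scale version of this idea can be sketched in Python with NumPy (this is plain heat diffusion with the samples clamped, not G’MIC’s actual multi-scale PDE scheme):</p>

```python
import numpy as np

def diffuse_reconstruct(samples, mask, n_iter=2000):
    """Fill unknown pixels by repeated neighbour averaging (heat diffusion),
    clamping the known sparse samples back after every iteration."""
    samples = samples.astype(float)
    known = mask.astype(bool)
    u = samples.copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        # 4-neighbour average (one explicit diffusion step).
        u = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        u[known] = samples[known]   # keep the known data fixed
    return u
```

<p>Each iteration averages every pixel with its four neighbours while pinning the known samples, so the known values gradually diffuse into the unknown regions; a multi-scale scheme converges much faster on large images.</p>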
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/waterdrop2.gif" alt='waterdrop2' width='640' height='346'>
<figcaption>
Reconstruction of an image from a sparse sampling.
</figcaption>
</figure>

<p>As this reconstruction technique is quite generic, several new <em>G’MIC</em> filters take advantage of it:</p>
<ul>
<li>Filter <strong>Repair / Solidify</strong> applies the algorithm in a direct manner, by reconstructing transparent areas
from the interpolation of opaque regions.
The animation below shows how this filter can be used to create an artistic blur on the image borders.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_sol.gif" alt='gmic_sol' width='640' height='410'>
<figcaption>
Overview of the “Solidify” filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>From an artistic point of view, this filter offers many possibilities.
For instance, it becomes really easy to generate color gradients with complex shapes, as shown with the two examples below
(also in <a href="https://www.youtube.com/watch?v=rgLQayllv-g">this video</a> that details the whole process).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_solidify2.jpg" alt='gmic_solidify2' width='636' height='636'>
<figcaption>
Using the “Solidify” filter of G’MIC to easily create color gradients with complex shapes (input images on the left, filter results on the right).
</figcaption>
</figure>

<ul>
<li>Filter <strong>Artistic / Smooth abstract</strong> uses the same idea as the waterdrop example above:
it purposely sub-samples the image in a sparse way, choosing keypoints mainly on the image edges, then uses the reconstruction
algorithm to get the image back. With a low number of samples, the filter can only render a piecewise smooth image,
i.e. a smooth abstraction of the input image.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/smooth_abstract.jpg" alt='smooth_abstract' width='640' height='456'>
<figcaption>
Overview of the “Smooth abstract” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li>Filter <strong>Rendering / Gradient [random]</strong> is able to synthesize random colored backgrounds. Here again, the filter initializes
a set of color keypoints randomly chosen over the image, then interpolates them with the new reconstruction algorithm.
We end up with a psychedelic background composed of randomly oriented color gradients.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gradient_random.jpg" alt='gradient_random' width='640' height='387'>
<figcaption>
Overview of the “Gradient [random]” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li><strong>Simulation of analog films</strong>: the new reconstruction algorithm also allowed a major improvement
to all the analog film emulation filters that have been present in <em>G’MIC</em> for years.
The section <strong>Film emulation/</strong> offers a wide variety of filters for this purpose. Their goal is to apply color transformations
that simulate the look of a picture shot with an analog camera loaded with a certain kind of film.
Below, you can see for instance a few of the <em>300</em> colorimetric transformations that are available in <em>G’MIC</em>.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut1.jpg" alt='gmic_clut1' width='481' height='725'>
<figcaption>
A few of the 300+ color transformations available in G’MIC.
</figcaption>
</figure>

<p>From an algorithmic point of view, such a color mapping is extremely simple to implement:
for each of the <em>300+</em> presets, <em>G’MIC</em> actually has a <a href="http://www.quelsolaar.com/technology/clut.html"><em>HaldCLUT</em></a>, that is,
a function defining, for each possible color <em>(R,G,B)</em> of the original image, a new color <em>(R’,G’,B’)</em> to set
instead. As this function is not necessarily analytic, a <em>HaldCLUT</em> is stored in a discrete manner as a lookup table that gives
the result of the mapping <em>for all</em> possible colors of the <em>RGB</em> cube (that is <em>2^24 = 16777216</em> values
if we work with <em>8-bit</em> precision per color component). This <em>HaldCLUT</em>-based color mapping is illustrated below for all values of the <em>RGB</em> color cube.</p>
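<p>In code, applying such a lookup table is just an indexing operation. Here is a Python sketch with NumPy, using a tiny invented 2×2×2 identity CLUT and nearest-neighbour lookup (real HaldCLUTs are far denser and are interpolated trilinearly):</p>

```python
import numpy as np

def apply_clut(image, clut):
    """Map each (R,G,B) pixel through a 3D lookup table of shape (n, n, n, 3).
    Nearest-neighbour lookup; real implementations interpolate trilinearly."""
    n = clut.shape[0]
    # Scale 8-bit colour values onto the CLUT grid indices.
    idx = np.clip(np.rint(image.astype(float) / 255.0 * (n - 1)).astype(int), 0, n - 1)
    return clut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Tiny invented CLUT: an identity mapping on a 2x2x2 grid.
grid = np.linspace(0.0, 255.0, 2)
clut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
```

<p>A real preset simply stores a non-identity cube, so the same indexing applies any film-emulation look in one vectorized pass over the image.</p>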
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut0.jpg" alt='gmic_clut0' width='322' height='445'>
<figcaption>
Principle of an HaldCLUT-based colorimetric transformation.
</figcaption>
</figure>

<p>This is a large amount of data: even after subsampling the <em>RGB</em> space (e.g. with <em>6 bits</em> per component) and compressing the corresponding <em>HaldCLUT</em> file,
you end up with between <em>200</em> and <em>300</em> kB for each mapping file.
Multiply this number by <em>300+</em> (the number of available mappings in <em>G’MIC</em>), and you get a total of <em>85MB</em> of data to store all these color transformations.
Definitely not convenient to spread and package!</p>
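<p>The arithmetic behind these figures can be checked quickly (the helper <code>clut_bytes</code> is just an illustration):</p>

```python
# Storage cost of a HaldCLUT: one (R',G',B') triplet per cell of the RGB cube.
def clut_bytes(bits_per_channel):
    """Uncompressed size in bytes, at 3 bytes per entry."""
    return (2 ** bits_per_channel) ** 3 * 3

full_cube = clut_bytes(8)   # the whole 8-bit RGB cube: 50 331 648 bytes (~48 MiB)
subsampled = clut_bytes(6)  # 6 bits per component: 786 432 bytes (768 KiB)
```

<p>A 6-bit subsampled cube weighs 768 KiB raw; file compression brings it down to the 200–300 kB range, and multiplying by 300+ presets gives the ~85 MB total quoted above.</p>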
<p>The idea was then to develop a new lossy compression technique focused on <em>HaldCLUT</em> files, that is, volumetric discretized vector-valued functions which are piecewise smooth by nature.
And that is what has been done in <em>G’MIC</em>, thanks to the new sparse reconstruction algorithm. Indeed, the reconstruction technique also works with <em>3D</em> image data (such as a <em>HaldCLUT</em>!), so
one simply has to extract a sufficient number of significant keypoints in the <em>RGB</em> cube and interpolate them afterwards to reconstruct a whole <em>HaldCLUT</em>
(taking care that the reconstruction error stays small enough to ensure that
the color mapping obtained with the compressed <em>HaldCLUT</em> is indistinguishable from the non-compressed one).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut2.jpg" alt='gmic_clut2' width='640' height='320'>
<figcaption>
How the decompression of an HaldCLUT now works in G’MIC, from a set of colored keypoints located in the RGB cube.
</figcaption>
</figure>

<p>Thus, <em>G’MIC</em> doesn’t need to store all the color data from a <em>HaldCLUT</em>, but only a sparse sampling of it (i.e. a sequence of <code>{ rgb_keypoint, new_rgb_color }</code>).
Depending on the geometric complexity of the <em>HaldCLUTs</em> to encode, more or fewer keypoints are necessary (roughly from <em>30</em> to <em>2000</em>).
As a result, storing the <em>300+</em> <em>HaldCLUTs</em> in <em>G’MIC</em> now requires only <em>850 KiB</em> of data (instead of <em>85 MiB</em>), a compression gain of <em>99%</em>!
That makes the whole <em>HaldCLUT</em> data storable in a single file that is easy to ship with the <em>G’MIC</em> package. A user can now apply all the <em>G’MIC</em> color transformations
while being offline (previously, each <em>HaldCLUT</em> had to be downloaded separately from the <em>G’MIC</em> server when requested).</p>
<p>This new reconstruction algorithm from sparse samples looks really promising, and no doubt it will be used in other filters in the future.</p>
<h3 id="make-textures-tileable"><a href="#make-textures-tileable" class="header-link-alt">Make textures tileable</a></h3>
<p>Filter <strong>Arrays &amp; tiles / Make seamless [patch-based]</strong> tries to transform an input texture to make it <em>tileable</em>, so that it can be duplicated as <em>tiles</em> along the horizontal and vertical axes
without visible seams on the borders of adjacent tiles.
Note that this is something that can be extremely hard to achieve if the input texture has little self-similarity, or glaring spatial changes of luminosity.
That is the case for instance with the “Salmon” texture shown below as four adjacent tiles (configuration <em>2x2</em>) with a lighting that goes from dark (on the left) to bright (on the right).
Here, the algorithm modifies the texture so that the tiling shows no seams, but where the aspect of the original texture is preserved as much as possible
(only the texture borders are modified).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/seamless1.gif" alt='seamless1' width='640' height='532'>
<figcaption>
Overview of the “Make Seamless” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>We can imagine some great uses of this filter, for instance in video games, where texture tiling is common to render large virtual worlds.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/seamless2.gif" alt='seamless2' width='640' height='427'>
<figcaption>
Result of the “Make seamless” filter of G’MIC to make a texture tileable.
</figcaption>
</figure>


<h3 id="image-decomposition-into-several-levels-of-details"><a href="#image-decomposition-into-several-levels-of-details" class="header-link-alt">Image decomposition into several levels of details</a></h3>
<p>A “new” filter <strong>Details / Split details [wavelets]</strong> has been added to decompose an image into several levels of details.
It is based on the so-called <a href="https://en.wikipedia.org/wiki/Stationary_wavelet_transform">“à trous” wavelet decomposition</a>.
For those who already know the popular <a href="http://registry.gimp.org/node/11742"><em>Wavelet Decompose</em></a> plug-in for <em>GIMP</em>, there won’t be much novelty here, as it is mainly the same kind of
decomposition technique that has been implemented.
Having it directly in <em>G’MIC</em> is still great news: it now offers a preview of the different scales that will be computed, and the implementation is parallelized to take advantage of multiple cores.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_wavelets.jpg" alt='gmic_wavelets' width='640' height='448'>
<figcaption>
Overview of the wavelet-based image decomposition filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>The filter outputs several layers, so that each layer contains the details of the image at a given scale. All those layers blended together give the original image back.</p>
<p>Thus, one can work on those output layers separately and modify the image details only at a given scale. There are a lot of applications for this kind of image decomposition,
one of the most spectacular being the ability to retouch skin in portraits: the flaws of the skin are often present in the middle-sized scales, while
the natural skin texture (the pores) lives in the fine details. By selectively removing the flaws while keeping the pores, the skin keeps a natural aspect after the retouch
(see <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">this wonderful link</a> for a detailed tutorial about skin retouching techniques with <em>GIMP</em>).</p>
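<p>The decomposition, and the fact that the layers sum back exactly to the input, can be sketched in a few lines of Python with NumPy (a minimal à trous split built on simple 3-tap blurs; G’MIC’s actual smoothing kernels differ):</p>

```python
import numpy as np

def atrous_split(image, n_scales=4):
    """Split a 2D image into detail layers plus a residual, a-trous style:
    each detail layer is the difference between two successive blurs."""
    def blur(u, step):
        # Separable 3-tap [1, 2, 1]/4 blur with holes of size `step`.
        p = np.pad(u, ((0, 0), (step, step)), mode="edge")
        u = (p[:, :-2 * step] + 2.0 * p[:, step:-step] + p[:, 2 * step:]) / 4.0
        p = np.pad(u, ((step, step), (0, 0)), mode="edge")
        return (p[:-2 * step, :] + 2.0 * p[step:-step, :] + p[2 * step:, :]) / 4.0
    layers, current = [], image.astype(float)
    for s in range(n_scales):
        smoothed = blur(current, 2 ** s)   # blur with doubled hole size per scale
        layers.append(current - smoothed)  # details at scale s
        current = smoothed
    layers.append(current)                 # low-frequency residual
    return layers
```

<p>Because each detail layer is defined as the difference between two successive blurs, the sum of all layers telescopes back to the original image exactly, which is what makes selective per-scale retouching lossless.</p>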
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/skin.gif" alt='skin' width='480' height='480'>
<figcaption>
Using the wavelet decomposition filter in G’MIC for removing visible skin flaws on a portrait.
</figcaption>
</figure>


<h3 id="image-denoising-based-on-patch-pca-"><a href="#image-denoising-based-on-patch-pca-" class="header-link-alt">Image denoising based on “Patch-PCA”</a></h3>
<p><em>G’MIC</em> is also well known for offering a wide range of algorithms for image <em>denoising</em> and <em>smoothing</em> (currently more than a dozen). And it just got one more!
The filter <strong>Repair / Smooth [patch-pca]</strong> implements a new image denoising algorithm that is both efficient and computationally intensive (despite its multi-threaded implementation, you
should probably avoid it on a machine with fewer than 8 cores…).
In return, it sometimes does magic to suppress noise while preserving small details.</p>
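<p>The general recipe behind this family of denoisers can be sketched as follows (a toy global patch-PCA in Python with NumPy; G’MIC’s algorithm works on groups of similar patches and is considerably more sophisticated):</p>

```python
import numpy as np

def patch_pca_denoise(image, patch=4, keep=8):
    """Toy patch-PCA denoiser: PCA over all non-overlapping patches,
    keeping only the `keep` strongest principal components."""
    h, w = image.shape
    ph, pw = h // patch * patch, w // patch * patch
    # Gather non-overlapping patches as rows of a matrix.
    P = (image[:ph, :pw].astype(float)
         .reshape(ph // patch, patch, pw // patch, patch)
         .transpose(0, 2, 1, 3).reshape(-1, patch * patch))
    mean = P.mean(axis=0)
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    # Zero out the weak components, which mostly encode noise.
    S[keep:] = 0.0
    Q = (U * S) @ Vt + mean
    out = image.astype(float).copy()
    out[:ph, :pw] = (Q.reshape(ph // patch, pw // patch, patch, patch)
                     .transpose(0, 2, 1, 3).reshape(ph, pw))
    return out
```

<p>Projecting patches onto their principal components and discarding the weak ones removes most of the noise energy, since noise spreads evenly over all components while the signal concentrates in a few.</p>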
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/patchpca.jpg" alt='patchpca' width='640' height='291'>
<figcaption>
Result of the new patch-based denoising algorithm added to G’MIC.
</figcaption>
</figure>


<h3 id="the-droste-effect"><a href="#the-droste-effect" class="header-link-alt">The “Droste” effect</a></h3>
<p><a href="https://en.wikipedia.org/wiki/Droste_effect">The Droste effect</a> (also known as “<em>mise en abyme</em>” in art) is the effect of a picture appearing within itself recursively.
To achieve this, a new filter <strong>Deformations / Continuous droste</strong> has been added to <em>G’MIC</em>. It’s actually a complete rewrite of the popular <em>Mathmap</em>
<a href="https://www.flickr.com/groups/88221799@N00/discuss/72157601071820707/">Droste filter</a> that has existed for years.
<em>Mathmap</em> was a very popular plug-in for <em>GIMP</em>, but it seems to be unmaintained now. The Droste effect was one of its most iconic and complex filters.
<em>Martin “Souphead”</em>, a former user of <em>Mathmap</em>, took the bull by the horns and converted the complex code of this filter into a <em>G’MIC</em> script,
resulting in a parallelized implementation of the filter.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/droste0.jpg" alt='droste0' width='640' height='373'>
<figcaption>
Overview of the converted “Droste” filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>This filter opens the door to countless artistic effects. For instance, it becomes trivial to create the result below in a few steps: make a selection around the clock, move it onto a transparent background, run the <em>Droste</em> filter,
<em>et voilà!</em></p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/droste2.jpg" alt='droste2' width='488' height='736'>
<figcaption>
A simple example of what the G’MIC “Droste” filter can do.
</figcaption>
</figure>


<h3 id="equirectangular-to-nadir-zenith-transformation"><a href="#equirectangular-to-nadir-zenith-transformation" class="header-link-alt">Equirectangular to nadir-zenith transformation</a></h3>
<p>The filter <strong>Deformations / Equirectangular to nadir-zenith</strong> is another filter converted from <em>Mathmap</em> to <em>G’MIC</em>.
It is specifically used for the processing of panoramas: it reconstructs both the
<a href="https://en.wikipedia.org/wiki/Zenith"><em>Zenith</em></a> and the
<a href="https://en.wikipedia.org/wiki/Nadir"><em>Nadir</em></a> regions of a panorama so that they can be easily modified
(for instance to reconstruct missing parts), before being reprojected back into the input panorama.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/zenith1.jpg" alt='zenith1' width='640' height='318'>
<figcaption>
Overview of the “Deformations / Equirectangular to nadir-zenith” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p><a href="https://plus.google.com/u/0/b/117441237982283011318/115320419935722486008/posts"><em>Morgan Hardwood</em></a> has written a quite detailed tutorial,
<a href="https://discuss.pixls.us/t/panography-patching-the-zenith-and-nadir/585">here on pixls.us</a>,
about the reconstruction of missing parts in the Zenith/Nadir of an equirectangular panorama. Check it out!</p>
<h2 id="other-various-improvements"><a href="#other-various-improvements" class="header-link-alt">Other various improvements</a></h2>
<p>Finally, here are other highlights about the <em>G’MIC</em> project:</p>
<ul>
<li>Filter <strong>Rendering / Kitaoka Spin Illusion</strong> is another <em>Mathmap</em> filter converted to <em>G’MIC</em> by <em>Martin “Souphead”</em>. It generates a certain kind of
<a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html">optical illusion</a> as shown below (close your eyes if you are epileptic!)</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/spin2.jpg" alt='spin2' width='422' height='422'>
<figcaption>
Result of the “Kitaoka Spin Illusion” filter.
</figcaption>
</figure>

<ul>
<li>Filter <strong>Colors / Color blindness</strong> transforms the colors of an image to simulate different types of <a href="https://en.wikipedia.org/wiki/Color_blindness">color blindness</a>.
This can be very helpful to check the accessibility of a web site or a graphical document for colorblind people.
The color transformations used here are the same as defined on <a href="http://www.color-blindness.com/coblis-color-blindness-simulator/"><em>Coblis</em></a>,
a website that proposes to apply this kind of simulation online. The <em>G’MIC</em> filter gives strictly identical results, but eases
the batch processing of several images at once.</li>
</ul>
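<p>As an illustration of the principle, here is a Python sketch of such a simulation with NumPy, applying a protanopia matrix from Machado et al. (2009) in linear RGB (note this is just one published model, not necessarily the exact transform used by Coblis or G’MIC):</p>

```python
import numpy as np

# Protanopia simulation matrix from Machado et al. (2009), applied in
# linear RGB. Illustrative only: Coblis and G'MIC may use a different model.
PROTANOPIA = np.array([
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

def simulate_protanopia(rgb):
    """Apply the colour-blindness matrix to an (..., 3) linear-RGB array."""
    out = rgb.astype(float) @ PROTANOPIA.T
    return np.clip(out, 0.0, 1.0)
```

<p>Batch processing a folder then reduces to loading each image as a linear-RGB array and mapping it through the matrix, which is exactly the convenience the G’MIC filter brings.</p>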
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_cb.jpg" alt='gmic_cb' width='640' height='397'>
<figcaption>
Overview of the colorblindness simulation filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li>For a few years now, <em>G’MIC</em> has had its own parser of mathematical expressions, a really convenient module for performing complex calculations when applying image filters.
This core feature gets new functionality: the ability to manage variables that can be complex, vector or matrix-valued, but also the creation of
user-defined mathematical functions. For instance, the classical rendering of the <a href="https://en.wikipedia.org/wiki/Mandelbrot_set"><em>Mandelbrot</em> fractal set</a>
(done by estimating the divergence of a sequence of complex numbers) can be implemented like this, directly on the command line:<pre><code class="lang-sh">$ gmic 512,512,1,1,&quot;c = 2.4*[x/w,y/h] - [1.8,1.2]; z = [0,0]; for (iter = 0, cabs(z)&lt;=2 &amp;&amp; ++iter&lt;256, z = z**z + c); 6*iter&quot; -map 7,2
</code></pre>
</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_mand.jpg" alt='gmic_mand' width='512' height='512'>
<figcaption>
Using the G’MIC math evaluator to implement the rendering of the Mandelbrot set, directly from the command line!
</figcaption>
</figure>
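<p>For readers more at ease with Python than with the <em>G’MIC</em> expression syntax, the escape-time loop in the one-liner above can be transcribed as a plain, illustration-only Python function (this is a hand translation, not code from <em>G’MIC</em> itself):</p>

```python
# Escape-time rendering of the Mandelbrot set, mirroring the G'MIC
# one-liner: each pixel (x, y) of a w x h image maps to a point
# c = 2.4*(x/w, y/h) - (1.8, 1.2) in the complex plane, and we count
# iterations of z -> z*z + c until |z| exceeds 2 (or we give up).
def mandelbrot_iterations(x, y, w, h, max_iter=256):
    c = complex(2.4 * x / w - 1.8, 2.4 * y / h - 1.2)
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2:
            return i
        z = z * z + c
    return max_iter

# The pixel mapped to c = 0 (inside the set) never escapes:
print(mandelbrot_iterations(75, 50, 100, 100))  # -> 256
```

Mapping the iteration count to a color palette (as the `-map 7,2` step does in the one-liner) then gives the familiar fractal image.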

<p>This clearly extends the math evaluator’s abilities, as you are no longer limited to scalar variables. You can now create complex filters which are able to
solve linear systems or compute eigenvalues/eigenvectors, and this for each pixel of an input image.
It’s a bit like having a micro-(micro!)-<a href="https://www.gnu.org/software/octave/"><em>Octave</em></a> inside <em>G’MIC</em>.
Note that the <em>Brushify</em> filter described earlier uses these new features extensively.
It’s also interesting to know that the <em>G’MIC</em> math expression evaluator has its own <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation"><em>JIT</em> compiler</a>
to achieve fast evaluation of expressions applied to thousands of image values simultaneously.</p>
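<p>To give an idea of the kind of tiny per-pixel computation this makes possible, here is an illustration-only Python sketch of a 2×2 linear solve via Cramer’s rule; inside <em>G’MIC</em> the equivalent would be written directly in a filter’s math expression rather than in Python:</p>

```python
# Solve the 2x2 linear system [[a, b], [c, d]] @ (x, y) = (e, f)
# by Cramer's rule -- the sort of small linear-algebra step the
# G'MIC math evaluator can now run independently for every pixel.
def solve2x2(a, b, c, d, e, f):
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular system")
    return ((e * d - b * f) / det, (a * f - e * c) / det)

print(solve2x2(2, 1, 1, 3, 5, 10))  # -> (1.0, 3.0)
```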
<ul>
<li>Another great contribution has been proposed by <a href="https://plus.google.com/+TobiasFleischer/posts"><em>Tobias Fleischer</em></a>, with the creation of a new <em>C</em>
<a href="https://en.wikipedia.org/wiki/Application_programming_interface"><em>API</em></a> to invoke the functions of the <a href="http://gmic.eu/libgmic.shtml"><em>libgmic</em></a> library
(which is the library containing all the <em>G’MIC</em> features, initially available through a <em>C++</em> <em>API</em> only).
As the <em>C</em> <a href="https://fr.wikipedia.org/wiki/Application_binary_interface"><em>ABI</em></a> is standardized (unlike <em>C++</em>),
this basically means <em>G’MIC</em> can be interfaced more easily with languages other than <em>C++</em>.
In the future, we can imagine the development of <em>G’MIC</em> <em>APIs</em> for languages such as <em>Python</em>, for instance.
<em>Tobias</em> is currently using this new <em>C</em> <em>API</em> to develop <em>G’MIC</em>-based plug-ins compatible with the <a href="https://en.wikipedia.org/wiki/OpenFX_%28API%29"><em>OpenFX</em></a> standard.
Those plug-ins should be usable interchangeably in video editing software such as <a href="https://fr.wikipedia.org/wiki/Adobe_After_Effects">After Effects</a>, <a href="https://fr.wikipedia.org/wiki/Sony_Vegas_Pro">Sony Vegas Pro</a>
or <a href="http://www.natron.fr/">Natron</a>. This is still an ongoing work, though.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_natron.jpg" alt='gmic_natron' width='640' height='391'>
<figcaption>
Overview of some G’MIC-based OpenFX plug-ins, running under Natron.
</figcaption>
</figure>

<ul>
<li>Another contributor, <a href="https://github.com/Starfall-Robles"><em>Robin “Starfall Robles”</em></a>, has started developing a <a href="https://github.com/Starfall-Robles/Blender-2-G-MIC">Python script</a>
to provide some of the <em>G’MIC</em> filters directly in the <a href="http://www.blendernation.com/2016/04/27/creative-imagery-blender-2-gmic/"><em>Blender</em> video sequence editor</a>.
This work is still at an early stage, but you can already apply different <em>G’MIC</em> effects on image sequences (see <a href="https://www.youtube.com/watch?v=TSzoEXAV1zs">this video</a> for a demonstration).</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_blender2.jpg" alt='gmic_blender2' width='640' height='325'>
<figcaption>
Overview of a dedicated G’MIC script running within the Blender VSE.
</figcaption>
</figure>

<ul>
<li><em>G’MIC</em> filters can also be found in the open-source nonlinear video editor <a href="https://github.com/jliljebl/flowblade"><em>Flowblade</em></a>, thanks to the hard work of
<a href="https://plus.google.com/u/0/b/117441237982283011318/102624418925189345577/posts"><em>Janne Liljeblad</em></a> (<em>Flowblade</em> project leader).
Here again, the goal is to allow the application of <em>G’MIC</em> effects and filters directly on image sequences, mainly for artistic purposes
(as shown in <a href="https://vimeo.com/157364651">this video</a> or <a href="https://vimeo.com/164331676">this one</a>).</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_flowblade.jpg" alt='gmic_flowblade' width='640' height='530'>
<figcaption>
Overview of a G’MIC filter applied under Flowblade, a nonlinear video editor.
</figcaption>
</figure>



<h2 id="what-s-next-"><a href="#what-s-next-" class="header-link-alt">What’s next?</a></h2>
<p>As you can see, the <em>G’MIC</em> project is doing well, with active development and cool new features added month after month.
You can find and use interfaces to <em>G’MIC</em> in more and more open-source software, such as
<a href="http://www.gimp.org"><em>GIMP</em></a>,
<a href="https://krita.org/"><em>Krita</em></a>,
<a href="https://www.blender.org/"><em>Blender</em></a>,
<a href="https://aferrero2707.github.io/PhotoFlow/"><em>Photoflow</em></a>,
<a href="https://github.com/jliljebl/flowblade"><em>Flowblade</em></a>,
<a href="http://veejayhq.net/">Veejay</a>,
<a href="http://ekd.tuxfamily.org/"><em>EKD</em></a> and
<a href="http://natron.fr/"><em>Natron</em></a> in the near future (at least we hope so!).</p>
<p>At the same time, we can see more and more external resources available for <em>G’MIC</em>: tutorials, blog articles
(<a href="https://discuss.pixls.us/t/fourier-transform-for-fixing-regular-pattern-noise/586">here</a>,
<a href="https://paulsphotopalace.wordpress.com/the-color-mixers-3/">here</a>,
<a href="http://lapizybits.blogspot.com/2015/12/efecto-esbozo.html">here</a>,…),
or demonstration videos
(<a href="https://www.youtube.com/watch?v=YjqMT7Mn2ac">here</a>,
<a href="https://www.youtube.com/watch?v=VPG1dkPlyvo">here</a>,
<a href="https://www.youtube.com/watch?v=N3KqWTmkgB8">here</a>,
<a href="https://www.youtube.com/watch?v=w6Sr1nO5gFo">here</a>,…).
This shows the project becoming more and more useful to users of open-source software for graphics and photography.</p>
<p>The development of version <em>1.7.2</em> has already hit the ground running, so stay tuned and visit the official <em>G’MIC</em> <a href="https://discuss.pixls.us/c/software/gmic">forum on pixls.us</a>
to get more info about the project development and to get answers to your questions.
Meanwhile, feel the power of <em>free software</em> for image processing!</p>
<h2 id="links-"><a href="#links-" class="header-link-alt">Links:</a></h2>
<ul>
<li><a href="http://gmic.eu">G’MIC home page</a></li>
<li><a href="http://gmic.eu/gimp.shtml">G’MIC plug-in for GIMP</a></li>
<li><a href="http://gmic.eu/tutorial/basics.shtml">Introduction to the CLI interface of G’MIC</a></li>
<li><a href="http://gmic.eu/reference.shtml">Technical reference documentation</a></li>
<li><a href="https://linuxfr.org/news/g-mic-1-7-1-quand-les-fleurs-bourgeonnent-les-filtres-d-images-foisonnent">G’MIC 1.7.1 release article on linuxfr.org</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Post Libre Graphics Meeting]]></title>
            <link>https://pixls.us/blog/2016/04/post-libre-graphics-meeting/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/post-libre-graphics-meeting/</guid>
            <pubDate>Fri, 29 Apr 2016 22:12:46 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/Mairi-Finsbury.jpg" /><br/>
                <h1>Post Libre Graphics Meeting</h1> 
                <h2>What a trip!</h2>  
                <p>What a blast!</p>
<p>This trip report is long overdue, but I wanted to process some of my images to share with everyone before I posted.</p>
<p>It had been a couple of years since I’d had an opportunity to travel and meet with the <a href="https://www.gimp.org">GIMP</a> team again (<a href="https://www.flickr.com/photos/patdavid/albums/72157643712169045">Leipzig</a> was awesome), so I was really looking forward to this trip.  I missed the opportunity to head up to the great white North for last year’s meeting in Toronto.</p>
<!-- more -->
<h2 id="london-calling"><a href="#london-calling" class="header-link-alt">London Calling</a></h2>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/to_LGM.jpg" alt='Passport to LGM'>
<figcaption>
Passport? Check! Magazine? Check! Ready to head to London!
</figcaption>
</figure>

<p>I was going to attend the pre-LGM photowalk again this year so this time I decided to pack some bigger off-camera lighting modifiers for everyone to play with.  Here’s a neat travelling photographer pro-tip: most airlines will let you carry on an umbrella as a “freebie” item.  They just don’t specify that it <em>has</em> to be an umbrella to keep the rain off you.  So I carried on my big Photek Softlighter II (luckily my light stands fit in my checked luggage).  Just be sure not to leave it behind somewhere (which I was paranoid about for most of my trip).  Luckily I was only changing planes in Atlanta.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/ATL.jpg" alt='Atlanta Airport International Terminal'>
<figcaption>
The new ‘futuristic’ looking Atlanta airport international terminal.
</figcaption>
</figure>

<p>A couple of (<em>bad</em>) movies and hours later I was in Heathrow.  I figured it wouldn’t be much trouble getting through border control.  </p>
<p>I may have been a little optimistic about that.  </p>
<p>The <strong>Border Force</strong> agent was quite nice and <em>super</em> inquisitive.  So much so that I actually began to worry at some point (I think I must have spent almost 20 minutes talking to her) that she might not let me in!</p>
<p>She kept asking what I was coming to London for and I kept trying to explain to her what a “<em>Libre Graphics Meeting</em>“ was.  This was almost a tragic comedy.  The idea of Free Software did not seem to compute to her and I was sorry I had even made the passing mention.  Her attention then turned to my umbrella and photography.  What was I there to photograph?  Who?  Why?  (Come to think of it, I should start asking myself those same questions more often… It was an existential visit to the border control.)</p>
<p>In the end I think she got bored with my answers and figured that I was far too awkward to be a threat to anything.  Which pretty much sums up my entire college dating life.</p>
<h2 id="photowalk"><a href="#photowalk" class="header-link-alt">Photowalk</a></h2>
<p>In what I hope will become a tradition, we had our photowalk the day before LGM officially kicked off, and we could not have asked for a better day of weather!  It was partly cloudy and just gorgeous (pretty much the complete opposite of what I was expecting from London weather). </p>
<h3 id="furtherfield-commons"><a href="#furtherfield-commons" class="header-link-alt">Furtherfield Commons</a></h3>
<p><a href='http://www.furtherfield.org/'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/furtherfield_header.png" alt='Furtherfield Logo' style='background-color: #D3DBD5;'>
</a></p>
<p>I want to thank <a href="http://ruthcatlow.net">Ruth Catlow</a> (<a href="http://ruthcatlow.net/">http://ruthcatlow.net/</a>) for allowing us to use the awesome space at <a href="http://www.furtherfield.org">Furtherfield Commons</a> in Finsbury Park as a base for our photowalk!  They were amazingly accommodating and we had a wonderful time chatting in general about art and what they were up to at the gallery and space.</p>
<p>They have some really neat things going on at the gallery and space so be sure to check them out if you can!</p>
<h3 id="going-for-a-walk-with-friends"><a href="#going-for-a-walk-with-friends" class="header-link-alt">Going for a Walk with Friends</a></h3>
<p>This is one of my favorite things about being able to attend LGM.  I get to take a stroll and talk about photography with friends that I only usually get to interact with through an IRC window. I also feel like I can finally contribute something back to these awesome people that provide software I use every day.</p>
<figure >
<a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/" title="IMGP6089"><img src="https://farm2.staticflickr.com/1443/25858162683_47061b2074_z.jpg" width="640" height="426" alt="IMGP6089"></a>
<figcaption>
Mairi between Simon and myself (I’m holding a reflector for him).<br>
Photo by <a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/">Michael Schumacher</a> <span class='cc'><a href="https://www.flickr.com/photos/103724284@N02/26526017851">cbna</a></span>
</figcaption>
</figure>

<p>We meandered through the park and chatted a bit about various things.  Simon had brought along his external flash and wanted to play with off-camera lighting.  So we convinced Liam to stand in front of a tree for us and Simon ended up taking one of my favorite images from the entire trip.  This was Liam standing in front of the tree under the shade with me holding the flash slightly above him and to the camera right.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/liam_by_nomis-500.jpg" alt='Liam by nomis'>
<figcaption>
Liam by Simon
</figcaption>
</figure>

<p>We even managed to run into Barrie Minney while on our way back to the Commons building.  Aryeom and I started talking a little bit while walking when we crossed paths with some locals hanging out in the park.  One man in particular was quite outgoing and let Aryeom take his photo, leading to another fun image!</p>
<p>Upon returning to the Commons building we experimented with some of the pretty window light coming into the building along with some black panels and a model (Mairi).  This was quite fun as we were experimenting with various setups for the black panels and speedlights.  Everyone had a chance to try some shots out and to direct Mairi (who was <em>super</em> patient and accommodating while we played).</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/26059429014/in/dateposted-public/" title="Mairi Natural Light"><img src="https://farm2.staticflickr.com/1456/26059429014_c00b1b6d63_c.jpg" width="598" height="800" alt="Mairi Natural Light"></a>
<figcaption>
I was having so much fun talking and trying things out with everyone that I didn’t even take that many photos of my own!  This is one of my only images of Mairi inside the Commons.<br>
<i>Mairi Natural Light</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>Towards the end of our day I decided to get my big Softlighter out and try a few things in the lane outside the Commons building.  Luckily, Michael Schumacher grabbed an image of us while we were testing some shots with Mairi outside.</p>
<figure>
<a data-flickr-embed="true"  href="https://www.flickr.com/photos/schumaml/26395969771/in/dateposted/" title="IMGP6108"><img src="https://farm2.staticflickr.com/1612/26395969771_b4a404b072_z.jpg" width="640" height="426" alt="IMGP6108"></a>
<figcaption>
A nice behind-the-scenes image from schumaml of the lighting setup used below.<br>
Yes, that’s <a href='http://www.darktable.org'>darktable</a> developer hanatos bracing the umbrella from the wind for me!<br>
<i>Photo by <a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/">Michael Schumacher</a> </i><span class='cc'><a href="https://www.flickr.com/photos/103724284@N02/26526017851">cbna</a></span>
</figcaption>
</figure>

<p>I loved the lane receding in the background and thought it might make for some fun images of Mairi.  I had two YN-560 flashes in the Softlighter, both firing at around &frac34; power.  I had to balance the ambient sky with the Softlighter, so I needed the extra power of a second flash (it also helps to keep the cycle times down).</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/26581376895/in/dateposted-public/" title="Mairi Finsbury"><img src="https://farm2.staticflickr.com/1565/26581376895_a716383b7e_z.jpg" width="640" height="360" alt="Mairi Finsbury"></a>
<figcaption>
Mairi waiting patiently while we set things up.<br>
<i>Mairi Finsbury</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span><br>
50mm <i style='font-family:serif;'>f</i>/8.0 <sup style='margin-right:-0.1rem;'>1</sup>&frasl;<sub style='margin-left:-0.1rem;'>200</sub> ISO200
</figcaption>
</figure>

<figure>
<a href="https://www.flickr.com/photos/patdavid/26365329850/in/dateposted-public/" title="Mairi Finsbury Park (In the Lane)"><img src="https://farm2.staticflickr.com/1443/26365329850_3b9e044e57_z.jpg" width="640" height="640" alt="Mairi Finsbury Park (In the Lane)"></a>
<figcaption>
<i>Mairi Finsbury Park (In the Lane)</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>The day was awesome and I really enjoyed being able to just hang out with everyone and take some neat photos.  The evening at the pub was pretty great also (I got to hang out with Barrie and his friend and have a couple of pints - <em>thanks again Barrie</em>!).</p>
<h2 id="lgm"><a href="#lgm" class="header-link-alt">LGM</a></h2>
<p>It never fails to amaze me how every year the LGM organizers manage to put together such a great meeting for everyone.  The venue at the University of Westminster was great, and the people were just fantastic.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/UoW.jpg" alt='University of Westminster'>
<figcaption>
View of the lobby and meeting rooms (on the second floor).
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/LGM_Auditorium.jpg" alt='LGM Auditorium'>
<figcaption>
Andrea Ferrero (<a href="https://discuss.pixls.us/users/carmelo_drraw/activity">@Carmelo_DrRaw</a>) presenting <a href='http://aferrero2707.github.io/PhotoFlow/' title='PhotoFlow website'>PhotoFlow</a> in the auditorium!
</figcaption>
</figure>


<p>The opening “<em>State of the Libre Graphics</em>“ presentation was done by our (the GIMP team’s) very own João Bueno, who did a fantastic job! João will also be the local organizer for the 2017 LGM in Rio.</p>
<p>Thanks to contributions from community members <a href="https://www.flickr.com/photos/andabata">Kees Guequierre</a>, <a href="https://29a.ch/">Jonas Wagner</a>, and <a href="https://www.flickr.com/photos/philipphaegi">Philipp Haegi</a> I had some great images to use for the PIXLS.US community slides for the “<em>State of the Libre Graphics</em>“.  If anyone is curious, here is what I submitted:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-0.min.png" alt='PIXLS State of Libre Graphics 0'>
<figcaption>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-1.min.png" alt='PIXLS State of Libre Graphics 0'>
<figcaption>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-2.min.png" alt='PIXLS State of Libre Graphics 0'>
<figcaption>
</figcaption>
</figure>

<p>These slides can be found on our <a href="https://github.com/pixlsus/Presentations">Github PIXLS.US Presentations</a> page (along with all of our other presentations that relate to PIXLS.US and promoting the community).  </p>
<p>Speaking of presentations…</p>
<h3 id="presentation"><a href="#presentation" class="header-link-alt">Presentation</a></h3>
<p>I was given some time to talk about and present our community to everyone at the meeting. (See embedded slides below):</p>
<figure>
<a data-flickr-embed="true"  href="https://www.flickr.com/photos/patdavid/albums/72157668276522285" title="LGM2016 PIXLS.US Presentation"><img src="https://farm8.staticflickr.com/7116/26864395042_62177a54de_z.jpg" width="640" height="480" alt="LGM2016 PIXLS.US Presentation"></a><script async src="https://pixls.us//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
</figure>

<p>I started by looking at my primary motivation for starting the site, and at the state of free-software photography at that time (or the lack thereof).  Mainly, the majority of high-quality online resources for photographers (ones focused on high-quality results) were aimed at users of proprietary software.  Worse still, in some cases these websites locked their best tutorials and learning content behind paywalls and subscriptions.  I finished by looking at what was done to build this site and forum as a community where everyone can learn and share with each other freely.</p>
<p>I think the presentation went well and people seemed to be interested in what we were doing!  Nate Willis even published an article about the presentation at <a href="http://lwn.net">LWN.net</a>, <a href="http://lwn.net/Articles/684279/"><em>“Refactoring the open-source photography community”</em></a>:</p>
<figure>
<a href='http://lwn.net/Articles/684279/' title='Refactoring the open-source photography community on LWN.net'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/04-lgm-david-sm.jpg" alt='Pat David presenting on PIXLS.US at LGM 2016'>
</a>
<figcaption>
A photo of me I <i>don’t</i> hate! :)
</figcaption>
</figure>


<h3 id="exhibition"><a href="#exhibition" class="header-link-alt">Exhibition</a></h3>
<p>A nice change this year was the inclusion of an exhibition space to display works by LGM members and artists.  We even got an opportunity to hang a couple of prints (for some reason they really wanted my quad-print of pippin).  I was particularly happy that we were able to print and display the <a href="https://www.flickr.com/photos/andabata/20025243436"><em>Green Tiger Beetle</em></a> by community member <a href="https://www.flickr.com/photos/andabata">Kees Guequierre</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/hanatos-houz-lgm.jpg" alt='hanatos and houz at LGM'>
<figcaption>
Hanatos and houz inspecting the prints at the exhibition.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/lgm-exhibition.jpg" alt='View of the LGM Exhibition'>
<figcaption>
View of the Exhibition.  Well attended!
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/pippin-meta.jpg" alt='Pippin x5'>
<figcaption>
pippin x5
</figcaption>
</figure>

<h3 id="portraits"><a href="#portraits" class="header-link-alt">Portraits</a></h3>
<p>In Leipzig I thought it would be nice to offer portraits/headshots of folks that attended the meeting.  I think it’s a great opportunity to get a (hopefully) nice photograph that people can use in social media, avatars, websites, etc.  Here’s a sample of portraits from LGM2014 of the GIMP team that sat for me:</p>
<p><a data-flickr-embed="true" data-footer="true"  href="https://www.flickr.com/photos/patdavid/albums/72157644439419931" title="GIMPers"><img src="https://farm3.staticflickr.com/2900/14075907755_5224004a7c_z.jpg" width="640" height="640" alt="GIMPers"></a><script async src="https://pixls.us//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script></p>
<p>In 2014 I was lucky that houz had brought along an umbrella and stand to use, so this time I figured it was only fair that I bring along some gear myself.  I had the Softlighter setup on the last couple of days for anyone that was interested in sitting for us.  I say us because Marek Kubica (<a href="https://discuss.pixls.us/users/leonidas/activity">@Leonidas</a>) from the community was right there to shoot with me along with the very famous <a href="https://discuss.pixls.us/users/ofnuts/activity">@Ofnuts</a> (well - famous to me - I’ve lost count of the neat things I’ve picked up from his advice)!  Marek took quite a few portraits and managed the subjects very well - he was conversational, engaged, and managed to get some great personality from them.</p>
<figure>
<a  href="https://www.flickr.com/photos/103724284@N02/26526026171/in/pool-libregfx/" title="Still don&#x27;t know your name"><img src="https://farm2.staticflickr.com/1515/26526026171_fbf23edb01_z.jpg" width="640" height="396" alt="Still don&#x27;t know your name"></a>
<figcaption>
A sample portrait by <a href="https://www.flickr.com/photos/103724284@N02/">Marek Kubica</a> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<figure>
<a href="https://www.flickr.com/photos/103724284@N02/26526017851/in/pool-libregfx/" title="Better with glasses"><img src="https://farm2.staticflickr.com/1562/26526017851_dc57d13f50_z.jpg" width="640" height="396" alt="Better with glasses"></a>
<figcaption>
<a href="https://www.flickr.com/photos/103724284@N02/26526017851">Better with glasses</a> by <a href="https://www.flickr.com/photos/103724284@N02/">Marek Kubica</a> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>A couple of samples from the images that I took are here as well: the local organizer, Lara, with students from the University!  I simply can’t thank them enough for their efforts and generosity in making us feel so welcome.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170268-rt.jpg" alt='Lara University of Westminster'>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170276-rt.jpg" alt='Lara University of Westminster'>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170267-rt.jpg" alt='Lara University of Westminster'>
</figure>

<p>I’m still working through the portraits I took, but I’ll have them uploaded to <a href="https://flickr.com/photos/patdavid">my Flickr</a> soon to share with everyone!</p>
<h2 id="gimpers"><a href="#gimpers" class="header-link-alt">GIMPers</a></h2>
<p>One of the best parts of attendance is getting to spend some time with the rest of the GIMP crew.  Here’s an action shot during the GIMP meeting over lunch with a neat, glitchy schumaml:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/GIMP-pano.jpg" alt='GIMP Meeting Panorama'>
<figcaption>
There’s even some <a href="https://www.darktable.org">darktable</a> nerds thrown in there!
</figcaption>
</figure>

<p>It was great to see everyone at the flat on our last evening there as well…</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/LGM-flat.jpg" alt='GIMP and darktable at LGM'>
<figcaption>
Everyone spending the evening together!  Mitch is missing from his seat in this shot (back there by pippin).
</figcaption>
</figure>


<h2 id="wrap-up"><a href="#wrap-up" class="header-link-alt">Wrap up</a></h2>
<p>Overall this was another incredible meeting bringing together great folks who are building and supporting Free Software and Libre Graphics.  Just my kind of crowd!</p>
<p>I even got a chance to speak a bit with the wonderful <a href="https://github.com/tusuzu">Susan Spencer</a> of the <a href="http://valentinaproject.bitbucket.org/">Valentina</a> project, and we roughed out some thoughts about getting together at some point.  It turns out she lives in the same state as me (Alabama)!  This is simply too great not to take advantage of - Free Software Fashion + Photography?!  That will have to be a fun story (and photos) for another day…</p>
<p>Keep watching the blog for some more images from the trip - up next are the portraits of everyone and some more shots of the venue and exhibition!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Pre-LGM Photowalk]]></title>
            <link>https://pixls.us/blog/2016/04/pre-lgm-photowalk/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/pre-lgm-photowalk/</guid>
            <pubDate>Fri, 08 Apr 2016 21:41:36 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/at_thomaskirche.jpg" /><br/>
                <h1>Pre-LGM Photowalk</h1> 
                <h2>Time to take some photos!</h2>  
                <p>It’s that time of year again!  The weather is turning mild, the days are smelling fresh, and a bunch of photography nerds are all going to get together in a new country to roam around and (<em>possibly</em>) annoy locals by taking a <em>ton</em> of photographs! It’s the Pre-<a href="http://www.libregraphicsmeeting.org/2016/"><em>Libre Graphics Meeting</em></a> photowalk of 2016!</p>
<p>Come join us the day before LGM kicks off to have a stroll through a lovely park and get a chance to shoot some photos between making new friends and having a pint. </p>
<!-- more -->
<p>Thanks to the wonderful work by the local LGM organizing team, we are able to invite everyone out to the photowalk on <strong>Thursday, April 14<sup>th</sup></strong> the day before LGM kicks off.</p>
<p><a href='http://www.furtherfield.org/gallery/about'>
<img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/furtherfield_header.png" alt='Furtherfield Logo' style='background-color: #D3DBD5;'>
</a></p>
<p>They were able to get us in touch with the kind folks at <a href="http://www.furtherfield.org/gallery/visit">Furtherfield Gallery &amp; Commons</a> in Finsbury Park.  They’ve graciously offered us the use of their facilities at the Furtherfield Commons as a base to start from.  So we will meet at the Commons building at <strong>10:00 on Thursday morning</strong>.</p>
<blockquote>
<p><strong>Pre-LGM Photowalk</strong><br>10:00 (AM), Thursday, April 14<sup>th</sup><br>Furtherfield Commons<br>Finsbury Gate - Finsbury Park<br>Finsbury Park, London, N4 2NQ</p>
</blockquote>
<div class='fluid-vid'>
<figure class='big-vid'>
<iframe width="576" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="https://www.openstreetmap.org/export/embed.html?bbox=-0.10637909173965454%2C51.56489127967849%2C-0.1036781072616577%2C51.566525239509325&amp;layer=mapnik&amp;marker=51.56570826693375%2C-0.10502859950065613" style="border: 1px solid black"></iframe>
<figcaption style='margin-top: 0.5rem;'>
<a href="http://www.openstreetmap.org/?mlat=51.56571&amp;mlon=-0.10503#map=19/51.56571/-0.10503">View Larger Map</a>
</figcaption>
</figure>
</div>

<p>An overview of the photowalk venue relative to the LGM venue at the University of Westminster, Harrow:</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/d/embed?mid=zYKepeQNftPo.koxL6CFw1nPk" width="640" height="480"></iframe>
</div>

<p>If you would like to join us but may not make it to the Commons by 10:00, email me and let me know.  I’ll try my best to make arrangements to meet up so you can join us a little later.  I can’t imagine we’d be very far away (likely somewhere relatively nearby in the park).</p>
<p>We’ll plan on meandering through the park with frequent stops to shoot images that strike our fancy.  I will personally be bringing along my off-camera lighting equipment and a model (Mairi) to pose for us during the day, in case anyone wants to play with, or learn a little about, that type of photography.</p>
<p>There is no set time for finishing up.  I figured we would play it by ear through lunch and to possibly all finish up at a nice pub together. (Taking advantage of the golden hour light at the end of the day hopefully).</p>
<p>In the spirit of saying “Thank you!” and sharing, I have also offered the Furtherfield folks our services for headshots and architectural/environmental shots of the Commons and Gallery spaces.  For sure I will be taking these images for them but if anyone else wanted to pitch in and try, help, or assist the effort would be very welcome!</p>
<figure>
<img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/dot-leipzig-market.jpg" alt='Dot in the Leipzig Market, 2014'>
<figcaption>
Dot in the Leipzig Market from the 2014 Pre-LGM photowalk.
</figcaption>
</figure>

<p>Speaking of which, if you plan on attending and would like to explore some particular aspect of photography please feel free to let me know.  I’ll do my best to match folks up based on interest.  I sincerely hope this will be a fun opportunity to learn some neat new things, make some new friends, and to maybe grab some great images at the same time!</p>
<p>If there are any questions, please don’t hesitate to reach out to me!<br><a href="mailto:patdavid@gmail.com">patdavid@gmail.com</a><br>patdavid on irc://irc.gimp.org/#gimp</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Happy Birthday DISCUSS.PIXLS.US]]></title>
            <link>https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/</guid>
            <pubDate>Wed, 06 Apr 2016 16:01:30 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/birthday-cake_1920.jpg" /><br/>
                <h1>Happy Birthday DISCUSS.PIXLS.US</h1> 
                <h2>Where did the time go?!</h2>  
<p>For some reason I was checking my account on the forums earlier today and noticed that it was created in April, 2015.  On further inspection, it looks like my account, and @darix’s, were created on April 2<sup>nd</sup> 2015.</p>
<p>(Not to be confused with the main site because apparently it took me about 8 months to get a forum stood up…)</p>
<p>Which means that the forums have been around for just over a year now?!</p>
<p>So, <strong>Happy Birthday</strong> <a href="https://discuss.pixls.us">discuss</a>!</p>
<!-- more -->
<p>We’re just over a year old with just under <em>500</em> users on the forum!</p>
<p>For fun, I looked for the oldest (public) post we had and it looks like it’s the “<a href="https://discuss.pixls.us/t/welcome-to-pixls-us-discussion/8?u=patdavid">Welcome to PIXLS.US Discussion</a>” thread.  In case anyone wanted to revisit a classic…</p>
<p><strong>THANK YOU</strong> so much to everyone who has made this an awesome place to be and to nerd out about photography, software, and more!  Since we started, we’ve migrated the official <a href="http://gmic.eu">G’MIC</a> forums here, as well as our friends at <a href="http://rawtherapee.com">RawTherapee</a>!
We’ve been introduced to some awesome projects like <a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a> as well as <a href="https://github.com/CarVac/filmulator-gui">Filmulator</a>.  And everyone has just been amazing, supportive, and fun to be around.</p>
<p>As I posted in the original <em>Welcome</em> thread…</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/StTqXEQ2l-Y?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Lighting Diagrams]]></title>
            <link>https://pixls.us/blog/2016/04/lighting-diagrams/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/lighting-diagrams/</guid>
            <pubDate>Mon, 04 Apr 2016 22:23:36 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/04/lighting-diagrams/lighting-lede.png" /><br/>
                <h1>Lighting Diagrams</h1> 
                <h2>Help Us Build Some Assets!</h2>  
                <p>Community member <a href="http://www.ericsbinaryworld.com/">Eric Mesa</a> asked on <a href="https://discuss.pixls.us/t/is-there-a-good-lighting-setup-template-for-gimp/1179/">the forums</a> the other day if there might be some Free resources for photographers that want to build a lighting diagram of their work.  These are the diagrams that show how a shot might be set up with the locations of lights, what types of modifiers might be used, and where the camera/photographer might be positioned with respect to the subject.  These diagrams usually also include lighting power details and notes to help the production.</p>
<p>It turns out there wasn’t really anything openly available and permissively licensed.  So we need to fix that…</p>
<!-- more -->
<p>These diagrams are particularly handy for planning a shoot conceptually or explaining what the lighting setup was to someone after the fact.  For instance, here’s a look at the lighting setup for <a href="https://www.flickr.com/photos/patdavid/14297966412">Sarah (Glance)</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/sarah-glance.jpg" alt='Sarah (Glance) by Pat David'>
<figcaption>
Sarah (Glance)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/sarah-glance.png" alt='Sarah (Glance) Lighting Diagram'>
<figcaption>
YN560 full power into a 60” Photek Softlighter, about 20” from subject.<br>
She was actually a bit further from the rear wall…
</figcaption>
</figure>

<p>There are a few different commercial or restrictive-licensed options for photographers to create a lighting diagram, but nothing truly <a href="http://www.gnu.org/philosophy/free-sw.en.html">Free</a>.</p>
<p>So thanks to the prodding by Eric, I thought it was something we should work on as a community!</p>
<p>I already had a couple of simple, basic shapes created in <a href="https://inkscape.org">Inkscape</a> for another tutorial so I figured I could at least get those files published for everyone to use.</p>
<p>I don’t have much to start with but that shouldn’t be a problem!  I already had a backdrop, person, camera, octabox (+grid), and a softbox (+grid):</p>
<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/lighting-assets.png" alt='Lighting Diagram Assets'>
</figure>

<h2 id="pixls-us-github-organization"><a href="#pixls-us-github-organization" class="header-link-alt">PIXLS.US Github Organization</a></h2>
<p>I already have a <a href="https://github.com/pixlsus">GitHub organization</a> set up just for PIXLS.US; you can find the lighting-diagram assets there:</p>
<p><a href="https://github.com/pixlsus/pixls-lighting-diagram">https://github.com/pixlsus/pixls-lighting-diagram</a></p>
<p>Feel free to join the organization!</p>
<p>Even better: join the organization and fork the repo to add your own additions and to help us flesh out the available diagram assets for all to use!
From the README.md on that repo, I compiled a list of things I thought might be helpful to create:</p>
<ul>
<li>Cameras<ul>
<li>DSLR</li>
<li>Mirrorless</li>
<li>MF</li>
</ul>
</li>
<li>Strobes<ul>
<li>Speedlight</li>
<li>Monoblock</li>
</ul>
</li>
<li>Lighting Modifiers<ul>
<li>Softbox (+ grid?)</li>
<li>Umbrella (+ grid?)</li>
<li>Octabox (+ grid?)</li>
<li>Brolly</li>
</ul>
</li>
<li>Reflectors</li>
<li>Flags</li>
<li>Barn Doors / Gobo</li>
<li>Light stands? (C-Stands?)</li>
<li>Environmental<ul>
<li>Chairs</li>
<li>Stools</li>
<li>Boxes</li>
<li>Backgrounds (+ stands)</li>
</ul>
</li>
<li>Models</li>
</ul>
<p>If you don’t want to create something from scratch, you could grab the files and tweak the existing assets to make them better in some way.</p>
<p>Hopefully we can fill out the list fairly quickly (as it’s a fairly limited subset of required shapes).  Even better would be if someone picked up the momentum to possibly create a nice lighting diagram application of some sort!</p>
<p>The files that are there now are all licensed <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons By-Attribution, Share-Alike 4.0</a>.</p>
<style>
li {
    margin-bottom: initial;
}
</style>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[PlayRaw (Again)]]></title>
            <link>https://pixls.us/blog/2016/03/playraw-again/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/03/playraw-again/</guid>
            <pubDate>Mon, 21 Mar 2016 22:00:45 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/03/playraw-again/mairi-troisieme-lede.jpg" /><br/>
                <h1>PlayRaw (Again)</h1> 
                <h2>The Resurrectioning</h2>  
                <p>On the old <a href="http://rawtherapee.com/">RawTherapee</a> forums they used to have a contest sharing a single raw file amongst the members to see how everyone would approach processing from the same starting point.  They called it <strong>PlayRaw</strong>.  This seemed to really bring out some great work from the community so I thought it might be fun to start doing something similar again here.</p>
<p>I took a (<em>relatively</em>) recent image of <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846" title="Mairi Album on Flickr">Mairi</a> and decided to see how it would be received (I’d say fairly well given the responses).  This was my result from the raw file that I called <a href="https://www.flickr.com/photos/patdavid/16259030889/in/album-72157632799856846/" title="Mairi Troisieme on Flickr"><em>Mairi Troisième</em></a>:</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi Troisieme.jpg" alt='Mairi Troisieme' width='640' height='800'>
</figure>

<p>I made the raw file available under a <a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons BY-SA-NC">Creative Commons, By-Attribution, Non-Commercial, Share-Alike license</a> so that anyone could freely download and process the file as they wanted to.</p>
<p>The only things I asked for were to see the results and possibly the processing steps through either an XMP or PP3 sidecar file (<a href="http://www.darktable.org/">darktable</a> and <a href="http://rawtherapee.com/">RawTherapee</a> respectively).</p>
<p>Here’s a montage of the results from everyone:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi-combined.jpg" width='960' height='1896'>
</figure>

<p>I loved being able to see what everyone’s approaches looked like.  It’s neat to get a feel for all the different visions out there among the users and there were some truly beautiful results!</p>
<p>If you haven’t given it a try yourself yet, head on over to the <a href="https://discuss.pixls.us/t/playraw-mairi-troisieme">[PlayRaw] Mairi Troisieme</a> thread to get the raw file and try it out yourself!  Just don’t forget to show us <em>your</em> results in the topic.</p>
<p>I’ll be soliciting options for a new image to kick off another round of processing again soon.</p>
<h2 id="speaking-of-mairi"><a href="#speaking-of-mairi" class="header-link-alt">Speaking of Mairi</a></h2>
<p>Don’t forget that we still have a <a href="https://pledgie.com/campaigns/30905">Pledgie Campaign</a> going on to help us offset the costs of getting everyone together at the <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">2016 Libre Graphics Meeting in London</a> this April!</p>
<p><a href='https://pledgie.com/campaigns/30905'><img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' ></a></p>
<p>Donations go to help cover the costs of bringing members of various projects together to meet, photograph, discuss, and hack at things.  Please consider donating, as every little bit helps us immensely!  If you can’t donate then please consider helping us to raise awareness of what we’re trying to do!  Either link the Pledgie campaign to others or let them know we’re here to help and share!</p>
<p>Even better is if you’re in the vicinity of London this April 15&ndash;18! Come out and join us as well as many other awesome Free Software projects all focused on the graphics community!  We (PIXLS) will be conducting photowalks and meet-ups the Thursday before LGM kicks off as well!</p>
<p>Oh, and I finally did convince Mairi to join us through the weekend to model for us as needed.  She’s super awesome and worth raising a glass to/with!  Even more reason to come out and join us!</p>
<figure>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi Hedcut.jpg" alt='Mairi Deux'>
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Shimming an Adapter to be Parallel]]></title>
            <link>https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/</guid>
            <pubDate>Fri, 11 Mar 2016 19:03:48 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/carvac-lede.jpg" /><br/>
                <h1>Shimming an Adapter to be Parallel</h1> 
                <h2>Achieving perfect infinity focus</h2>  
<p>Some of you may know I exclusively use Contax manual focus lenses on my Canon cameras. I have had one reliable adapter from the start that just happened to be perfect in every way: it’s perfectly parallel, it lets my lenses focus <em>exactly</em> to infinity, and none of my lenses hit the mirror on my 5D.</p>
<p>However, swapping adapters between cameras gets mighty tedious, so recently I have been trying a variety of different adapters for my cameras, spanning several quality tiers from the cheapest ($15) up to the most expensive ($70).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/39cc6bc295d7b8fb61f7f30bddb439236c3c07ba.jpg" alt='39cc6bc295d7b8fb61f7f30bddb439236c3c07ba.jpg'>
</figure>

<p>However, I wasn’t satisfied with any of them. To ensure that adapted lenses can focus to infinity even with manufacturing tolerances, they’re made thinner than necessary. This means that they focus <em>past</em> infinity, and with some lenses the mirror of my 5D would hit the back of the lens, requiring me to wiggle it to free the mirror after taking a photo.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/e2d3556dfa31bafeebe55be3503cd31d320ca418.jpg" alt='e2d3556dfa31bafeebe55be3503cd31d320ca418.jpg'>
</figure>

<p>I measured my fancier Fotodiox Pro adapter, and found that not only was it too thin, but it was unevenly thick! The top was 8 thousandths of an inch too thin, the bottom right was 2 thousandths of an inch too thin, and the bottom left was exactly the right thickness.</p>
<p>I decided I could do something about it.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/c8f2904056b5956c424217eac2e5ff8c071bcd35.jpg" alt='c8f2904056b5956c424217eac2e5ff8c071bcd35.jpg'>
</figure>

<p>I bought some shim stock from McMaster Carr, plastic and 2 thousandths of an inch thick, figuring I might be able to fold it to build up thickness if necessary. (Spoiler: it does fold.) It comes as a giant sheet five by twenty inches, but you’ll only need the tiniest amount of it.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/9e62a1fa5ec3df578b5068e04c06bf70826cea6c.jpg" alt='9e62a1fa5ec3df578b5068e04c06bf70826cea6c.jpg'>
</figure>

<p>Then I went about removing the screws that hold the two sides together.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/23fcb9581ed7ba5b4b1ab8dc8f6d6abbd1b1edd5.jpg" alt='23fcb9581ed7ba5b4b1ab8dc8f6d6abbd1b1edd5.jpg'>
</figure>

<p>The screws are incredibly small.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/3328b75d620272e42a07e6d923012e762f244736.jpg" alt='3328b75d620272e42a07e6d923012e762f244736.jpg'>
</figure>

<p>Here you can see that there are only three points on the ring that actually control the thickness; I point to one with the scissors. I had to be careful when measuring the thickness to only measure it between the screws, and that was challenging because the EF mount diameter is larger than the C/Y mount diameter, and there was only the slightest overlap between the outside of the C/Y registration surface and the inside of the EF mount.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/630d554c266458e194fa65c77c21d00b2426cfe7.jpg" alt='630d554c266458e194fa65c77c21d00b2426cfe7.jpg'>
</figure>

<p>Next I just cut a narrow strip out of this piece of shim stock using scissors, and put slits in it so it could fold more easily.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/bc177c29ec559927f3f1b8df373a53dea4d2270a.jpg" alt='bc177c29ec559927f3f1b8df373a53dea4d2270a.jpg'>
</figure>

<p>The right hand shim is folded in the shape of a W, and the left hand shim is only one layer.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/b7b673db42db682c8681e11363500892230d11f6.jpg" alt='b7b673db42db682c8681e11363500892230d11f6.jpg'>
</figure>

<p>The thicker shim went on the top, and the thinner shim went on the bottom-right.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/2c15b643aabc97d65f6fce6547d80e769391d70c.jpg" alt='2c15b643aabc97d65f6fce6547d80e769391d70c.jpg'>
</figure>

<p>Put the ring back on, and then…</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/201a553a455b9780fc4120632b4db51bb2bf3a6c.jpg" alt='201a553a455b9780fc4120632b4db51bb2bf3a6c.jpg'>
</figure>

<p>Reinstall the screws.</p>
<p>Test your lenses for infinity focus and, if applicable, mirror slap, and rejoice if they’re good!</p>
<hr>
<p>If you don’t have a perfect adapter as a reference for the proper thickness, you can first adjust the adapter to be perfectly even thickness all the way around, and then you can add thickness uniformly until your lenses just barely focus to infinity. It might be time consuming, but it’s very rewarding being able to trust the infinity stop on your lenses.</p>
<p>This method isn’t only applicable to the two-part SLR-&gt;SLR Fotodiox adapters; it should also work for SLR or rangefinder to mirrorless adapters as well.</p>
<p>I’ve seen it written that you can’t be sure whether or not your adapters are even thickness all the way around, but with this technique, you can <em>make</em> sure that your adapters are perfect.</p>
<hr>
<p><em>Carlo originally posted this as a thread on the forums but I thought it would be useful as a post.  He has graciously allowed us to re-publish it here. <strong>–Pat</strong></em></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[jpeg2RAW Guest Spot]]></title>
            <link>https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/</guid>
            <pubDate>Sat, 20 Feb 2016 19:54:54 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/andabata-tiger-beetle.jpg" /><br/>
                <h1>jpeg2RAW Guest Spot</h1> 
                <h2>An interview! LGM update! And Github?</h2>  
                <p><a href="http://www.jpeg2raw.com/your-jpeg2raw-host/">Mike Howard</a>, the host and creator of the <a href="http://www.jpeg2raw.com/">jpeg2RAW podcast</a> reached out to me last week to see if I might be able to come on the show to talk about Free Software Photography and what we’ve been up to here. 
One of the primary reasons for creating this site was to be able to raise awareness of the Free Software community to a wider audience.</p>
<p><em>So this is a great opportunity for us to expose ourselves!</em></p>
<!-- more -->
<h2 id="exposing-ourselves"><a href="#exposing-ourselves" class="header-link-alt">Exposing Ourselves</a></h2>
<p>The podcast airs <strong>live</strong> this Tuesday, February 23<sup>rd</sup> at 8PM Eastern (-0500). You can join us at the <a href="http://www.jpeg2raw.com/live/">jpeg2RAW live podcast page</a>!
Mike has the live feed available to watch on that page and also has a chat server set up so viewers can interact with us live during the broadcast.</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/SZ2jPqWXClQ" frameborder="0" allowfullscreen></iframe>
</div>

<p>If you are free on Tuesday night then come on by and join us! I’ll be happy to field any questions you want answered (and that Mike asks) and will do my best to not embarrass myself (or our community). If you would like to make sure I address something in particular (or just don’t forget something), I also have a <a href="https://discuss.pixls.us/t/interview-for-jpeg2raw-podcast/871/1">thread on discuss</a> where you can make sure I know it.</p>
<p>I’m also looking for community members to submit some photos to help highlight our work and what’s possible with Free Software. Feel free to link them in the <a href="https://discuss.pixls.us/t/interview-for-jpeg2raw-podcast/871/1">same thread as above</a>.  I’ve already convinced <a href="https://kees.nl/">andabata</a> to point us to some of his great macro shots (like that awesome lede image) and I’ll be submitting a few of my own images as well.  If you have some works that you’d like to share please let me know!</p>
<h3 id="in-case-you-miss-it"><a href="#in-case-you-miss-it" class="header-link-alt">In Case You Miss It</a></h3>
<p>Mike has all of his prior podcasts archived on <a href="http://www.jpeg2raw.com/podcasts/">his <em>Podcasts</em> page</a>. So if you miss the live show it looks like you’ll be able to catch up later at your convenience.</p>
<h2 id="lgm-update"><a href="#lgm-update" class="header-link-alt">LGM Update</a></h2>
<p>As <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">mentioned previously</a> we are heading to London for Libre Graphics Meeting 2016! We’ve got a flat rented for a great crew to be able to stay together and we’re on track for a <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/#pixls-meet-up">PIXLS meet up</a> before LGM!</p>
<p>Speaking of people, I’m looking forward to being able to spend some time with some great folks again this year!  We’ve got Tobias, Johannes, and Pascal making it out (I’m not sure that Simon, top below, will be making it out) from <a href="http://www.darktable.org">darktable</a>, DrSlony and qogniw from <a href="http://www.rawtherapee.com">RawTherapee</a>, <a href="https://pixls.us/articles/a-blended-panorama-with-photoflow/">Andrea Ferrero</a> creator of <a href="https://github.com/aferrero2707/PhotoFlow">PhotoFlow</a>, even <a href="https://discuss.pixls.us/users/ofnuts/activity">Ofnuts</a> (how cool is that?) may make it out!</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/14050852344/in/dateposted-public/" title="Darktable II"><img src="https://farm3.staticflickr.com/2930/14050852344_d7fe5dd73d.jpg" width="500" height="500" alt="Darktable II"></a>
<figcaption>
Pascal, Johannes, and Tobias (left to right, bottom row) will be there!
</figcaption>
</figure>

<p>We’ve also already had a great response so far on <a href="https://pledgie.com/campaigns/30905">our Pledgie campaign</a>. The campaign is still running if you want to help out!</p>
<p><a href='https://pledgie.com/campaigns/30905'>
<img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' style='width: initial;'>
</a></p>
<p>If anyone is thinking they’d like to make it out to join us, please let me know as soon as possible so we can plan for space!</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/16706076622/in/album-72157632799856846/" title="Mairi (Further)"><img src="https://farm9.staticflickr.com/8613/16706076622_7217ced886_c.jpg" width="622" height="800" alt="Mairi (Further)"></a>
<figcaption>
Looks like <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846">Mairi</a> will be joining us!
</figcaption>
</figure>

<p>My friend and model Mairi will also be making it out for the meeting. She’ll be on hand to help us practice lighting setups, model interactions, and will likely be shooting right along with the rest of us as well!</p>
<p>I’ll also be assembling slides for my presentation during LGM.  I’ve got a 20 minute time slot to talk about the community we’ve been building here and the neat things our members have been up to (<a href="https://github.com/CarVac/filmulator-gui">Filmulator</a>, <a href="https://github.com/aferrero2707/PhotoFlow">PhotoFlow</a>, and more).</p>
<p>Speaking of slides and sharing information…</p>
<h3 id="github-organization"><a href="#github-organization" class="header-link-alt">Github Organization</a></h3>
<p>I’ve set up a <a href="https://github.com/pixlsus">Github Pixls organization</a> so that we can begin to share various things. This came about after talking with <a href="https://discuss.pixls.us/users/paperdigits/activity">@paperdigits</a> on the post about the upcoming podcast at jpeg2RAW.  We were talking about ways to <a href="https://discuss.pixls.us/t/pixls-us-github-organization/893">share information and assets</a> for creating/delivering presentations about Free Software photography.</p>
<p>At the moment there is only the single repository <a href="https://github.com/pixlsus/Presentations"><em>Presentations</em></a> as we are figuring out structure. I’ve uploaded my slides and notes from the <a href="https://github.com/pixlsus/Presentations/tree/master/LGM2015_State_Of">LGM2015 <em>State of the Libre Graphics</em></a> presentation announcing PIXLS. If you’re on <a href="http://www.github.com">Github</a> and want to join us just let me know!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[HDR Photography with Free Software (LuminanceHDR)]]></title>
            <link>https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/</link>
            <guid isPermaLink="true">https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/</guid>
            <pubDate>Tue, 26 Jan 2016 19:57:59 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/HDRLayers.jpg" /><br/>
                <h1>HDR Photography with Free Software (LuminanceHDR)</h1> 
                <h2>A first approach to creating and mapping HDR images</h2>  
<p>I have a mostly love/hate relationship with HDR images (well, with tonemapping HDRs more than with the HDR images themselves).
I think the problem is that it’s very easy to create really bad HDR images that the photographer <em>thinks look really good</em>.
I know because I’ve been there:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/226464161_2a792c925d_z.jpg" alt="Hayleys - Mobile, AL" height="369" width="640">
<figcaption>Don’t judge me, it was a weird time in my life…</figcaption>
</figure> 

<p>The best term I’ve heard used to describe over-processed images created from an HDR is <i>“clown vomit”</i> (which would also be a great name for a band, by the way).
They are easily spotted with some tell-tale signs such as the halos at high-contrast edges, the unrealistically hyper-saturated colors that make your eyes bleed, and a general affront to good taste.
In fact, while I’m putting up embarrassing images that I’ve done in the past, here’s one that scores on all the points for a crappy image from an HDR:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/210251868_26c6041c62_o.jpg" alt="Tractor" width="600" height="874">
<figcaption><a target="_blank" href="http://www.youtube.com/watch?v=juFZh92MUOY">“My Eyes! The goggles do nothing!”</a></figcaption>
</figure> 

<p>Crap-tastic! 
Of course, the allure here is that it provides first-timers a glimpse into something new, and they feel the desire to crank every setting up to 11 with no regard for good taste or aesthetics.</p>
<p>If you take anything away from this post, let it be this:  <strong>“Turn it <em>DOWN</em>”</strong>. 
If it looks good to you, then it’s too much. ;)</p>
<!-- more -->
<p class='aside' style='font-size: 1rem;'>HDR lightprobes are used in movie fx compositing to ensure that the lighting on CG models matches exactly the lighting for a live-action scene.  By using an HDR lightprobe, you can match the lighting exactly to what is filmed.
<br>
<br>
I originally learned about, and used, HDR images when I would use them to illuminate a scene in <a href="http://www.blender.org/">Blender</a>.  In fact, I will still often use <a href="http://www.pauldebevec.com/Probes/">Paul Debevec’s Uffizi gallery lightprobe</a> to light scene renders in Blender today.</p>

<p>For example, you may be able to record 10-12 stops of light information using a modern camera.  Some old films could record 12-13 stops of light, while your eyes can see approximately 14 stops.</p>
<p>HDR images are intended to capture <em>more</em> than this number of stops.  (Depending on your patience, significantly more in some cases).</p>
<p>I can go on a bit about the technical aspects of HDR imaging, but I won’t.  It’s boring.  Plus, I’m sure you can <a href="http://en.wikipedia.org/wiki/High-dynamic-range_imaging">use Wikipedia</a>, or <a href="http://lmgtfy.com/?q=HDR">Google </a>yourselves. :)
In the end, just realize that an HDR image is simply one that stores more light information than your camera sensor can capture in a single shot.</p>
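<p>To make the stop arithmetic above concrete, here is a small illustrative sketch (mine, not from any HDR software): each stop represents a doubling of light, so N stops of dynamic range span a brightest-to-darkest luminance ratio of 2<sup>N</sup>.</p>

```python
# Illustrative sketch: each photographic "stop" is a doubling of light,
# so a device with N stops of dynamic range covers a luminance ratio of
# 2**N between the darkest and brightest values it can record.
def contrast_ratio(stops: int) -> int:
    """Luminance ratio covered by a given number of stops."""
    return 2 ** stops

for stops in (10, 12, 14):
    print(f"{stops} stops = {contrast_ratio(stops):,}:1")
```

<p>So a 12-stop camera spans about a 4,096:1 ratio, while the roughly 14 stops your eyes can see span about 16,384:1; an HDR image aims to store even more than that.</p>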
<h2 id="taking-an-hdr-image-s-">Taking an HDR image(s)<a href="#taking-an-hdr-image-s-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>More light information than my camera can record in one shot?<br>Then how do I take an HDR photo?</p>
<p>You don’t.</p>
<p>You take multiple photos of a scene, and <em>combine</em> them to create the final HDR image.
Before I get into the process of capturing these photos to create an HDR with, consider something:</p>
<h3 id="when-why-to-use-hdr">When/Why to use HDR<a href="#when-why-to-use-hdr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>An HDR image is most useful to you when the scene you want to capture has bright and dark areas that fall outside the range of a single exposure, <em>and you feel that there is something important enough outside that range to include in your final image</em>.</p>
<p>That last part is important, because sometimes it’s OK to have some of your photo be too dark for details (or too light).  This is an aesthetic decision of course, but keep it in mind…</p>
<p>Here’s what happens.  Say you have a pretty scene you would like to photograph.  Maybe it’s the <a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">Lower Chapel of Sainte Chapelle</a>:</p>
<figure class='big-vid'>
<a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/7369521956_95d6a3003c_k.jpg" alt="Sainte Chapelle Lower Chapel" height="640" width="960">
</a>
<figcaption><a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">Sainte Chapelle Lower Chapel</a> by <a href="http://www.flickr.com/photos/jp_photo_online/with/7369521956/">iwillbehomesoon</a> on Flickr (<a href='https://creativecommons.org/licenses/by-nc-sa/2.0/'><span class='cc'>cbsna</span></a>)</figcaption>
</figure> 

<p>You may setup to take the shot, but when you are setting your exposure you may run into a problem.  To expose for the brighter parts of the image means that the shadows fall to black too quickly, crushing out the details there.</p>
<p>If you expose for the shadows, then the brighter parts of the image quickly clip beyond white.</p>
<p>The use case for an HDR is when you can’t find a happy medium between those two exposures.</p>
<p>A similar situation comes up when you want to shoot any ground details against a bright sky, but you want to keep the details in both.  Have a look at this example:</p>
<figure class='big-vid'>
<a href="http://www.flickr.com/photos/fredvdd/236863839/">
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/236863839_8722c5f2dd_b.jpg" alt="HDR Layers by dontmindme, on Flickr" height='640' width="960">
</a>
<figcaption>
<a href="http://www.flickr.com/photos/fredvdd/236863839/">HDR Layers</a> 
by <a href="http://www.flickr.com/photos/fredvdd">dontmindme</a>, on Flickr 
(<a href="https://creativecommons.org/licenses/by-nc-sa/2.0/" title="Creative Commons, BY-NC-SA"><span class='cc'>cbna</span></a>)
</figcaption>
</figure>

<p>In the first column, if you expose for the ground, the sky blows out.</p>
<p>In the second, you can drop the exposure to bring the sky in a bit, but the ground is getting too dark.</p>
<p>In the third, the sky is exposed nicely, but the ground has gone to mostly black.</p>
<p>If you wanted to keep the details in the sky and ground at the same time, you might use an HDR (you could technically also use exposure blending with just a couple of exposures and blend them by hand, but I digress) to arrive at the last column.</p>
<h3 id="shooting-images-for-an-hdr">Shooting Images for an HDR<a href="#shooting-images-for-an-hdr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Many cameras have an auto-bracketing feature that will let you quickly shoot a number of photos while changing the exposure value (EV) of each.  You can also do this by hand simply by changing one parameter of your exposure each time.</p>
<p>You can technically change any of ISO, shutter speed, or aperture to modify the exposure, but <strong>I’d recommend you change only the shutter speed</strong> (or EV value when in Aperture Priority modes).</p>
<p>The reason is that changing the shutter speed will not alter the depth-of-field (DoF) of your view or introduce any extra noise the way changing the aperture or ISO would.</p>
<p>When considering your scene, you will also want to try to stick to static scenes if possible.
The reason is that objects that move around (swaying trees, people, cars, fast moving clouds, etc.) could end up as ghosts or mis-alignments in your final image.
So as you’re starting out, choose your scene to help you achieve success.</p>
<p>Set up your camera someplace very steady (like a tripod), dial in your exposure and take a shot.
If you let your camera meter your scene for you then this is a good middle starting point.</p>
<p>For example, if you set up your camera and meter your scene, it might report a <sup>1</sup>⁄<sub>160</sub> second exposure.  This is our starting point (<strong>0EV</strong>).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010235.jpg" width='600' height='452'>
<figcaption>The base exposure, <sup>1</sup>&frasl;<sub>160</sub> s, 0EV</figcaption>
</figure>

<p>To capture the lower values, just halve your shutter speed, doubling the exposure time ( <sup>1</sup>&frasl;<sub>80</sub> second, +1EV), and take a photo.  Repeat if you’d like ( <sup>1</sup>&frasl;<sub>40</sub> second, +2EV).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010234.jpg" width="300" height="226" style='display:inline; width: 300px;'>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010233.jpg" width="300" height="226" style='display:inline; width: 300px; margin-left: 0.5rem;'>
<figcaption>
<sup>1</sup>⁄<sub>80</sub> second, +1EV (left), <sup>1</sup>⁄<sub>40</sub> second, +2EV (right)
</figcaption>
</figure>

<p>To capture the upper values, just double your starting shutter speed, halving the exposure time ( <sup>1</sup>⁄<sub>320</sub>, -1EV), and take a photo. Repeat if you’d like ( <sup>1</sup>⁄<sub>640</sub>, -2EV).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010236.jpg" width="300" height="226" style='display:inline; width: 300px;'>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010237.jpg" width="300" height="226" style='display:inline; width: 300px; margin-left:0.5rem;'>
<figcaption>
<sup>1</sup>⁄<sub>320</sub>, -1EV (left), <sup>1</sup>⁄<sub>640</sub>, -2EV (right)
</figcaption>
</figure>

<p>This will give you 5 images covering a range of -2EV to +2EV:</p>
<style>
table#EVs {
    border-collapse: collapse;
    border: solid 1px gray;
    margin-left: auto;
    margin-right: auto;
} 

#EVs th, #EVs td {
    border: solid 1px gray;
    padding: 0.5rem 0.5em;
    text-align:center;
}
</style>

<table id="EVs"><tbody><tr><th>Shutter Speed</th><th>Exposure Value</th></tr>
<tr><td><sup>1</sup>⁄<sub>640</sub></td><td>-2EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>320</sub></td><td>-1EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>160</sub></td><td>0EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>80</sub></td><td>+1EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>40</sub></td><td>+2EV</td></tr>
</tbody></table>

<p>Your values don’t have to be exactly 1EV each time; LuminanceHDR is usually smart enough to figure out what’s going on from the EXIF data in your images.  I chose full EV stops here to simplify the example.</p>
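<p>The relationship between shutter speed and exposure value is just a base-2 logarithm: every doubling of the exposure time adds one stop of light.  Here is a small Python sketch (purely illustrative, not part of any tool mentioned here) that reproduces the table above:</p>

```python
import math

def ev_offset(shutter_seconds, base_seconds):
    """EV offset of an exposure relative to the metered base exposure.

    Doubling the exposure time lets in twice the light (+1 EV);
    halving it gives -1 EV.
    """
    return math.log2(shutter_seconds / base_seconds)

base = 1 / 160  # the metered starting exposure (0 EV)
for s in (1 / 640, 1 / 320, 1 / 160, 1 / 80, 1 / 40):
    print(f"1/{round(1 / s)} s -> {ev_offset(s, base):+.0f} EV")
```

<p>Running this prints the five speeds from <sup>1</sup>⁄<sub>640</sub> at -2EV through <sup>1</sup>⁄<sub>40</sub> at +2EV, matching the table.</p>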
<p>So armed with your images, it’s time to turn them into an HDR image!</p>
<h2 id="creating-an-hdr-image">Creating an HDR Image<a href="#creating-an-hdr-image" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You kids have it too easy these days.  We used to have to bring all the images into Hugin and align them before we could save an hdr/exr file.  Nowadays you’ve got a phenomenal piece of Free/Open Source Software to handle this for you:</p>
<p><a href="http://qtpfsgui.sourceforge.net/" style="font-size:1.5rem;">LuminanceHDR</a><br>(Previously qtpfsgui. Seriously.)</p>
<p>After installing it, open it up and hit “<strong>New HDR Image</strong>“:</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-Open.png" alt="LuminanceHDR startup screen" width='475' height='263'>
</figure>

<p>This will open up the <em>“HDR Creation Wizard”</em> that will walk you through the steps of creating the HDR.  The splash screen notes a couple of constraints.</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-1.png" alt="LuminanceHDR wizard splash screen" width='600' height='358'>
</figure>

<p>On the next screen, you’ll be able to load up all of the images in your stack.  Just hit the big green “<b style="color:green; font-size:1.5em;">+</b>“ button in the middle, and choose all of your images:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-load.png" alt="LuminanceHDR load wizard" width='600' height='358'>
</figure>

<p>LuminanceHDR will load up each of your files and investigate them to try to determine the EV values for each one.  It usually does a good job of this on its own, but if there’s a problem you can always manually specify the actual EV value for each image.</p>
<p>Also notice that because I only halved or doubled my shutter speed, each of the relative EV values is neatly spaced 1EV apart.  They don’t have to be, though.  I could have just as easily used &frac12; EV or &frac13; EV steps as well.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-loaded.png" alt="LuminanceHDR creation wizard" width='600' height='358'>
</figure>

<p>If there is even the remotest question about how well your images will line up, I’d recommend that you check the box for <em>“Autoalign images”</em>, and let <a href="http://hugin.sourceforge.net/">Hugin’s</a> align_image_stack do its magic.
You really need all of your images to line up perfectly for the best results.</p>
<p>Hit “<strong>Next</strong>“, and if you are aligning the images be patient.
Hugin’s align_image_stack will find control points between the images and remap them so they are all aligned.
When it’s done you’ll be presented with some editing tools to tweak the final result before the HDR is created.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-editing.png" alt="LuminanceHDR Creation Wizard" width='600' height='355'>
</figure>

<p>You are basically looking at a difference view between images in your stack at the moment.  You can choose which two images to difference compare by choosing them in the list on the left.  You can now shift an image horizontally/vertically if it’s needed, or even generate a ghosting mask (a mask to handle portions of an image where objects may have shifted between frames).</p>
<p>If you are careful, and there’s not much movement in your image stacks, then you can safely click through this screen.  Hit the “<strong>Next</strong>“ button.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-final.png" alt="LuminanceHDR Creation Wizard" width='600' height='403'>
</figure>

<p>This is the final screen of the HDR Creation Wizard.
There are a few different ways to calculate the pixel values that make up an HDR image, and this is where you can choose which ones to use.
For the most part, people far smarter than I had a look at a bunch of creation methods, and created the predefined profiles.
Unless you know what you’re doing, I would stick with those.</p>
<p>Hit “<strong>Finish</strong>“, and you’re all done!</p>
<p>You’ll now be presented with your HDR image in LuminanceHDR, ready to be tonemapped so us mere mortals can actually make sense of the HDR values present in the image.
At this point, I would hit the “Save As…” button, and save your work.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-Main.png" alt="LuminanceHDR Main" width='600' height='340'>
</figure>



<h2 id="tonemapping-the-hdr">Tonemapping the HDR<a href="#tonemapping-the-hdr" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So now you’ve got an HDR image.  Congratulations!</p>
<p>The problem is, you can’t really view it with your puny little monitor.</p>
<p>The reason is that the HDRi now contains more information than can be represented within the limited range of your monitor (and eyeballs, likely).  So we need to find a way to represent all of that extra light-goodness so that we can actually view it on our monitors.  This is where <a href="http://en.wikipedia.org/wiki/Tone_mapping">tonemapping </a>comes in.</p>
<p>We basically have to take our HDRi and use a method for compressing all of that radiance data down into something we can view on our monitors/prints/eyeballs.  We need to create a Low Dynamic Range (LDR) image from our HDR.</p>
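<p>To make that compression concrete, here is a toy global tone-mapping curve in the spirit of the classic Reinhard &lsquo;02 operator.  This is a simplified illustration of the idea, not LuminanceHDR&rsquo;s actual implementation:</p>

```python
def tonemap(luminance):
    """Map an HDR luminance in [0, inf) into the displayable range [0, 1).

    The simple global curve L / (1 + L): dark and mid values pass through
    almost linearly, while very bright values approach 1.0 asymptotically
    instead of clipping to white.
    """
    return luminance / (1.0 + luminance)

# A scene spanning five orders of magnitude all fits in displayable range:
for lum in (0.01, 0.5, 1.0, 10.0, 1000.0):
    print(f"{lum:>8} -> {tonemap(lum):.4f}")
```

<p>Notice the trade-off every TMO has to make: the whole luminance range fits, but the brighter it gets, the more the differences between values are squeezed together.</p>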
<p>Yes - we just went through all the trouble of stacking together a bunch of LDR images to create the HDRi, and now we’re going <i>back to LDR </i>?  We are - but this time we are armed with <b><i>way </i></b>more radiance data than we had to begin with!</p>
<p>The question is, how do we represent all that extra data in an LDR?  There are quite a few different ways.  LuminanceHDR provides 9 different tonemapping operators (TMOs) to represent your HDRi as an LDR image:</p>
<ul>
<li><a href="#mantiuk-06">Mantiuk ‘06</a></li>
<li><a href="#mantiuk-08">Mantiuk ‘08</a></li>
<li><a href="#fattal">Fattal</a></li>
<li><a href="#drago">Drago</a></li>
<li><a href="#durand">Durand</a></li>
<li><a href="#reinhard-02">Reinhard ‘02</a></li>
<li><a href="#reinhard-05">Reinhard ‘05</a></li>
<li><a href="#ashikhmin">Ashikhmin</a></li>
<li><a href="#pattanaik">Pattanaik</a></li>
</ul>
<p>Just a small reminder, there’s a ton of math involved in how to map these values to an LDR image.
I’m going to skip the math.
The <a href="http://www.mpi-inf.mpg.de/resources/tmo/">references are out there</a> if you want them.</p>
<p>I’ll try to give examples of each of the operators below, and a little comment here and there.  If you want more information, you can always check out the list on the <a href="http://osp.wikidot.com/parameters-for-photographers">Open Source Photography wikidot page</a>.</p>
<p>Before we get started, let’s have a look at the window we’ll be working in:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-TMO.png" alt="LuminanceHDR Main Window" width='960' height='544'>
</figure>

<p><span style="color:#00FF00;">Tonemap</span> is the section where you can choose which TMO you want to use, and will expose the various parameters you can change for each TMO.  This is the section you will likely be spending most of your time, tweaking the settings for whichever TMO you decide to play with.</p>
<p><span style="color:#00FFFF;">Process</span> gives you two things you’ll want to adjust.  The first is the size of the output that you want to create (<i>Result Size</i>).  While you are trying things out and dialing in settings you’ll probably want to use a smaller size here (some operators will take a while to run against the full resolution image).  The second is any pre-gamma you want to apply to the image.  I’ll talk about this setting a bit later on.</p>
<p>Oh, and this section also has the “Tonemap” button to apply your settings and generate a preview.  I’ll also usually keep the “Update current LDR” checked while I rough in parameters.  When I’m fine-tuning I may uncheck this (it will create a new image every time you hit the “Tonemap” button).</p>
<p><span style="color:#FF0000;">Results</span> are shown in this big center section of the window.  The result will be whatever <i>Result Size</i> you set in the previous section.</p>
<p><span style="color:#0000FF;">Previews</span> are automatically generated and shown in this column for each of the TMOs.  If you click on one, it will automatically apply that TMO to your image and display it (at a reduced resolution - I think the default is 400px, but you can change it if you want).  It’s a nice way to quickly get an overview of what all the different TMOs are doing to your image.</p>
<p>Ok, with that out of the way, let’s dive into the TMOs and have a look at what we can do.  I’m going to try to aim for a reasonably realistic output here that (hopefully) won’t make your eyeballs bleed.  No promises, though.</p>
<p class='aside'>
<span>Need an HDR to follow along?</span>
I figured it might be more fun (easier?) to follow along if you had the same file I do.
<br>

So here it is, don’t say I never gave you anything (this HDR is licensed <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">CC BY-NC-SA</a> by me):
<br>

<span>
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVMTJwSS14aGtCc1U">Download from Google Drive (41MB .hdr)</a>
</span>
</p>


<p>Another note - all of the operators can have their results tweaked by modifying the pre-gamma value ahead of time.  This is applied to the image <i>before </i>the TMO is applied, and will make a difference in the final output.  Usually pushing the pre-gamma value down will increase contrast/brightness in the image, while increasing it will do the opposite.  I find it better to start with pre-gamma set to 1 as I experiment; just remember that it is another factor you can use to modify your final result.</p>
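<p>As a rough mental model, a gamma stage is just a power curve applied to normalized pixel values before the TMO runs.  Whether LuminanceHDR&rsquo;s pre-gamma slider maps directly to this exponent or to its inverse is an assumption here, purely for illustration:</p>

```python
def pre_gamma(value, gamma):
    """Apply a power curve to a normalized pixel value in [0, 1].

    gamma < 1 lifts the midtones (brighter); gamma > 1 pushes them down
    (darker); 1.0 leaves the value unchanged.  NOTE: how LuminanceHDR's
    pre-gamma slider maps onto this exponent is an assumption for
    illustration, not taken from its source.
    """
    return value ** gamma

print(pre_gamma(0.5, 1.0))   # gamma of 1 is the identity
print(pre_gamma(0.25, 0.5))  # midtone lifted
```

<p>The key point survives whichever convention the slider uses: the curve reshapes the tonal distribution the TMO sees, so the same TMO settings give different results at different pre-gamma values.</p>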
<h3 id="mantiuk-06">Mantiuk ‘06<a href="#mantiuk-06" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’m starting with this one because it’s the first in the list of TMOs.  Let’s see what the defaults from this operator look like against our base HDRi:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_default.jpg" alt="Mantiuk 06 default" width='600' height='452'>
<figcaption>
Default Mantiuk ‘06 applied
</figcaption>
</figure>

<p>By default Mantiuk ‘06 produces a muted color result that seems pleasing to my eye.  Overall the image feels like it’s almost “dirty” or “gritty” with these results.  The default settings produce a bit of extra local contrast boosting as well.</p>
<p>Let’s see what the parameters do to our image.</p>
<h4 id="contrast-factor">Contrast Factor<a href="#contrast-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default factor is 0.10.</p>
<p>Pushing this value down to as low as 0.01 produces just a slight increase in contrast across the image from the default.  Not that much overall.</p>
<p>Pushing this value up, though, will tone down the contrast overall.  I think this helps to add some moderation to the image, as hard contrasts can be jarring to the eyes sometimes.  Here is the image with only the <i>Contrast Factor</i> pushed up to 0.40:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_contrast_mapping_0.4_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" alt='Mantiuk 06 Contrast Factor 0.4' width='600' height='452'>
<figcaption>
Mantiuk ‘06 - Contrast Factor increased to 0.40<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="saturation-factor">Saturation Factor<a href="#saturation-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.80.</p>
<p>This factor just scales the saturation in the image, and behaves as expected.  If you find the colors a bit muted using this TMO, you can bump this value a bit (don’t get crazy).  For example, here is the <em>Saturation Factor</em> bumped to 1.10:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_saturation_factor_1.1_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Saturation 1.10'>
<figcaption>
Mantiuk ‘06 - Saturation Factor increased to 1.10<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Of course, you can also go the other way if you want to mute the colors a bit more:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_saturation_factor_0.4_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Saturation 0.40'>
<figcaption>
Mantiuk ‘06 - Saturation Factor decreased to 0.40<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="detail-factor">Detail Factor<a href="#detail-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.0.</p>
<p>The <em>Detail Factor</em> appears to control local contrast intensity.  It gets overpowering very quickly, so make small movements here (if at all).  Here is what pushing the <em>Detail Factor</em> up to 10.0 produces:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_detail_factor_10_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Detail Factor' >
<figcaption>
<strong><em>Don’t</em></strong> do this.  Mantiuk ‘06 - Detail Factor increased to 10.0<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="contrast-equalization">Contrast Equalization<a href="#contrast-equalization" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This is supposed to equalize the contrast if there are heavy swings of light/dark across the image on a global scale, but in my example did little to the image (other than a strange lightening in the upper left corner).</p>
<h4 id="my-final-version-2">My Final Version<a href="#my-final-version-2" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I played a bit starting from the defaults.  First I wanted to push down the contrast a bit to make everything just a bit more realistic, so I pushed <em>Contrast Factor</em> up to 0.30.  I slightly bumped the <em>Saturation Factor</em> to 0.95 as well.</p>
<p>I liked the textures of the tree and house, so I wanted to bring those back up a bit after decreasing the Contrast Factor, so I pushed the <em>Detail Factor</em> up to 5.0.</p>
<p>Here is what I ended up with in the end:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_contrast_mapping_0.3_saturation_factor_0.95_detail_factor_5_FINAL.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default-960.jpg" width='960' height='723' alt='Mantiuk 06 Final Result'>
<figcaption>
My final output (Contrast 0.3, Saturation 0.95, Detail 5.0)<br>
(click to compare to defaults)
</figcaption>
</figure>


<h3 id="mantiuk-08">Mantiuk ‘08<a href="#mantiuk-08" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Mantiuk ‘08 is a global contrast TMO (for comparison, Mantiuk ‘06 uses local contrast heavily).  Being a global operator, it’s very quick to apply.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_default.jpg" alt="Mantiuk 08 default" height='' width=''>
<figcaption>
Default Mantiuk ‘08 applied
</figcaption>
</figure>

<p>As you can see, the effect of this TMO is to compress the dynamic range into an LDR output using a function that operates across the entire image globally.  Overall, I think this produces a more realistic result.</p>
<p>The default output is not bad at all, where brights seem appropriately bright, and darks are dark while still retaining details.  It does feel like the resulting output is a little over-sharp to my eye, however.</p>
<p>There are only a couple of parameters for this TMO (unless you specifically override the <em>Luminance Level</em> with the checkbox, Mantiuk ‘08 will automatically adjust it for you):</p>
<h4 id="predefined-display">Predefined Display<a href="#predefined-display" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are options for <em>LCD Office, LCD, LCD Bright,</em> and <em>CRT</em> but they didn’t seem to make any difference in my final output at all.</p>
<h4 id="color-saturation-2">Color Saturation<a href="#color-saturation-2" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.0.</p>
<p><em>Color Saturation</em> operates exactly how you’d expect.  Dropping this value decreases the saturation, and vice versa.  Here’s a version with the <em>Color Saturation</em> bumped to 1.50:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_colorsaturation_1.5_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default.jpg" width='600' height='452'>
<figcaption>
Mantiuk ‘08 - Color Saturation increased to 1.50<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="contrast-enhancement">Contrast Enhancement<a href="#contrast-enhancement" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.0.</p>
<p>This will affect the global contrast across the image.  The default seemed to have a bit too much contrast, so it’s worth it to dial this value in.  For instance, here is the <em>Contrast Enhancement</em>  dialed down to 0.51:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_contrastenhancement_0.51_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default.jpg" width='600' height='452' alt='Mantiuk 08 Contrast Enhancement 0.51'>
<figcaption>
Mantiuk ‘08 - Contrast Enhancement decreased to 0.51<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Compared to the default settings I feel like this operator can work better if the contrast is turned down just a bit to make it all a little less harsh.</p>
<h4 id="enable-luminance-level">Enable Luminance Level<a href="#enable-luminance-level" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This checkbox/slider allows you to manually specify the luminance level in the image.  The problem I ran into was that with this enabled, I couldn’t adjust the luminance far enough to keep bright areas in the image from blowing out.  If I left the default behavior of automatically adjusting luminance, it kept things more under control.</p>
<h4 id="my-final-version-3">My Final Version<a href="#my-final-version-3" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Starting from the defaults, I pushed down the <em>Contrast Enhancement</em> to 0.61 to even out the overall contrast.  I bumped the <em>Color Saturation</em> to 1.10 to bring out the colors a bit more as well.</p>
<p>I also dropped the pre-gamma correction to 0.91 in order to bring back some of the contrast lost from the <em>Contrast Enhancement</em>.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.91_mantiuk08_auto_luminancecolorsaturation_1.1_contrastenhancement_0.61_FINAL.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default-960.jpg" width='960' height='723' alt='Mantiuk 08 final result'>
<figcaption>
My final Mantiuk ‘08 output<br>
(pre-gamma 0.91, Contrast Enhancement 0.61, Color Saturation 1.10)<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="fattal">Fattal<a href="#fattal" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Crap.  Time for this TMO I guess…</p>
<p><strong>THIS</strong> is the TMO responsible for some of the greatest sins of HDR images.
Did you see the first two images in this post?  Those were Fattal.
The problem is that it’s really easy to get stupid with this TMO.</p>
<p>Fattal (like the other local contrast operators) is dependent on the final output size of the image.
When testing this operator, do it at the full resolution you will want to export.
The results will not match up if you change size.
I’m also going to focus on using only the newer v.2.3.0 version, not the old one.</p>
<p>Here is what the default values look like on our image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_default.jpg" alt="Fattal default" height='' width=''>
<figcaption>
Default Fattal applied
</figcaption>
</figure>

<p>The defaults are pretty contrasty, and the color seems saturated quite a bit as well.  Maybe we can get something useful out of this operator.  Let’s have a look at the parameters.</p>
<h4 id="alpha">Alpha<a href="#alpha" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter is supposed to be a threshold against which to apply the effect. According to the wikidot, decreasing this value should increase the level of details in the output and vice versa.  Here is an example with the <em>Alpha</em> turned down to 0.25:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_alpha_0.25_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452'>
<figcaption>
Fattal - Alpha decreased to 0.25<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Increasing the <em>Alpha</em> value seems to darken the image a bit as well.</p>
<h4 id="beta">Beta<a href="#beta" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.90.</p>
<p>This parameter is supposed to control the amount of the algorithm applied on the image.  A value of 1 is no effect on the image (straight gamma=1 mapping).  Lower values will increase the amount of the effect.  Recommended values are between 0.8 and 0.9.  As the values get lower, the image gets more cartoonish looking.</p>
<p>Here is an example with <em>Beta</em> dropped down to 0.75:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_beta_0.75_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452' alt='Fattal Beta 0.75'>
<figcaption>
Fattal - Beta decreased to 0.75<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="color-saturation">Color Saturation<a href="#color-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.0.</p>
<p>This parameter does exactly what’s described.  Nothing interesting to see here.</p>
<h4 id="noise-reduction">Noise Reduction<a href="#noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.</p>
<p>This should suppress fine detail noise from being picked up by the algorithm for enhancement.  I’ve noticed that it will slightly affect the image brightness as well.  Fine details may be lost if this value is too high.  Here the <i>Noise Reduction</i> has been turned up to 0.15:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_noiseredux_0.15_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452' alt='Fattal NR 0.15'>
<figcaption>
Fattal - Noise Reduction increased to 0.15<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="my-final-version-4">My Final Version<a href="#my-final-version-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This TMO is sensitive to changes in its parameters.  Small changes can swing the results far, so proceed lightly.</p>
<p>I increased the <em>Noise Reduction</em> a little bit up front, which lightened up the image.  Then I dropped the <em>Beta</em> value to let the algorithm work to brighten up the image even further.  To offset the increase, I pushed <em>Alpha</em> up a bit to keep the local contrasts from getting too harsh.  A few minutes of adjustments yielded this:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_alpha_1.07_beta_0.86_saturation_0.7_noiseredux_0.02_fftsolver_1_FINAL.jpg" data-swap-src="untitled_pregamma_1_fattal_default-960.jpg" width='960' height='723' alt='Fattal Final Result'>
<figcaption>
My Fattal output - Alpha 1.07, Beta 0.86, Saturation 0.7, Noise red. 0.02<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Overall, Fattal can be easily abused.  Don’t abuse the Fattal TMO.  If you find your values sliding too far outside of the norm, step away from your computer, get a coffee, take a walk, then come back and see if it still hurts your eyes.</p>
<h3 id="drago">Drago<a href="#drago" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Drago is another of the global TMOs.  It also has just one control: bias.</p>
<p>Here is what the default values produce:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_drago_default.jpg" alt="" height='' width=''>
<figcaption>
Default Drago applied
</figcaption>
</figure>

<p>The default values produced a very washed out appearance to the image.  The black points are heavily lifted, resulting in a muddy gray in dark areas.</p>
<p><em>Bias</em> is the only parameter for this operator.  The default value is 0.85.  Decreasing this value will lighten the image significantly, while increasing it will darken it.  For my image, even pushing the <em>Bias</em> value all the way up to 1.0 only produced marginal results:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_drago_bias_1.jpg" data-swap-src="untitled_pregamma_1_drago_default.jpg" width='600' height='452' alt='Drago Bias 1.0'>
<figcaption>
Drago - Bias 1.0<br>
(click to compare to defaults)
</figcaption>
</figure>
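<p>For the curious, the <em>Bias</em> behavior makes sense given the published Drago ’03 formula: the bias value becomes an exponent that shifts the base of the logarithm between 2 (for bright pixels) and 10 (for dark ones).  Here is a rough Python sketch of that mapping - my own simplification, so LuminanceHDR’s actual implementation will differ in its details:</p>

```python
import numpy as np

def drago_tmo(lum, bias=0.85, ld_max=100.0):
    """Sketch of the Drago '03 adaptive-logarithmic mapping.

    lum: HDR luminance array; bias: the operator's only parameter;
    ld_max: target display luminance (a hypothetical default here).
    """
    lw_max = lum.max()
    # bias enters as an exponent; lowering bias lightens the image
    exponent = np.log(bias) / np.log(0.5)
    # interpolates the log base between 2 (bright) and 10 (dark)
    base_interp = np.log(2.0 + 8.0 * (lum / lw_max) ** exponent)
    return (ld_max * 0.01 / np.log10(1.0 + lw_max)) * np.log(1.0 + lum) / base_interp
```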

<p>Even at this level the image still appears very washed out.  The only other adjustment available is the pre-gamma applied before the TMO operates.  After experimenting for a bit, I settled on a pre-gamma of 0.67 in addition to the <em>Bias</em> of 1:</p>
<h4 id="my-final-version-5">My Final Version<a href="#my-final-version-5" class="header-link"><i class="fa fa-link"></i></a></h4>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.67_drago_bias_1.jpg" data-swap-src="untitled_pregamma_1_drago_default-960.jpg" width='960' height='723' alt='Drago final result'>
<figcaption>
My result: Drago - Bias 1.0, pre-gamma 0.67<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="durand">Durand<a href="#durand" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Most of the older documentation/posts that I can find describe Durand as the most realistic of the TMOs, yielding good results that do not appear overly processed.</p>
<p>Indeed, the default settings immediately look reasonably natural, though they do exhibit a bit of blowing out in very bright areas, which I imagine can be fixed by adjusting the right parameters.  Here is the default Durand output:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_default.jpg" alt="" height='' width=''>
<figcaption>
Default Durand applied
</figcaption>
</figure>

<p>There are three parameters that can be adjusted for this TMO; let’s have a look:</p>
<h4 id="base-contrast">Base Contrast<a href="#base-contrast" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 5.00.</p>
<p>Most sources I’ve read consider this value a little high, usually recommending dropping it into the 3&ndash;4 range.  Here is the image with the <i>Base Contrast</i> dropped to 3.5:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_base_3.5_default.jpg" data-swap-src="untitled_pregamma_1_durand_default.jpg" width='600' height='452' alt='Durand Base Contrast 3.5'>
<figcaption>
Durand - Base Contrast decreased to 3.5<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>The <em>Base Contrast</em> does appear to drop the contrast in the image, but it also drops the blown-out high values on the house to more reasonable levels.</p>
<h4 id="spatial-kernel-sigma">Spatial Kernel Sigma<a href="#spatial-kernel-sigma" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 2.00.</p>
<p>This parameter appears to change the contrast in the image.  Large value swings are required before changes become noticeable, depending on the other parameter values.  Pushing the value up to 65.00 looks like this:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_spatial_65_default.jpg" data-swap-src="untitled_pregamma_1_durand_default.jpg" width='600' height='452' alt='Durand Spatial Kernel 65.00'>
<figcaption>
Durand - Spatial Kernel Sigma increased to 65.00<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="range-kernel-sigma">Range Kernel Sigma<a href="#range-kernel-sigma" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 2.00.</p>
<p>My limited testing shows that this parameter doesn’t quite operate correctly.  Changes will not modify the output image until you reach a certain threshold in the upper bounds, where it will overexpose the image.  I am assuming there is a bug in the implementation, but will have to test further before filing a report.</p>
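<p>These three controls map onto the bilateral-filter decomposition the Durand ’02 operator is built on: the log-luminance is split into a smooth base layer and a detail layer, the base is compressed toward the <em>Base Contrast</em> target, and the detail is added back.  A brute-force sketch (my own simplification, not LuminanceHDR’s fast implementation):</p>

```python
import numpy as np

def durand_tmo(lum, base_contrast=5.0, sigma_s=2.0, sigma_r=2.0):
    """Durand '02 sketch: compress a bilateral-filtered base layer,
    keep the detail layer untouched.  Brute-force bilateral, O(n^2),
    so only suitable for tiny test images."""
    log_l = np.log10(np.maximum(lum, 1e-9))
    h, w = log_l.shape
    base = np.empty_like(log_l)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(h):
        for j in range(w):
            # Spatial Kernel Sigma: how far neighbours reach
            ws = np.exp(-((ys - i) ** 2 + (xs - j) ** 2) / (2 * sigma_s ** 2))
            # Range Kernel Sigma: how similar in log-luminance they must be
            wr = np.exp(-((log_l - log_l[i, j]) ** 2) / (2 * sigma_r ** 2))
            k = ws * wr
            base[i, j] = (k * log_l).sum() / k.sum()
    detail = log_l - base
    # Base Contrast: target dynamic range of the compressed base layer
    scale = np.log10(base_contrast) / max(base.max() - base.min(), 1e-9)
    out_log = base * scale + detail - base.max() * scale
    return 10.0 ** out_log
```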
<h4 id="my-final-version-6">My Final Version<a href="#my-final-version-6" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>While experimenting, I found that pre-gamma adjustments can affect the saturation of the output image.  Pushing pre-gamma down a bit will increase the saturation.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.88_durand_spatial_5_range_1.01_base_3.6_FINAL.jpg" data-swap-src="untitled_pregamma_1_durand_default-960.jpg" width='960' height='723' alt='Durand final result'>
<figcaption>
My Durand results - pre-gamma 0.88, Contrast 3.6, Spatial Sigma 5.00<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>I pulled the <em>Base Contrast</em> back to keep the sides of the house from blowing out.  Once I had done that, I also dropped the pre-gamma to 0.88 to give the colors a small saturation bump.  A slight boost to <em>Spatial Kernel Sigma</em> increased local contrast a bit as well.</p>
<p>Finally, I used the <em>Adjust Levels</em> dialog to modify the levels slightly by raising the black point a small amount (hey - I’m the one writing about all these #@$%ing operators, I deserve a chance to cheat a little).</p>
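<p>For reference, the cheat is nothing exotic - a levels adjustment is just a linear remap of the tonal range, something like this hypothetical helper:</p>

```python
import numpy as np

def adjust_levels(img, black=0.0, white=1.0, gamma=1.0):
    """Minimal levels adjustment: remap [black, white] to [0, 1],
    then apply a midtone gamma.  Raising `black` deepens the shadows."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** (1.0 / gamma)
```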
<h3 id="reinhard-02">Reinhard ‘02<a href="#reinhard-02" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is supposed to be another very natural looking operator.  The initial default result looks good with medium-low contrast and nothing blowing out immediately:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard02_default.jpg" alt="" height='' width=''>
<figcaption>
Default Reinhard ‘02 applied
</figcaption>
</figure>

<p>Even though many parameters are listed, they don’t really appear to make a difference, at least with my test HDR.  Even worse, attempting to use the “Use Scales” option usually just crashes LuminanceHDR for me.</p>
<h4 id="key-value">Key Value<a href="#key-value" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 0.18.</p>
<p>This appears to be the only parameter that does anything to my image at the moment.  Increasing it will brighten the image, and decreasing it will darken it.</p>
<p>Here is the image with <em>Key Value</em> turned down to 0.05:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard02_key_0.05_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard02_default.jpg" width='600' height='452' alt='Reinhard 02 Key Value 0.05'>
<figcaption>
Reinhard ‘02 - Key Value 0.05<br>
(click to compare to defaults)
</figcaption>
</figure>
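<p>The <em>Key Value</em> behavior matches the global form of the Reinhard ’02 operator: the image is scaled so its log-average luminance lands on the key (a stand-in for “middle grey”), then run through a simple sigmoid.  A sketch, ignoring the “scales”/dodging pass entirely:</p>

```python
import numpy as np

def reinhard02_global(lum, key=0.18):
    """Reinhard '02 global operator sketch (no 'Use Scales' pass)."""
    # log-average luminance of the scene
    log_avg = np.exp(np.mean(np.log(1e-6 + lum)))
    # scale the scene so its log-average sits at `key`
    l_m = (key / log_avg) * lum
    # sigmoid: compresses highlights, nearly linear in the shadows
    return l_m / (1.0 + l_m)
```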



<h4 id="phi">Phi<a href="#phi" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter does not appear to have any effect on my image.</p>
<h4 id="use-scales">Use Scales<a href="#use-scales" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Turning this option on currently crashes my session in LuminanceHDR.</p>
<h4 id="my-final-version-7">My Final Version<a href="#my-final-version-7" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I started by setting the <i>Key Value</i> very low (0.01), and slowly adjusted it up until I got the highlights about where I wanted them.  Since this was the only parameter that modified the image, I then adjusted the pre-gamma up until I reached roughly the exposure I thought looked best (1.09).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1.09_reinhard02_key_0.09_phi_1_FINAL.jpg" data-swap-src="Cabin_pregamma_1_reinhard02_default-960.jpg" width='960' height='723' alt='Reinhard 02 final result'>
<figcaption>
Final Reinhard ‘02 version - Key Value 0.09, pre-gamma 1.09<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="reinhard-05">Reinhard ‘05<a href="#reinhard-05" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Reinhard ‘05 is supposed to be another more ‘natural’ looking TMO, and also operates globally on the image.  The default settings produce an image that looks under-exposed and very saturated:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_default.jpg" alt="" height='' width=''>
<figcaption>
Default Reinhard ‘05 applied
</figcaption>
</figure>

<p>There are three parameters for this TMO that can be adjusted.</p>
<h4 id="brightness">Brightness<a href="#brightness" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is -10.00.</p>
<p>Interestingly, pushing this parameter down (all the way to its lowest setting, -20) did not darken my image at all.  Pulling it up, however, did increase the brightness overall.  Here the brightness is increased to -2.00:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_brightness_-2_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 brightness -2.00'>
<figcaption>
Reinhard ‘05 - Brightness increased to -2.00<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="chromatic-adaptation">Chromatic Adaptation<a href="#chromatic-adaptation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 0.00.</p>
<p>This parameter appears to affect the saturation in the image.  Increasing it desaturates the results, which is fine given that the default value of 0.00 shows a fairly saturated image to begin with.  Here is the <i>Chromatic Adaptation</i> turned up to 0.60:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_chromatic_adaptation_0.6_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 chromatic adaptation 0.6'>
<figcaption>
Reinhard ‘05 - Chromatic Adaptation increased to 0.6<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="light-adaptation">Light Adaptation<a href="#light-adaptation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter modifies the global contrast in the final output.  It starts at the maximum of 1.00, and decreasing this value will increase the contrast in the image.  Pushing the value down to 0.5 does this to the test image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_light_adaptation_0.5_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 light adaptation 0.50'>
<figcaption>
Reinhard ‘05 - Light Adaptation decreased to 0.50<br>
(click to compare to defaults)
</figcaption>
</figure>
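<p>All three controls line up with the photoreceptor model from the Reinhard&ndash;Devlin ’05 paper, where each pixel is compressed by an adaptation value sigma.  A very rough sketch - the contrast exponent <em>m</em> is normally derived from scene statistics, I’ve fixed it here for brevity, and LuminanceHDR’s implementation will differ in its details:</p>

```python
import numpy as np

def reinhard05(rgb, brightness=-10.0, chromatic=0.0, light=1.0, m=0.6):
    """Photoreceptor sketch: out = I / (I + sigma), sigma = (f * Ia)^m.
    `m` is fixed here; the paper derives it from scene statistics."""
    lum = rgb.mean(axis=-1, keepdims=True)
    f = np.exp(-brightness)  # raising Brightness shrinks sigma -> brighter
    # Chromatic Adaptation: blend per-channel value with pixel luminance
    i_local = chromatic * rgb + (1.0 - chromatic) * lum
    i_global = chromatic * rgb.mean(axis=(0, 1)) + (1.0 - chromatic) * lum.mean()
    # Light Adaptation: blend pixel-level with scene-average adaptation
    i_a = light * i_local + (1.0 - light) * i_global
    sigma = (f * i_a) ** m
    return rgb / (rgb + sigma + 1e-9)
```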



<h4 id="my-final-version-8">My Final Version<a href="#my-final-version-8" class="header-link"><i class="fa fa-link"></i></a></h4>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_brightness_-3_chromatic_adaptation_0.6_light_adaptation_0.75_FINAL.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default-960.jpg" width='960' height='723' alt='Reinhard 05 final result'>
<figcaption>
My Reinhard ‘05 - Brightness -5.00, Chromatic Adapt. 0.60, Light Adapt. 0.75<br>
(click to compare to defaults)
</figcaption>
</figure>


<p>Starting from the defaults, I raised the <em>Brightness</em> to -5.00 to lift the darker areas of the image, while keeping an eye on the highlights to keep them from blowing out.  I then decreased the <em>Light Adaptation</em> to 0.75, until the scene had a reasonable amount of contrast without becoming overpowering.  Finally, I turned up the <em>Chromatic Adaptation</em> to 0.60 to reduce the saturation to something more realistic.</p>
<h3 id="ashikhmin">Ashikhmin<a href="#ashikhmin" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This TMO has little in the way of controls - just options for two different equations that can be used, and a slider.  The default (Eqn. 2) image is very dark and heavily saturated:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_default.jpg" alt="Ashikhmin default" height='' width=''>
<figcaption>
Default Ashikhmin applied
</figcaption>
</figure>

<p>There is a checkbox option for using a “Simple” method (it produces identical results regardless of which equation is checked, so I suspect it ignores that setting).</p>
<h4 id="simple">Simple<a href="#simple" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Checking the <em>Simple</em> checkbox removes any control over the image parameters, and yields this image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-simple.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin simple'>
<figcaption>
Ashikhmin - Simple<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Fairly saturated, but exposed reasonably well.  It lacks some contrast, but the tones are all there.  This result could use some further massaging to knock down the saturation and to bump the contrast slightly (or adjust pre-gamma).</p>
<h4 id="equation-4">Equation 4<a href="#equation-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This is the result of choosing <i>Equation 4</i> instead:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq4_default.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin equation 4'>
<figcaption>
Ashikhmin - Equation 4<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>There is a large loss of local contrast details in the scene, and some of the edges appear very soft.  Overall the exposure remains very similar.</p>
<h4 id="local-contrast-threshold">Local Contrast Threshold<a href="#local-contrast-threshold" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.50.</p>
<p>This parameter modifies the local contrast being applied to the image.  The result will be different depending on which <em>Equation</em> is being used.</p>
<p>Here is <em>Equation 2</em> with the <em>Local Contrast Threshold</em> reduced to 0.20:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq2_local_0.2.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin eqn 2 local contrast 0.20'>
<figcaption>
Ashikhmin - Eqn 2, Local Contrast Threshold 0.20<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Lower values will decrease the amount of local contrast in the final output.</p>
<p><em>Equation 4</em> with <em>Local Contrast Threshold</em> reduced to 0.20:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq4_local_0.2.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_-eq4_default.jpg" width='600' height='452' alt='Ashikhmin eqn 4 local contrast 0.20'>
<figcaption>
Ashikhmin - Eqn 4, Local Contrast Threshold 0.20<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="my-final-version-9">My Final Version<a href="#my-final-version-9" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>After playing with the options, I feel the best overall version comes from simply using the <i>Simple</i> option.  Further tweaking may be necessary to get usable results beyond this.</p>
<h3 id="pattanaik">Pattanaik<a href="#pattanaik" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This TMO appears to attempt to mimic the behavior of human eyes, borrowing terminology like “Rod” and “Cone”.  There are quite a few parameters to adjust if desired.  The default settings produce an image like this:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_default.jpg" alt="" height='' width=''>
<figcaption>
Default Pattanaik applied
</figcaption>
</figure>

<p>The default results are very desaturated and tend to blow out in the highlights.  The dark areas appear well exposed, with the problems (in my test HDR) being mostly constrained to the highlights.  At first glance, the results look like something that could be worked with.</p>
<p>There are quite a few different parameters for this TMO.  Let’s have a look at them:</p>
<h4 id="multiplier">Multiplier<a href="#multiplier" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.00.</p>
<p>This parameter appears to modify the overall contrast in the image.  Decreasing the value will decrease contrast, and vice versa.  It also appears to slightly modify the brightness in the image as well (pushing the highlights to a less blown-out value).  Here is the <em>Multiplier</em> decreased to 0.03:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_0.03_autolum.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik multiplier 0.03'>
<figcaption>
Pattanaik - Multiplier 0.03<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="local-tone-mapping">Local Tone Mapping<a href="#local-tone-mapping" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This option is just a checkbox, with no further controls.  The result is a washed-out image with heavy local contrast adjustments:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_1_local.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik local tone mapping'>
<figcaption>
Pattanaik - Local Tone Mapping<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="cone-rod-levels">Cone/Rod Levels<a href="#cone-rod-levels" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is to have <em>Auto Cone/Rod</em> checked, greying out the options to change the parameters manually.</p>
<p>Turning off <em>Auto Cone/Rod</em> will get the default manual values of 0.50 for both applied:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_1_cone_0.5_rod_0.5_.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik manual cone/rod 0.5 each'>
<figcaption>
Pattanaik - Manual Cone/Rod (0.50 for each)<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>The image gets very blown out everywhere, and modification of the Cone/Rod values does not significantly reduce brightness across the image.</p>
<h4 id="my-final-version">My Final Version<a href="#my-final-version" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Starting with the defaults, I reduced the <i>Multiplier</i> to bring the highlights under control.  This reduced contrast and saturation in the image.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_0.91_pattanaik00_mul_0.03_autolum_FINAL.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default-960.jpg" width='960' height='723' alt='Pattanaik final result'>
<figcaption>
My final Pattanaik - Multiplier 0.03, pre-gamma 0.91<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>To bring back contrast and some saturation, I decreased the pre-gamma to 0.91.  The results are not too far off the default settings, but could still use some further help with global contrast and saturation, and might benefit from layering or modifications in GIMP.</p>
<h2 id="closing-thoughts">Closing Thoughts<a href="#closing-thoughts" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Looking through all of the results shows just how different each TMO will operate across the same image.  Here are all of the final results in a single image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/All-Finals.png" alt="" height='1600' width='850' style='max-height: initial;'>
</figure>

<p>I personally like the results from Mantiuk ‘06.  The problem is that it’s still a little more extreme than I would care for in a final result.  For a really good, realistic result that I think can be massaged into a great image, I would go to Mantiuk ‘08 or Reinhard.</p>
<p>I could also do something with Fattal, but would have to tone a few things down a bit.</p>
<p>While you’re working, remember to occasionally open up the <strong>Levels Adjustment</strong> to keep an eye on the histogram.  Look for highlights blowing out, and shadows becoming too murky.  All the normal rules of image processing still apply here - so use them!</p>
<p>You’re trying to use HDR as a tool for you to capture more information, but remember to still keep it looking realistic.  If you’re new to HDR processing, then I can’t recommend enough to stop occasionally, get away from the monitor, and come back to look at your progress.</p>
<p>If it hurts your eyes, dial it all back.  Heck, if <em>you</em> think it looks good, <em><strong>still dial it back</strong></em> .</p>
<p>If I can head off even one clown-vomit image, then I’ll consider my mission accomplished with this post.</p>
<h3 id="a-couple-of-further-resources">A Couple of Further Resources<a href="#a-couple-of-further-resources" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Here are a few things I’ve found scattered around the internet if you want to read more.</p>
<ul>
<li><a href="http://osp.wikidot.com/parameters-for-photographers">The Open Source Photography wikidot</a> page has some information as well</li>
<li>Cambridge in Colour user David has written about many of the operators:<ul>
<li><a href="http://www.cambridgeincolour.com/forums/thread1513.htm">Mantiuk</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1625.htm">Fattal</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1499.htm">Drago</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1514.htm">Durand</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1630.htm">Reinhard 05</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1681.htm">Reinhard 02</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1651.htm">Ashikhmin</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1612.htm">Pattanaik</a></li>
</ul>
</li>
<li><a href="http://pallopanoraama.blogspot.com/2011/05/realistinen-tonemappaus-luminance-hdr.html">A little Finnish exploration</a> of global vs. local operators</li>
</ul>
<p>We also have a sub-category on the <a href="https://discuss.pixls.us">forums</a> dedicated entirely to LuminanceHDR and HDR processing in general: <a href="https://discuss.pixls.us/c/software/luminancehdr">https://discuss.pixls.us/c/software/luminancehdr</a>.</p>
<p>This tutorial was originally published <a href="http://blog.patdavid.net/2013/05/hdr-photography-with-foss-tools.html">here</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Libre Graphics Meeting London]]></title>
            <link>https://pixls.us/blog/2016/01/libre-graphics-meeting-london/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/01/libre-graphics-meeting-london/</guid>
            <pubDate>Fri, 08 Jan 2016 14:36:06 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/london-calling-2048.jpg" /><br/>
                <h1>Libre Graphics Meeting London</h1> 
                <h2>Join us in London for a PIXLS meet-up!</h2>  
                <p>We’re heading to London!</p>
<figure>
<a href='http://libregraphicsmeeting.org/2016/'>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/banner_glitch_1.png" alt='LGM/London Logo' />
</a>
</figure>

<p>I missed LGM last year in Toronto (having a baby - well, my wife was).
I <em>am</em> going to be there this year for <a href="http://libregraphicsmeeting.org/2016/">LGM/London</a>!</p>
<!-- more -->
<h2 id="help-support-us"><a href="#help-support-us" class="header-link-alt">Help Support Us</a></h2>
<p>I don’t ever do this normally, but you’ve got to start somewhere, right?</p>
<p>It’s my long-term desire to be able to hold a PIXLS meetup/event every year where the community can get together.
Where we can hold workshops, photowalks, and generally share knowledge and information.
For free, for anyone.</p>
<p><em>For now though, we need support.</em>
LGM is a great opportunity for us to meet, as many different projects usually have representatives there.  </p>
<p>Donations will help us to offset travel costs to attend LGM as well as a pre-LGM meetup we are holding (<a href="#pixls-meet-up">more below</a>).
Anything further will go to creating new content and to cover hosting costs for the site.</p>
<h3 id="pledgie"><a href="#pledgie" class="header-link-alt">Pledgie</a></h3>
<p>I have started a <a href="https://pledgie.com/campaigns/30905">Pledgie campaign</a> to help ease the solicitation of donations:<br><a href="https://pledgie.com/campaigns/30905">https://pledgie.com/campaigns/30905</a></p>
<p>Here’s the fancy little widget they make available:</p>
<p><a href='https://pledgie.com/campaigns/30905'><img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' style='width: initial;'></a></p>
<p>If you want to help by adding this button places, here’s the code to do it:</p>
<pre><code>&lt;a href=&#39;https://pledgie.com/campaigns/30905&#39;&gt;
&lt;img alt=&#39;Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !&#39; src=&#39;https://pledgie.com/campaigns/30905.png?skin_name=chrome&#39; border=&#39;0&#39; style=&#39;width: initial;&#39;&gt;
&lt;/a&gt;
</code></pre><p>Feel free to use it wherever you think it might help. :)</p>
<h3 id="paypal"><a href="#paypal" class="header-link-alt">PayPal</a></h3>
<p>You can also donate directly via <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=patdavid%40gmail%2ecom&amp;lc=US&amp;item_name=PIXLS%2eUS%20LGM%2FLondon&amp;item_number=pixls-london&amp;currency_code=USD&amp;bn=PP%2dDonationsBF%3abtn_donate_SM%2egif%3aNonHosted">PayPal</a> if you want:</p>
<p><a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=patdavid%40gmail%2ecom&amp;lc=US&amp;item_name=PIXLS%2eUS%20LGM%2FLondon&amp;item_number=pixls-london&amp;currency_code=USD&amp;bn=PP%2dDonationsBF%3abtn_donate_SM%2egif%3aNonHosted"><img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/donate.png" alt='Lend a hand via PayPal' style='width: 33%;'/></a></p>
<h3 id="awareness"><a href="#awareness" class="header-link-alt">Awareness</a></h3>
<p>I realize that not everyone will be able to donate funds.  No sweat!
If you’d still like to help out then perhaps you can help us raise awareness for the campaign?
The more folks that know about it the better!</p>
<p>Re-tweeting, blogging, linking, yelling on a street corner all help to raise awareness of what we are doing here.
Heck, just invite folks to come read and participate in the community.  Let’s help even more people learn about free software!</p>
<h2 id="come-join-us"><a href="#come-join-us" class="header-link-alt">Come Join Us</a></h2>
<p>Of course, even better if you are able to make your way to London and actually join us at the <a href="http://libregraphicsmeeting.org/2016/">Libre Graphics Meeting 2016</a>!</p>
<p>The event will be April 15<sup>th</sup> &mdash; 18<sup>th</sup>, hosted by <a href="http://www.westminster.ac.uk/about-us/faculties/media">Westminster School of Media Arts and Design</a>, University of Westminster at the Harrow Campus (red marker on the map).</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/d/embed?mid=zYKepeQNftPo.koxL6CFw1nPk" width="640" height="480" style='border: none;'></iframe>
</div>

<p>The little checkered flag on the map is for something really neat: a PIXLS meetup!</p>
<h3 id="pixls-meet-up"><a href="#pixls-meet-up" class="header-link-alt">PIXLS Meet Up</a></h3>
<p>I am going to arrive a day early so that we can have a gathering of PIXLS community folks and anyone else who wants to join us for some photographic fun!</p>
<p>Thanks to the local organizers in London (yay Lara!), we have facilities for us to use.
We will be meeting on Thursday, April 14<sup>th</sup> at the <a href="http://www.furtherfield.org/gallery/visit">Furtherfield Commons</a>.
The facilities will be available from 1000 &ndash; 1800 for us to use.</p>
<p><a href="http://www.furtherfield.org/gallery/visit">Furtherfield Commons</a><br>
Finsbury Gate &ndash; Finsbury Park<br>
Finsbury Park, London, N4 2NQ<br></p>
<p>As near as I can tell, here’s a street view of the Finsbury Gate:</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/embed?pb=!1m0!3m2!1sen!2sus!4v1452283931744!6m8!1m7!1sOP5bSwtG8XL-Rdoz2M-RyQ!2m2!1d51.56506385511825!2d-0.1037885701573437!3f315.2912956391929!4f-1.9344543679182067!5f0.7820865974627469" width="600" height="450" frameborder="0" style="border:0" allowfullscreen></iframe>
</div>

<p>I believe the <a href="http://www.furtherfield.org/gallery/visit">Commons</a> building is just inside this gate, and on the left.</p>
<p>In 2014 I held a photowalk with LGM attendees in Leipzig the day before the event that was great fun.
Let’s expand the idea and do even more!</p>
<figure>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/nikolaikirche.jpg" alt='Nikolaikirche, Leipzig, LGM 2014'/>
<figcaption>
Nikolaikirche, Leipzig, from the 2014 LGM photowalk.<br/>
(That’s houz in the bottom right)
</figcaption>
</figure>

<p>Here’s a Flickr <a href="https://www.flickr.com/photos/patdavid/albums/72157643712169045">album of my images from LGM2014 in Leipzig</a>:</p>
<figure>
<a data-flickr-embed="true" data-header="true" data-footer="true"  href="https://www.flickr.com/photos/patdavid/albums/72157643712169045" title="LGM2014"><img src="https://farm8.staticflickr.com/7214/13781228444_956fcee5ef_z.jpg" width="640" height="640" alt="LGM2014"></a><script async src="https://embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
</figure>

<p>This year I plan on bringing a model along to shoot while we are out and about (my friend <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846">Mairi</a> if she’s available - or a local model if not).
I will also be doing a photowalk again, either in the morning or afternoon.</p>
<p>I am also looking for folks from the community to suggest holding their own photoshoots or workshops, so please step forward and let me know if you’d be interested in doing something!
The facilities have bench seating for approximately 20 people, a big desk, and a projector as well.</p>
<p>Three things that I personally will be doing are (in no particular order):</p>
<ul>
<li>Natural + flash portraits and model shooting workshop.</li>
<li>Photowalk around the park + surrounding environs.</li>
<li>Portraits + architectural photos for Furtherfield (the hosts).</li>
</ul>
<p>I am hoping to possibly record some of these workshops and interactions for posterity and others that might not be able to make it to London.
It might be fun to record some shoots for the community to be able to use!</p>
<p>I am also 100% open to suggestions for content that you, the community, might be interested in seeing.
If you have something you’d like me to try (and record), please let me know!</p>
<figure>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/mairi-troisieme.jpg" alt='Mairi Troisieme'/>
<figcaption>
Hopefully <a href='https://www.flickr.com/photos/patdavid/16259030889/in/album-72157632799856846/'>Mairi</a> will be able to make it to London to model for us!
</figcaption>
</figure>



  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[darktable 2.0]]></title>
            <link>https://pixls.us/blog/2015/12/darktable-2-0/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/12/darktable-2-0/</guid>
            <pubDate>Fri, 25 Dec 2015 02:56:56 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/12/darktable-2-0/Lying in Ambush.jpg" /><br/>
                <h1>darktable 2.0</h1> 
                <h2>An awesome present for the end of 2015!</h2>  
                <style>
li {  margin-bottom: 0.25rem; }
ul + h3 { margin-top: 1.5rem; }
</style>

<p>Sneaking a release out on Christmas Eve, the <a href="https://www.darktable.org">darktable</a> team have announced their feature release of <a href="https://www.darktable.org/2015/12/darktable-2-0-released/">darktable 2.0</a>!
After quite a few months of release candidates, 2.0 is finally here.
Please join me in saying <em><strong>Congratulations</strong></em> and a hearty <em><strong>Thank You!</strong></em> for all of their work bringing this release to us.</p>
<!-- more -->
<p>Alex Prokoudine of <a href="http://libregraphicsworld.org">Libre Graphics World</a> has a more <a href="http://libregraphicsworld.org/blog/entry/darktable-2-0-released-with-printing-support">in-depth look at the release</a> including a nice interview with part of the team: Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.  My favorite tidbit from the interview:</p>
<blockquote>
<p>There is a lot less planning involved than many might think.</p>
<div style="text-align: right; font-size: 0.85rem;">&mdash; Tobias Ellinghaus</div>
</blockquote>
<p><a href="https://www.roberthutton.net/">Robert Hutton</a> has taken the time to produce a <a href="https://www.youtube.com/watch?v=VJbJ0btlui0">video covering the new features</a> and other changes between 1.6 and 2.0 as well:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/VJbJ0btlui0" frameborder="0" allowfullscreen></iframe>
</div>

<p>A high-level look at the changes and improvements from the <a href="https://www.darktable.org/2015/12/darktable-2-0-released/">release post on the darktable site</a>:</p>
<h3 id="gui-"><a href="#gui-" class="header-link-alt">gui:</a></h3>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>the viewport in darkroom mode is now dynamically sized; you specify the border width</li>
<li>side panels now default to a width of 350px in dt 2.0 instead of 300px in dt 1.6</li>
<li>further hidpi enhancements</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>brush size/hardness/opacity have key accels</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>the facebook login procedure is a little different now</li>
<li>image information now supports gps altitude</li>
</ul>
<h3 id="features-"><a href="#features-" class="header-link-alt">features:</a></h3>
<ul>
<li>new print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>delete/trash feature</li>
<li>pdf export</li>
<li>export can upscale</li>
<li>new “mode” parameter in the export panel to fine tune application of styles upon export</li>
</ul>
<h3 id="core-improvements-"><a href="#core-improvements-" class="header-link-alt">core improvements:</a></h3>
<ul>
<li>new thumbnail cache replaces mipmap cache (much improved speed, stability and seamless support for even up to 4K/5K screens)</li>
<li>all thumbnails are now properly fully color-managed</li>
<li>it is now possible to generate thumbnails for all images in the library using new darktable-generate-cache tool</li>
<li>we no longer drop history entries above the selected one when leaving darkroom mode or switching images</li>
<li>high quality export now downsamples before watermark and framing to guarantee consistent results</li>
<li>optimizations to loading JPEGs when using libjpeg-turbo with its custom features</li>
<li>asynchronous camera and printer detection, prevents deadlocks in some cases</li>
<li>noiseprofiles are in external JSON file now</li>
<li>aspect ratios for crop&amp;rotate can be added to config file</li>
</ul>
<h3 id="image-operations-"><a href="#image-operations-" class="header-link-alt">image operations:</a></h3>
<ul>
<li>color reconstruction module</li>
<li>magic lantern-style deflicker was added to the exposure module (extremely useful for timelapses)</li>
<li>text watermarks</li>
<li>shadows&amp;highlights: add option for white point adjustment</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in white balance iop</li>
<li>monochrome raw demosaicing (for cameras with color filter array physically removed)</li>
<li>raw black/white point module</li>
</ul>
<h3 id="packaging-"><a href="#packaging-" class="header-link-alt">packaging:</a></h3>
<ul>
<li>removed dependency on libraw</li>
<li>removed dependency on libsquish (solves patent issues as a side effect)</li>
<li>unbundled pugixml, osm-gps-map and colord-gtk</li>
</ul>
<h3 id="generic-"><a href="#generic-" class="header-link-alt">generic:</a></h3>
<ul>
<li>32-bit support is soft-deprecated due to limited virtual address space</li>
<li>support for building with gcc earlier than 4.8 is soft-deprecated</li>
<li>numerous memory leaks were exterminated</li>
<li>overall stability enhancements</li>
</ul>
<h3 id="scripting-"><a href="#scripting-" class="header-link-alt">scripting:</a></h3>
<ul>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started: <a href="https://github.com/darktable-org/lua-scripts">https://github.com/darktable-org/lua-scripts</a></li>
<li>it is now possible to edit the collection filters via lua</li>
<li>it is now possible to add new cropping guides via lua</li>
<li>it is now possible to run background tasks in lua</li>
<li>a lua event is generated when the image under the mouse cursor changes</li>
</ul>
<p>The source is <a href="https://www.darktable.org/install/">available now</a> as well as a .dmg for OS X.<br>Various Linux distro builds are either already available or will be soon!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Let's Encrypt!]]></title>
            <link>https://pixls.us/blog/2015/12/let-s-encrypt/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/12/let-s-encrypt/</guid>
            <pubDate>Tue, 15 Dec 2015 18:53:26 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/12/let-s-encrypt/LE.jpg" /><br/>
                <h1>Let's Encrypt!</h1> 
                <h2>Also a neat 2.5D parallax video for Wikipedia.</h2>  
<p>I finally got off my butt and put a process in place to obtain and renew security certificates from Let’s Encrypt for both <a href="https://pixls.us">pixls.us</a> and <a href="https://discuss.pixls.us">discuss.pixls.us</a>.
I also did some (<em>more</em>) work with <a href="https://commons.wikimedia.org/wiki/User:Victorgrigas">Victor Grigas</a> and <a href="http://www.wikipedia.org">Wikipedia</a> to support their <a href="https://www.youtube.com/watch?v=Rm1LKcHD1VE">#Edit2015</a> video this year.</p>
<!-- more -->
<h2 id="wikipedia-edit2015"><a href="#wikipedia-edit2015" class="header-link-alt">Wikipedia #Edit2015</a></h2>
<p>Last year, I did some 2.5D parallax animations for Wikipedia to help with their first-ever <a href="http://blog.wikimedia.org/2014/12/17/wikipedias-first-ever-annual-video-reflects-contributions-from-people-around-the-world/">end-of-the-year retrospective video</a> (<a href="http://blog.patdavid.net/2014/12/wikipedia-edit2014-video.html">see the blog post from last year</a>).
Here is the retrospective from #Edit2014:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/ci0Pihl2zXY?rel=0" frameborder="0" allowfullscreen></iframe>
</div>


<p>So it was an honor to hear from <a href="https://commons.wikimedia.org/wiki/User:Victorgrigas">Victor Grigas</a> again this year!
This time around there was a neat new crop of images he wanted to animate for the video.
Below you’ll find my contributions (they were all used in the final edit, just shortened to fit appropriately):</p>
<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146782845?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div>
<figcaption>
<a href="https://vimeo.com/146782845">Wiki #Edit2015 Bel</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.
</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146784000?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146784000">Wiki #Edit2015 Je Suis Charlie</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146790790?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146790790">Wiki #Edit2015 Samantha Cristoforetti Nimoy Tribute</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146791049?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146791049">Wiki #Edit2015 SCOTUS LGBQT</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<p>Here is the final cut of the video, just released today:</p>
<figure class='big-vid'>
<div class='fluid-vid'>
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/Rm1LKcHD1VE?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</figure>

<p>Victor chose some really neat images that were fun to work on!
Of course, all free software was used in this creation (<a href="https://www.gimp.org">GIMP</a> for cutting up the images into sections and rebuilding textures as needed and <a href="http://www.blender.org">Blender</a> for re-assembling the planes and animating the camera movements).
I had previously <a href="http://blog.patdavid.net/2014/02/25d-parallax-animated-photo-tutorial.html">written a tutorial</a> on doing this with free software on my blog.</p>
<p>You can <a href="http://blog.wikimedia.org/2015/12/15/edit2015/">read more on the wikimedia.org blog</a>!</p>
<h2 id="new-certificates"><a href="#new-certificates" class="header-link-alt">New Certificates</a></h2>
<p><img src="https://pixls.us/blog/2015/12/let-s-encrypt/letsencrypt-logo-horizontal.png" alt="Let's Encrypt Logo" style='width:initial;' width='550' height='131'/></p>
<p>Yes, this is not very exciting, I’ll concede.
I think it <em>is</em> important, though.</p>
<p>I recently took advantage of my beta invite to <a href="https://letsencrypt.org">Let’s Encrypt</a>.
It’s a certificate authority, founded by the <a href="https://www.eff.org/">Electronic Frontier Foundation</a>, <a href="https://www.mozilla.org">Mozilla</a>, and the <a href="https://www.umich.edu/">University of Michigan</a>, that provides free X.509 certificates to domain owners.</p>
<p>The key principles behind <em>Let’s Encrypt</em> are:</p>
<ul>
<li><strong>Free:</strong> Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.</li>
<li><strong>Automatic:</strong> Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.</li>
<li><strong>Secure:</strong> Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.</li>
<li><strong>Transparent:</strong> All certificates issued or revoked will be publicly recorded and available for anyone to inspect.</li>
<li><strong>Open:</strong> The automatic issuance and renewal protocol will be published as an open standard that others can adopt.</li>
<li><strong>Cooperative:</strong> Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.</li>
</ul>
<p>It was relatively painless to obtain the certs.
I only had to run their client, which uses the ACME protocol to verify domain ownership by placing a file in my web root.
Once the certs were generated I only had to make some small changes for it to work automatically on <a href="https://discuss.pixls.us">https://discuss.pixls.us</a>.
(And to automatically get picked up when I update the certs within 90 days).</p>
<p>I still had to manually copy/paste the certs into cpanel for <a href="https://pixls.us">https://pixls.us</a>, though.
Not automated (<em>or elegant</em>) but it works and only takes an extra moment to do.</p>
<!-- more -->
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Users Guide to High Bit Depth GIMP 2.9.2, Part 2]]></title>
            <link>https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/</link>
            <guid isPermaLink="true">https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/</guid>
            <pubDate>Wed, 02 Dec 2015 18:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/flying-bird-between-trees.jpg" /><br/>
                <h1>Users Guide to High Bit Depth GIMP 2.9.2, Part 2</h1> 
                <h2>Part 2: Radiometrically correct editing, unbounded ICC profile conversions, and unclamped editing</h2>  
                <p class='aside'>
This is Part 2 of a two-part guide to high bit depth editing in GIMP 2.9.2 with Elle Stone.
The first part of this article can be found here: <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/"><em>Part 1</em></a>.
</p>


<h3 id="contents">Contents<a href="#contents" class="header-link"><i class="fa fa-link"></i></a></h3>
<ol class='toc'>
<li><a href="#radiometrically-correct-editing">Using GIMP 2.9.2 for radiometrically correct editing</a>

    <ol>
    <li><a href="#linearized-srgb-channel-values-and-radiometrically-correct-editing">Linearized sRGB channel values and radiometrically correct editing</a></li>
    <li><a href="#using-the-linear-light-option-in-the-image-precision-menu">Using the “Linear light” option in the “Image/Precision” menu</a></li>
    <li><a href="#a-note-on-interoperability-between-krita-and-gimp">A note on interoperability between Krita and GIMP</a></li>
    </ol>
</li>

<li><a href="#gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-">GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)</a></li>

<li><a href="#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Using GIMP 2.9.2’s floating point precision for unclamped editing</a>

    <ol>
    <li><a href="#high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities">High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities</a></li>
    <li><a href="#if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision">If the thought of working with unclamped RGB data is unappealing, use integer precision</a></li>
    </ol>
</li>

<li>
<a href="#looking-to-the-future-gimp-3-0-and-beyond">Looking to the future: GIMP 3.0 and beyond</a>
</li>
</ol>


<hr>
<h2 id="radiometrically-correct-editing">Radiometrically correct editing<a href="#radiometrically-correct-editing" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="linearized-srgb-channel-values-and-radiometrically-correct-editing">Linearized sRGB channel values and radiometrically correct editing<a href="#linearized-srgb-channel-values-and-radiometrically-correct-editing" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One goal for GIMP 2.10 is to make it easy for users to produce radiometrically correct editing results. “Radiometrically correct editing” reflects the way light and color combine out there in the real world, and so requires that the relevant editing operations be done on linearized RGB.</p>

<p>Like many commonly used RGB working spaces, the sRGB color space is encoded using perceptually uniform RGB. Unfortunately colors simply don’t blend properly in perceptually uniform color spaces. So when you open an sRGB image using GIMP 2.9.2 and start to edit, in order to produce radiometrically correct results, many GIMP 2.9 editing operations will silently linearize the RGB channel information before the editing operation is actually done.</p>

<p>GIMP 2.9.2 editing operations that automatically linearize the RGB channel values include scaling the image, Gaussian blur, Unsharp Mask, Channel Mixer, Auto Stretch Contrast, decomposing to LAB and LCH, all of the LCH blend modes, and quite a few other editing operations.</p>

<p>GIMP 2.9.2 editing operations that <a title="GIMP bug report:  Curves and Levels should operate by default on linear RGB and present linear RGB Histograms" href="https://bugzilla.gnome.org/show_bug.cgi?id=757444">ought to, but don’t yet, linearize the RGB channels include the all-important Curves and Levels operations.</a> For Levels and Curves, to operate on linearized RGB, change the precision to “Linear light” and use the Gamma hack. However, <a title="Jpeg attachment to bug757444 illustrating the problem. with the Curves histogram" href="https://bug757444.bugzilla-attachments.gnome.org/attachment.cgi?id=314590">the displayed histogram will be misleading</a>.</p>

<p>The GIMP 2.9.2 editing operations that automatically linearize the RGB channel values do this regardless of whether you choose “Perceptual gamma (sRGB)” or “Linear light” precision. The only thing that changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions is <em>how colors blend when painting and when blending different layers together</em>.</p>

<p>(Well, what the Gamma hack actually does changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions, but the way it changes varies from one operation to the next, which is why I advise to not use the Gamma hack unless you know exactly what you are doing.)</p>

<h3 id="using-the-linear-light-option-in-the-image-precision-menu">Using the “Linear light” option in the “Image/Precision” menu<a href="#using-the-linear-light-option-in-the-image-precision-menu" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure class='big-vid' style='max-width:768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/normal-blend-perceptual-vs-linear-cyan-background.jpg" alt="normal-blend-perceptual-vs-linear-cyan-background">
<figcaption><strong>Large soft disks painted on a cyan background.</strong><br/>
 <ol><li><i>Top row:</i> Painted using “Perceptual gamma (sRGB)” precision. Notice the darker colors surrounding the red and magenta disks, and the green surrounding the yellow disk: those are “gamma” artifacts.</li> <li><i>Bottom row:</i> Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.</li></ol>
</figcaption>
</figure>

<figure class='big-vid' style='max-width: 768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/normal-blend-perceptual-vs-linear.jpg" alt="normal-blend-perceptual-vs-linear">
<figcaption><strong>Circles painted on a red background.</strong><br/>
 <ol><li><i>Top row:</i> Painted using “Perceptual gamma (sRGB)” precision. The dark edges surrounding the paint strokes are “gamma” artifacts.</li> <li><i>Bottom row:</i> Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.</li></ol>
</figcaption>
</figure>

<p>In GIMP 2.9.2, when using the Normal, Multiply, Divide, Addition, and Subtract painting and Layer blend modes:</p>
<ul class="double-space">
<li>For radiometrically correct Layer blending and painting, use the “Image/Precision” menu to select the “Linear light” precision option.</li> 

<li>When “Perceptual gamma (sRGB)” is selected, layers and colors will blend and paint like they blend in GIMP 2.8, which is to say there will be “gamma” artifacts.</li> </ul>

<p>The LCH painting and Layer blend modes will <em>always</em> blend using Linear light precision, regardless of what you choose in the “Image/Precision” menu.</p>

<p>What about all the other Layer and painting blend modes? The concept of “radiometrically correct” doesn’t really apply to those other blend modes, so choosing between “Perceptual gamma (sRGB)” and “Linear light” depends entirely on what you, the artist or photographer, actually want to accomplish. Switching back and forth is time-consuming so I tend to stay at “Linear light” precision all the time, unless I really, really, really want a blend mode to operate on perceptually uniform RGB.</p>

<h3 id="a-note-on-interoperability-between-krita-and-gimp">A note on interoperability between Krita and GIMP<a href="#a-note-on-interoperability-between-krita-and-gimp" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Many digital artists and photographers are switching to linear gamma image editing. Let’s say you use Krita for digital painting in a true linear gamma sRGB profile, specifically <a title="Krita/Manual/ColorManagement, section on Linear and Gamma corrected colours. The whole tutorial is very well worth reading." href="https://userbase.kde.org/Krita/Manual/ColorManagement">the “sRGB-elle-V4-g10.icc” profile that is supplied with recent Krita installations</a>, and you want to export your image from Krita and open it with GIMP 2.9.2.</p> 

<p>Upon opening the image, GIMP will automatically detect that the image is in a linear gamma color space, and will offer you the option to keep the embedded profile or convert to the GIMP built-in sRGB profile. Either way, GIMP will automatically mark the image as using “Linear light” precision.</p> 

<p>For interoperability between Krita and GIMP, when editing a linear gamma sRGB image that was exported to disk by Krita:</p> 
<ol>
<li>Upon importing the Krita-exported linear gamma sRGB image into GIMP, elect to <em>keep</em> the embedded “sRGB-elle-V4-g10.icc” profile.</li> 
<li><em>Keep the precision at “Linear light”</em>. </li>
<li>Then <em>assign</em> the GIMP built-in Linear RGB profile (“Image/Color management/Assign”). The GIMP built-in Linear RGB profile is functionally exactly the same as Krita’s supplied “sRGB-elle-V4-g10.icc” profile (as are the GIMP built-in sRGB profile and Krita’s “sRGB-elle-V4-srgbtrc.icc” profile).</li></ol>

<p>Once you’ve assigned the GIMP built-in Linear RGB profile to the imported linear gamma sRGB Krita image, then feel free to change the precision back and forth between “Linear light” and “Perceptual gamma (sRGB)”, as suits your editing goal.</p>

<p>When you are finished editing the image that was imported from Krita to GIMP:</p>

<ol>
<li>Convert the image to one of the “Perceptual gamma (sRGB)” precisions (“Image/Precision”).</li>
<li>Convert the image to the Krita-supplied “sRGB-elle-V4-g10.icc” profile (“Image/Color management/Convert”).</li>
<li>Export the image to disk and import it into Krita.</li>
</ol>

<p>If your Krita image is in a color space other than sRGB, I would suggest that you simply not try to edit non-sRGB images in GIMP 2.9.2 because many GIMP 2.9.2 editing operations do depend on hard-coded sRGB color space parameters.</p>


<h2 id="gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-">GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)<a href="#gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Compared to most other RGB color spaces, the sRGB color space gamut is very small. When shooting raw, it’s <a title="Nine Degrees Below Photography: Photographic colors that exceed the very small sRGB color gamut" href="http://ninedegreesbelow.com/photography/srgb-versus-photographic-colors.html">incredibly easy to capture colors that exceed the sRGB color space</a>.</p> 

<figure class='big-vid' style='max-width: 768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/srgb-inside-prophoto-3-views.jpg" alt="srgb-inside-prophoto-3-views">
<figcaption><strong>The sRGB (the gray blob) and ProPhotoRGB (the multicolored wire-frame) color spaces as seen from different viewing angles inside the CIELAB reference color space.</strong> <em>(Images produced using ArgyllCMS and View3DScene).</em></figcaption>
</figure>


<p>Every time you convert saturated colors from larger gamut RGB working spaces to GIMP’s built-in sRGB working space <em>using floating point precision</em>, you run the risk of producing out of gamut RGB channel values. Rather than just explaining how this works, it’s better if you experiment and see for yourself:</p>

<ol class="double-space">
<li>Download this 16-bit integer ProPhotoRGB png, “<a href="http://ninedegreesbelow.com/photography/gimp/users-guide/saturated-colors.png">saturated-colors.png</a>”.</li>

<li>Open “saturated-colors.png” with GIMP 2.9.2. GIMP will report the color space profile as “LargeRGB-elle-V4-g18.icc” — this profile is functionally equivalent to ProPhotoRGB.</li>

<li>Immediately change the precision to 32-bit floating point (“Image/Precision/32-bit floating point”) and check the “Perceptual gamma (sRGB)” option.</li>

<li>Using the Color Picker Tool, make sure the Color Picker is set to “Use info Window” in the Tools dialog, and set one of the columns in the Color Picker info Window to “Pixel”. Then eyedropper the color squares. The red square will eyedropper as (1.000000, 0.000000, 0.000000), the cyan square as (0.000000, 1.000000, 1.000000), and so on. All the channel values will be either 1.000000 or 0.000000.</li>

<li>While still at 32-bit floating point precision, and still using the “Perceptual gamma (sRGB)” option, convert “saturated-colors.png” to GIMP’s built-in sRGB.</li>

<li>Eyedropper the color squares again. The red square will now eyedropper as approximately (1.363299, -2.956852, -0.110389), the cyan square will eyedropper as approximately (-13.365499, 1.094588, 1.003746), and so on.</li> 

<li>For extra credit, change the precision from 32-bit floating point “Perceptual gamma (sRGB)” to 32-bit floating point “Linear light” and eyedropper the colors again. I will leave it to you as an exercise to figure out why the eyedroppered RGB “Pixel” values change so radically when you switch back and forth between “Perceptual gamma (sRGB)” and “Linear light”.</li>

</ol>

<p>Where did the funny RGB channel values come from? At floating point precision, GIMP uses LCMS2 to do <a title="Nine Degrees Below Photography: LCMS2 Unbounded ICC Profile Conversions" href="http://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html"><i>unbounded</i> ICC profile conversions</a>. This allows an RGB image to be converted from the source to the destination color space without clipping otherwise out of gamut colors. So instead of clipping the RGB channels values to the <a title="Nine Degrees Below Photography: What are 'Clipped Colors' from ICC Profile Conversions?" href="http://ninedegreesbelow.com/photography/icc-profile-conversion-clipped-colors-examples.html">boundaries of the very small sRGB color gamut</a>, the sRGB color gamut was effectively “unbounded”.</p>
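<p>To see roughly where such values come from, here is a deliberately simplified Python sketch of an unbounded conversion of pure ProPhotoRGB red into linear sRGB. It uses commonly published RGB-to-XYZ matrices and skips chromatic adaptation (ProPhotoRGB is D50, sRGB is D65), so the numbers will not match GIMP/LCMS2 exactly — but the signs tell the story: an out-of-gamut color lands on negative sRGB channel values.</p>

```python
# Unbounded conversion sketch: ProPhotoRGB red -> XYZ -> linear sRGB.
# Chromatic adaptation is deliberately omitted for brevity, so the
# exact numbers are illustrative only.

PROPHOTO_TO_XYZ = [
    [0.7977, 0.1352, 0.0313],
    [0.2880, 0.7119, 0.0001],
    [0.0000, 0.0000, 0.8249],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

prophoto_red = [1.0, 0.0, 0.0]
srgb_linear = apply(XYZ_TO_SRGB, apply(PROPHOTO_TO_XYZ, prophoto_red))
print(srgb_linear)  # roughly [2.14, -0.23, -0.01]: R > 1.0, G and B < 0.0
```

<p>Because the conversion is unbounded, nothing clamps the result: the red channel overshoots 1.0 and the green and blue channels go negative, exactly the pattern the eyedropper reveals in the experiment above.</p>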

<p>When you do an unbounded ICC profile conversion from a larger color space to sRGB, all the otherwise out of gamut colors are encoded using at least one sRGB channel value that is less than zero. And you might get one or more channel values that are greater than 1.0. Figure 11 below gives you a visual idea of the difference between bounded and unbounded ICC profile conversions:</p> 

<figure class='big-vid' style="max-width: 769px;">
<img width="769" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/red-flower-clipping-prophoto-to-srgb.jpg" alt="red-flower-clipping-prophoto-to-srgb">
<figcaption><strong>Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space.</strong> <em>(Images produced using ArgyllCMS and View3DScene).</em><br/><br/>

<ul>
<li><i>Top row:</i> Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space. The unclipped flower is on the left and the clipped flower is on the right.</li>

<li><i>Middle and bottom rows:</i> the unclipped and clipped flower colors in the sRGB color space. The unclipped colors are shown on the left and the clipped colors are shown on the right: <ul> <li class="none">The gray blobs are the boundaries of the sRGB color gamut.</li>
<li>The middle row shows the view inside CIELAB looking straight down the LAB Lightness axis.</li> 
<li>The bottom row shows the view inside CIELAB looking along the plane formed by the LAB A and B axes.</li></ul></li>
</ul>

The unclipped sRGB colors shown on the left are all encoded using at least one sRGB channel value that is less than zero, that is, using a negative RGB channel value.
</figcaption>
</figure>


<p>When converting saturated colors from larger color spaces to sRGB, not clipping would seem to be much better than clipping. Unfortunately a whole lot of RGB editing operations don’t work when performed on negative RGB channel values. In particular, <a title="Nine Degrees Below Photography: Multiplying out of gamut colors in the unbounded sRGB color space produces meaningless results" href="http://ninedegreesbelow.com/photography/unbounded-srgb-multiply-produces-meaningless-results.html">multiplying such colors produces meaningless results</a>, which of course applies not just to the Multiply and Divide blend modes (division and multiplications are inverse operations), but to <em>all</em> editing operations that involve multiplication by a color (other than gray, which is a special case).</p>
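<p>A tiny demonstration of why multiplication breaks down (illustrative Python, not GIMP code): multiplying two negative channel values yields a positive one, so “less than no light” silently becomes real light.</p>

```python
# Multiply blend on an in-gamut pair vs. an unbounded out-of-gamut
# color multiplied by itself.  Negative channels flip sign under
# multiplication, which is physically meaningless.

def multiply_blend(a, b):
    return tuple(x * y for x, y in zip(a, b))

in_gamut  = multiply_blend((0.5, 0.5, 0.5), (0.5, 0.2, 0.0))
out_gamut = multiply_blend((1.36, -2.96, -0.11), (1.36, -2.96, -0.11))
print(in_gamut)   # (0.25, 0.1, 0.0): sensible darkening
print(out_gamut)  # (1.8496, 8.7616, 0.0121): negatives became large positives
```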

<p>So here’s one workaround you can use to clip the out of gamut channel values: Change the precision of “saturated-colors.png” from 32-bit floating point to 32-bit <i>integer</i> precision (“Image/Precision/32-bit integer”). This will clip the out of gamut channel values (integer precision always clips out of gamut RGB channel values). Depending on your monitor profile’s color gamut, you might or might not see the displayed colors change appearance; on a wide-gamut monitor, the change will be obvious.</p> 

<p>When switching to integer precision, all colors are <em>clipped</em> to fit within the sRGB color gamut. Switching back to floating point precision won’t restore the clipped colors.</p>

<aside class="more"><h4>More about out of gamut channel values</h4>

<p>Editing operations that only use add/subtract (which are inverse of each other), and/or multiply/divide by gray (where R=G=B), work just fine on colors that are encoded using one or more negative channel values. Almost all of the problems with <a title="Nine Degrees Below Photography: Using unbounded sRGB as a universal color space for image editing is a really bad idea" href="http://ninedegreesbelow.com/photography/unbounded-srgb-as-universal-working-space.html">unbounded sRGB image editing</a> have to do with editing operations that use multiply and divide.</p>
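<p>A small numeric sketch (plain Python, invented values) shows why these particular operations stay well behaved: add and subtract round-trip, and multiplying by a gray is a uniform scale that preserves the ratios between channels even when one channel is negative.</p>

```python
def add(rgb, k):
    """Add a constant to every channel (subtract by adding -k)."""
    return tuple(v + k for v in rgb)

def multiply_by_gray(rgb, k):
    # A gray color has R == G == B == k, so channel-wise multiplication
    # is just a uniform scale; the color's hue direction is preserved.
    return tuple(v * k for v in rgb)

color = (-0.2, 0.6, 0.3)  # out of gamut: one negative channel

# add then subtract recovers the original color (up to float rounding),
# negative channels or not:
print(add(add(color, 0.25), -0.25))

# multiplying by 50% gray halves every channel uniformly:
print(multiply_by_gray(color, 0.5))  # (-0.1, 0.3, 0.15)
```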

<p>I’m glossing over the difference between “out of gamut and encoded using at least one negative channel value” and “in gamut high dynamic range colors”, which are encoded using at least one channel value that is &gt;1.0, but no channel value that is &lt;0.0. In this latter case the color is inside the sRGB color gamut for HDR editing, but it falls outside the “0.0 to 1.0” floating point range for <a title="Nine Degrees Below Photography: Models for image editing: Display-referred and scene-referred" href="http://ninedegreesbelow.com/photography/display-referred-scene-referred.html">display-referred editing.</a></p>
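<p>The distinction reduces to a couple of per-channel tests, sketched here in plain Python (using the terms exactly as defined in this aside):</p>

```python
def classify(rgb):
    """Classify an RGB triplet per the distinction drawn above (a sketch)."""
    if any(v < 0.0 for v in rgb):
        return "out of gamut"       # at least one negative channel
    if any(v > 1.0 for v in rgb):
        return "in-gamut HDR"       # no negative channel, but one above 1.0
    return "display-referred"       # every channel within 0.0..1.0

print(classify((-0.1, 0.5, 0.2)))  # out of gamut
print(classify((0.4, 1.8, 0.9)))   # in-gamut HDR
print(classify((0.4, 0.8, 0.9)))   # display-referred
```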
</aside>

<p>As an important aside (and contrary to a distressingly popular assumption), when doing a normal “bounded” conversion to sRGB, <a title="Nine Degrees Below Photography: ICC Profile Conversion Intents" href="http://ninedegreesbelow.com/photography/icc-profile-conversion-intents.html">using “Perceptual intent” does <em>not</em> “keep all the colors”</a>. The regular and linear gamma sRGB working color space profiles are matrix profiles, which don’t have perceptual intent tables. When you ask for perceptual intent and the destination profile is a matrix profile, what you get is relative colorimetric intent, which clips.</p>
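<p>The fallback described here can be summarized in a few lines of pseudocode-style Python (the function and argument names are invented for illustration; this is not the actual code of any CMM):</p>

```python
def effective_intent(requested, dest_has_perceptual_tables):
    # Matrix profiles, such as the regular and linear gamma sRGB working
    # space profiles, carry no perceptual-intent lookup tables, so the
    # CMM silently substitutes relative colorimetric, which clips.
    if requested == "perceptual" and not dest_has_perceptual_tables:
        return "relative colorimetric"
    return requested

print(effective_intent("perceptual", dest_has_perceptual_tables=False))
# relative colorimetric
```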


<h2 id="using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Using GIMP 2.9.2’s floating point precision for unclamped editing<a href="#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities">High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities<a href="#high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve warned you about the bad things that can happen when you try to multiply or divide colors that are encoded using negative sRGB channel values. However, out of gamut sRGB channel values can also be incredibly useful.</p> 

<p>GIMP 2.9.2 does provide a number of “unclamped” editing operations from which the clipping code in the equivalent GIMP 2.8 operation has been removed. For example, at floating point precision, the Levels upper and lower sliders, Unsharp Mask, Channel Mixer and “Colors/Desaturate/Luminance” do not clip out of gamut RGB channel values (however, Curves does clip). Also the Normal, Lightness, Chroma, and Hue blend modes do not clip out of gamut channel values. </p> 

<p>Unclamped editing opens up a whole realm of new editing possibilities. Quoting from <a title="Nine Degrees Below Photography: tutorial on using high bit depth GIMP's new LCH blend modes and unclamped editing operations." href="http://ninedegreesbelow.com/photography/high-bit-depth-gimp-tutorial-edit-tonality-color-separately.html">Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities</a>:</p>

<blockquote>
<p>Unclamped editing operations might sound more arcane than interesting, but especially for photographers this is a really big deal:</p>
<ul>
    <li>Automatically clipped RGB data produces lost detail and causes hue and saturation shifts.</li>
    <li>Unclamped editing operations allow you, the photographer, to choose when and how to bring the colors back into gamut.</li>
    <li>Of interest to photographers and digital artists alike, unclamped editing sets the stage for (and already allows very rudimentary) HDR scene-referred image editing.</li></ul>
</blockquote>

<p>Having used high bit depth GIMP for quite a while now, I can’t imagine going back to editing that is constrained to only using clipped RGB channel values. The <cite>Autumn colors</cite> tutorial provides a start-to-finish editing example making full use of unclamped editing and the LCH blend modes, with a downloadable XCF file so you can follow along.</p>


<h3 id="if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision">If the thought of working with unclamped RGB data is unappealing, use integer precision<a href="#if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If working with unclamped RGB channel data is simply not something you want to do, then use integer precision for all your image editing. At integer precision <i>all</i> editing operations clip. This is a function of integer encoding and so happens regardless of whether the particular editing function includes or doesn’t include clipping code.</p>
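<p>Why integer precision always clips follows from the encoding itself: an unsigned integer code has no representation for values below 0.0 or above 1.0. Here is a small plain Python sketch (8-bit for brevity; GIMP’s 16-bit and 32-bit integer precisions behave analogously):</p>

```python
def to_uint8(v):
    """Encode a floating point channel value as an unsigned 8-bit code.
    Values outside 0.0..1.0 have no possible code, so they are clamped."""
    return max(0, min(255, round(v * 255)))

print(to_uint8(-0.2))  # 0    (negative channel: clipped to black)
print(to_uint8(1.5))   # 255  (HDR channel: clipped to white)
print(to_uint8(0.25))  # 64   (in-range channel: encoded normally)
```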

<h2 id="looking-to-the-future-gimp-3-0-and-beyond">Looking to the future: GIMP 3.0 and beyond<a href="#looking-to-the-future-gimp-3-0-and-beyond" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Even though GIMP 2.10 hasn’t yet been released, high bit depth GIMP is already an amazing image editor. GIMP 3.0 and beyond will bring many more changes, including the port to GTK+3 (for GIMP 3.0), full color management for any well-behaved RGB working space (maybe by 3.2?), plus extended LCH processing with HSV strictly for use with legacy files. Also users will eventually have the ability to choose “Perceptual” encodings other than the sRGB TRC.</p> 

<p>If you would like to see GIMP 3.0 and beyond arrive sooner rather than later, GIMP is coded, documented, and maintained by volunteers, and GIMP needs more developers. If you are not a programmer, there are <a title="GIMP website: Ways to contribute to GIMP development" href="http://www.gimp.org/develop/">many other ways you can contribute to GIMP development.</a></p>

<p><small><strong>All text and images &copy;2015 <a href="http://ninedegreesbelow.com/">Elle Stone</a>, all rights reserved.</strong></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Happy Birthday GIMP!]]></title>
            <link>https://pixls.us/blog/2015/11/happy-birthday-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/11/happy-birthday-gimp/</guid>
            <pubDate>Wed, 25 Nov 2015 13:25:15 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/lede_Mimir.jpg" /><br/>
                <h1>Happy Birthday GIMP!</h1> 
                <h2>Also, wallpapers and darktable 2.0 creeps even closer!</h2>  
                <p>I got busy building a <a href="https://www.gimp.org">birthday present for a project</a> I work with and all sorts of neat things happened in my absence!
The <a href="http://www.ubuntu.com/">Ubuntu</a> <a href="https://wiki.ubuntu.com/UbuntuFreeCultureShowcase"><em>Free Culture Showcase</em></a> chose winners for its wallpaper contest for <a href="http://releases.ubuntu.com/15.10/">Ubuntu 15.10</a> ‘Wily Werewolf’ (and quite a few community members were among those chosen).</p>
<p>The <a href="http://www.darktable.org">darktable</a> crew is speeding along to a 2.0 release with a new <a href="https://pixls.us/blog/2015/11/happy-birthday-gimp/#darktable-2-0-rc2">RC2 being released</a>.</p>
<p>Also, a great big <a href="https://pixls.us/blog/2015/11/happy-birthday-gimp/#gimp-birthday"><strong>HAPPY 20<sup>th</sup> BIRTHDAY GIMP</strong></a>!
I made you a present.  I hope it fits and you like it! :)</p>
<!-- more -->
<h2 id="ubuntu-wallpapers"><a href="#ubuntu-wallpapers" class="header-link-alt">Ubuntu Wallpapers</a></h2>
<p>Back in early September I <a href="https://discuss.pixls.us/t/ubuntu-free-culture-showcase/382">posted on discuss</a> about the <a href="https://wiki.ubuntu.com/UbuntuFreeCultureShowcase">Ubuntu Free Culture Showcase</a> that was looking for wallpaper submissions from the free software community to coincide with the release of Ubuntu 15.10 ‘Wily Werewolf’.
The winners were recently chosen from among the submissions and several of our community members had their images chosen!</p>
<p>The winning entries from our community include:</p>
<figure class='big-vid'>
<a href='https://www.flickr.com/photos/carmelo75/21455138181' title='Moss inflorescence by carmelo75 on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/carmelo75.jpg" alt='Moss inflorescence by carmelo75'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/carmelo75/21455138181"><em>Moss inflorescence</em></a><br/>
The first winner is from <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a> creator <a href="http://photoflowblog.blogspot.com">Andrea Ferrero</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/40792319@N04/20651557934' title='Light my fire, evening sun by Dariusz Duma on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Dariusz.jpg" alt='Light my fire, evening sun by Dariusz Duma'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/40792319@N04/20651557934"><em>Light my fire, evening sun</em></a><br/>
by <a href="https://www.flickr.com/photos/40792319@N04/">Dariusz Duma</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/philipphaegi/21155753321' title='Sitting Here, Making Fun by Philipp Haegi on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Mimir.jpg" alt='Sitting Here, Making Fun by Philipp Haegi'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/philipphaegi/21155753321"><em>Sitting Here, Making Fun</em></a><br/>
by <a href="https://www.flickr.com/photos/philipphaegi/">Mimir</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/patdavid/4624063643' title='Tranquil by Pat David'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Pat.jpg" alt='Tranquil by Pat David'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/4624063643"><em>Tranquil</em></a><br/>
by <a href="https://www.flickr.com/photos/patdavid/">Pat David</a>
</figcaption>
</figure>

<p>A big congratulations to you all for some amazing images being chosen!
If you’re running Ubuntu 15.10, you can grab the <code>ubuntu-wallpapers</code> package to <a href="https://launchpad.net/ubuntu/wily/+source/ubuntu-wallpapers">get these images right here</a>!</p>
<h2 id="darktable-2-0-rc2"><a href="#darktable-2-0-rc2" class="header-link-alt">darktable 2.0 RC2</a></h2>
<p>Hot on the heels of the prior release candidate, <a href="http://www.darktable.org">darktable</a> now <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc2">has an RC2 out</a>.
There are many minor bugfixes from the previous RC1, such as:</p>
<ul>
<li>high iso fix for exif data of some cameras</li>
<li>various macintosh fixes (fullscreen)</li>
<li>fixed a deadlock</li>
<li>updated translations</li>
</ul>
<p>The preliminary changelog from the 1.6.x series:</p>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)</li>
<li>added print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>removed dependency on libraw</li>
<li>removed dependency on libsquish (solves patent issues as a side effect)</li>
<li>unbundled pugixml, osm-gps-map and colord-gtk</li>
<li>text watermarks</li>
<li>color reconstruction module</li>
<li>raw black/white point module</li>
<li>delete/trash feature</li>
<li>addition to shadows&amp;highlights</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in WB iop</li>
<li>noiseprofiles are in external JSON file now</li>
<li>monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)</li>
<li>aspect ratios for crop&amp;rotate can be added to conf (ae36f03)</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>pdf export – some changes might happen there still</li>
<li>brush size/hardness/opacity have key accels</li>
<li>the facebook login procedure is a little different now</li>
<li>export can upscale</li>
<li>we no longer drop history entries above the selected one when leaving dr or switching images</li>
<li>text/font/color in watermarks</li>
<li>image information now supports gps altitude</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>new “mode” parameter in the export panel</li>
<li>high quality export now downsamples before watermark and frame to guarantee consistent results</li>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started.</li>
</ul>
<p>More information and packages can be <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc2">found on the darktable github repository</a>.</p>
<p>Remember, updating from the currently stable 1.6.x series is a one-way street for your edits (no downgrading from 2.0 back to 1.6.x).</p>
<h2 id="gimp-birthday"><a href="#gimp-birthday" class="header-link-alt">GIMP Birthday</a></h2>
<p>All together now…</p>
<p><em>Happy Birthday to GIMP!  Happy Birthday to GIMP!</em>…</p>
<figure>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/wilber-big.png" alt='GIMP Wilber Big Icon'/>
<figcaption>
</figcaption>
</figure>

<p>This past weekend <a href="https://www.gimp.org">GIMP</a> celebrated its 20<sup>th</sup> anniversary!
It was twenty years ago on November 21<sup>st</sup> that Peter Mattis <a href="http://www.gimp.org/about/prehistory.html#november-1995-an-announcement">announced the availability</a> of the <strong>“General Image Manipulation Program”</strong> on <em>comp.os.linux.development.apps</em>.</p>
<p>Twenty years later and GIMP doesn’t look a day older than a 1.0 release!
(Yes, there’s a <a href="https://en.wikipedia.org/wiki/Double_entendre">double entendre</a> there).</p>
<p>To celebrate, I’ve been spending the past couple of months getting a brand new website and infrastructure built for the project!
<small><em>Just in case anyone was wondering where I was or why I was so quiet.</em></small>
I like the way it turned out and is shaping up so go have a look if you get a moment!</p>
<p>There’s even an <a href="http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/">official news post</a> about it on the new site!</p>
<h3 id="gimp-2-8-16"><a href="#gimp-2-8-16" class="header-link-alt">GIMP 2.8.16</a></h3>
<p>To coincide with the 20<sup>th</sup> anniversary, the team also released a new stable version in the 2.8 series: <a href="http://www.gimp.org/downloads/">2.8.16</a>.
Head over to the downloads page to pick up a copy!!</p>
<h2 id="new-photoflow-tutorial"><a href="#new-photoflow-tutorial" class="header-link-alt">New PhotoFlow Tutorial</a></h2>
<p>Still working hard and fast on <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, <a href="http://photoflowblog.blogspot.com">Andrea</a> took some time to record a new video tutorial.
He walks through some basic usage of the program, in particular opening an image, adding layers and layer masks, and saving the results.
Have a look and if you have a moment give him some feedback!</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/HQpyJapbxrY?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Andrea is working on PhotoFlow at a very fast pace, so expect some more news about his progress very soon!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[News from the World of Tomorrow]]></title>
            <link>https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/</guid>
            <pubDate>Mon, 02 Nov 2015 13:50:17 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/gmic_peppers.jpg" /><br/>
                <h1>News from the World of Tomorrow</h1> 
                <h2>And more awesome updates!</h2>  
                <p>Some awesome updates from the community and activity over on <a href="https://discuss.pixls.us">the forums</a>!
People have been busy doing some really neat things (that really never fail to astound me).
The level of expertise we have floating around on so many topics is quite inspiring.</p>
<div class='fluid-vid'>
<iframe width="480" height="360" src="https://www.youtube-nocookie.com/embed/aiwA0JrGfjA?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br style="clear:both;"/></p>
<!-- more -->
<h2 id="darktable-2-0-release-candidate"><a href="#darktable-2-0-release-candidate" class="header-link-alt">darktable 2.0 Release Candidate</a></h2>
<h3 id="towards-a-better-darktable-"><a href="#towards-a-better-darktable-" class="header-link-alt">Towards a Better darktable!</a></h3>
<p>A nice Halloween weekend gift for the F/OSS photo community from <a href="http://www.darktable.org">darktable</a>: a first Release Candidate for a 2.0 release is now available!</p>
<p><a href="http://houz.org/">Houz</a> made the announcement on the forums this past weekend, including some caveats. (Edits will be preserved when upgrading, but it won’t be possible to downgrade back to 1.6.x.)</p>
<p>Preliminary notes from houz (and <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc1">Github</a>):</p>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)</li>
<li>added print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>text watermarks</li>
<li>color reconstruction module</li>
<li>raw black/white point module</li>
<li>delete/trash feature</li>
<li>addition to shadows&amp;highlights</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in WB iop</li>
<li>noiseprofiles are in external JSON file now</li>
<li>monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)</li>
<li>aspect ratios for crop&amp;rotate can be added to conf (ae36f03)</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>pdf export – some changes might happen there still</li>
<li>brush size/hardness/opacity have key accels</li>
<li>the facebook login procedure is a little different now</li>
<li>export can upscale</li>
<li>we no longer drop history entries above the selected one when leaving dr or switching images</li>
<li>text/font/color in watermarks</li>
<li>image information now supports gps altitude</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>we renamed mipmaps to thumbnails in the preferences</li>
<li>new “mode” parameter in the export panel</li>
<li>high quality export now downsamples before watermark and frame to guarantee consistent results</li>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started.</li>
</ul>
<p><br style="clear:both;"/></p>
<h2 id="g-mic-1-6-7"><a href="#g-mic-1-6-7" class="header-link-alt">G’MIC 1.6.7</a></h2>
<p>Because apparently David Tschumperlé doesn’t sleep, a new release of <a href="http://gmic.eu">G’MIC</a> was <a href="https://discuss.pixls.us/t/release-of-gmic-1-6-7/426">recently announced</a> as well!
This release includes a really neat new patch-based texture resynthesizer that David has been playing with for a while now.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/gmic_syntexturize_patch.jpg" alt="G'MIC Syntexturize Patch" width='960' height='661' />
<figcaption>
Re-synthesizing an input texture to an output of arbitrary size.
</figcaption>
</figure>

<p>It will build an output texture of arbitrary size based on an input texture (and can result in some neat looking peppers apparently).</p>
<p>Speaking of G’MIC…</p>
<h3 id="g-mic-for-adobe-after-effects-and-premier-pro"><a href="#g-mic-for-adobe-after-effects-and-premier-pro" class="header-link-alt">G’MIC for Adobe After Effects and Premiere Pro</a></h3>
<p>Yes, I know it’s Adobe.
Still, I can’t help but think that this might be an awesome way to introduce some people to the amazing work being done by so many F/OSS creators.</p>
<p>Tobias Fleischer announced on <a href="https://discuss.pixls.us/t/gmic-for-adobe-after-effects-and-premiere-pro/452">this post</a> that he has managed to get G’MIC working with After Effects and Premiere Pro.
Even some of the more intensive filters like skeleton and Rodilius appear to be working fine (if a bit sluggish)!</p>
<figure class='big-vid'>
<img src='https://discuss.pixls.us/uploads/default/original/1X/fdef471a204c3f300f2bc435cf01ea64bb6b2b52.png' alt="Adobe After Effects G'MIC" />
</figure>


<h2 id="photoflow"><a href="#photoflow" class="header-link-alt">PhotoFlow</a></h2>
<p>You might remember <a href="http://photoflowblog.blogspot.ch/">PhotoFlow</a> as the project that creator <a href="http://photoflowblog.blogspot.com/">Andrea Ferrero</a> used when writing his <a href="https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/">Blended Panorama Tutorial</a> from a few months ago.
What you might not realize is that Andrea has also been working at a furious pace improving PhotoFlow (indeed it feels like every few days he is announcing new improvements - almost as fast as G’MIC!).</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/photoflow-persp-original.png" alt="PhotoFlow Perspective Correction Original" width='960' height='541' />
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/photoflow-persp-corrected.png" alt="PhotoFlow Perspective Correction Corrected" width='960' height='541' />
<figcaption>
Example of PhotoFlow perspective correction.
</figcaption>
</figure>

<p>His latest release was <a href="https://discuss.pixls.us/t/release-of-photoflow-version-0-2-3/476">announced a few days ago</a> as 0.2.3.
He’s incorporated some nice new improvements in this version:</p>
<ul>
<li>the addition of the <strong>LMMSE demosaicing</strong> method, directly derived from the algorithm implemented in RawTherapee</li>
<li>an <strong>impulse noise</strong> (also known as <strong>salt&amp;pepper</strong>) reduction tool, again derived from RawTherapee. It effectively reduces isolated bright and dark pixels.</li>
<li>a <strong>perspective correction</strong> tool, derived from darktable. It can simultaneously correct horizontal and vertical perspective as well as tilting, and works interactively.</li>
</ul>
<p>Head on over to the <a href="http://photoflowblog.blogspot.com/">PhotoFlow Blog</a> to check things out!</p>
<h2 id="lightzone-4-1-3-released"><a href="#lightzone-4-1-3-released" class="header-link-alt">LightZone 4.1.3 Released</a></h2>
<p>We don’t hear as often from folks using <a href="http://lightzoneproject.org/">LightZone</a>, but that doesn’t mean they’re not working on things!
In fact, Doug Pardee just stopped by the forums a while ago to <a href="https://discuss.pixls.us/t/lightzone-4-1-3-released/447">announce a new release</a> is available, 4.1.3.
(Bonus fun - read that topic to see the <a href="http://opensource.org/licenses/BSD-3-Clause"><em>Revised BSD License</em></a> go flying right over my head!)</p>
<p>Head over to <a href="http://lightzoneproject.org/content/september-27-2015-lightzone-v413-now-available">their announcement</a> to see what they’re up to.</p>
<h2 id="rapid-photo-downloader"><a href="#rapid-photo-downloader" class="header-link-alt">Rapid Photo Downloader</a></h2>
<p>We also had the developer of <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a>, Damon Lynch, <a href="https://discuss.pixls.us/t/feedback-wanted-about-rapid-photo-downloader/463">stop by the forums to solicit feedback</a> from users just the other day.
A nice discussion ensued and is well worth reading (or even contributing to!).</p>
<p>Damon is working hard on the next release of RPD (apparently the biggest update since the project’s inception in 2007!), so go show some support and provide some feedback for him.</p>
<h2 id="rawtherapee-forum"><a href="#rawtherapee-forum" class="header-link-alt">RawTherapee Forum</a></h2>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/1X/b5a07c7985e481a95344c2f0e4d6c2a2cac0bda0.png' alt="RawTherapee Logo"/>
</figure>

<p>The <a href="http://rawtherapee.com/">RawTherapee</a> team is testing out having a <a href="https://discuss.pixls.us/c/software/rawtherapee">forum over here on discuss</a> as well (we welcomed the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC community</a> a little while ago).
This is currently an alternate forum for the project (which <em>may</em> become the official forum in the future).
The category is quiet as we only just set it up, so drop by and say hello!</p>
<p>Speaking of RawTherapee…</p>
<h2 id="lede-image"><a href="#lede-image" class="header-link-alt">Lede Image</a></h2>
<p>I want to thank <a href="http://www.londonlight.org/">Morgan Hardwood (LondonLight.org)</a> for providing us a wonderful view of Röstånga, Sweden as a background image on the <a href="https://pixls.us/">main page</a>.</p>
<figure class='big-vid'>
<img src='https://pixls.us/images/main-lede/2015-06-06_rostanga_-_2.jpg' alt='Rostanga by Morgan Hardwood LondonLight.org'/>
<figcaption>
Röstånga by <a href="http://www.londonlight.org">Morgan Hardwood</a> 
<a class="cc" href="https://creativecommons.org/licenses/by-sa/4.0/">cba</a>
</figcaption>
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Users Guide to High Bit Depth GIMP 2.9.2, Part 1]]></title>
            <link>https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/</link>
            <guid isPermaLink="true">https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/</guid>
            <pubDate>Sun, 01 Nov 2015 18:00:00 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/flying-bird-between-trees.jpg" /><br/>
                <h1>Users Guide to High Bit Depth GIMP 2.9.2, Part 1</h1> 
                <h2>Part 1: New high bit depth precision options, new color space algorithms, and new color management options</h2>  
                <!-- ## New high bit depth precision options, New color management options, New algorithms -->
<h3 id="contents">Contents<a href="#contents" class="header-link"><i class="fa fa-link"></i></a></h3>
<ol class='toc'>
    <li><a href="#introduction-high-bit-depth-gimp-2-9-2">Introduction: high bit depth GIMP 2.9.2</a>

        <ol>
        <li><a href="#purpose-of-this-guide">Purpose of this guide</a></li>
        <li><a href="#useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux">Useful links: the official GIMP website, builds for Windows and MAC, building GIMP on Linux</a></li>
        <li><a href="#editing-in-srgb-vs-editing-in-other-color-spaces">Editing in sRGB vs editing in other color spaces</a></li>
        <li><a href="#a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations">A note about the “Gamma hack” that’s provided for many editing operations</a></li>
        </ol></li>

    <li><a href="#new-high-bit-depth-precision-options">New high bit depth precision options</a>

        <ol>
        <li><a href="#menu-for-choosing-the-image-precision">Menu for choosing the image precision</a></li>
        <li><a href="#which-precision-should-you-choose-for-editing-">Which precision should you choose for editing?</a></li>
        <li><a href="#using-the-image-precision-options-when-exporting-an-image-to-disk">Using the image precision options when exporting an image to disk</a></li>
        </ol></li>

    <li><a href="#new-color-management-options">New color management options</a>

        <ol>
        <li><a href="#gimp-2-9-2-automatically-detects-camera-dcf-information">GIMP 2.9.2 automatically detects camera DCF information</a></li>
        <li><a href="#black-point-compensation">Black point compensation</a></li>
        </ol></li>

    <li><a href="#new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch">New and updated algorithms for converting to Luminance, LAB, and LCH</a>

        <ol>
        <li><a href="#converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance">Converting sRGB images from Color to Black and White using Luma and Luminance</a></li>
        <li><a href="#decomposing-from-srgb-to-lab">Decomposing from sRGB to LAB</a></li>
        <li><a href="#lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-">LCH: the actually usable replacement for the entirely inadequate color space known as “HSV”</a></li>
        </ol></li>
</ol>

<hr>
<h2 id="introduction-high-bit-depth-gimp-2-9-2">Introduction: high bit depth GIMP 2.9.2<a href="#introduction-high-bit-depth-gimp-2-9-2" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="purpose-of-this-guide">Purpose of this guide<a href="#purpose-of-this-guide" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>As announced on the GIMP users and developers mailing lists, the recent (November 26, 2015) GIMP 2.9.2 release is <a title="GIMP user's mailing list: ANNOUNCE: GIMP 2.9.2 released" href="https://mail.gnome.org/archives/gimp-user-list/2015-November/msg00066.html">the first development release in the GIMP 2.9.x series leading to GIMP 2.10</a>. The release announcement summarizes the many code changes that were made to port the old GIMP code over to GEGL’s high bit depth processing. </p>
<p>This user’s guide to high bit depth GIMP 2.9.2 introduces you to some of high bit depth GIMP’s new editing capabilities that are made possible by GEGL’s high bit depth processing. The guide also points out a few “gotchas” that you should be aware of. Please keep in mind that GIMP 2.9 really is a development branch, so many things don’t yet work exactly like they will work when GIMP 2.10 is released. </p>
<h3 id="useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux">Useful links: the official GIMP website, builds for Windows and MAC, building GIMP on Linux<a href="#useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux" class="header-link"><i class="fa fa-link"></i></a></h3>
<ul>
<li><a title="The official GIMP (Gnu Image Manipulation Program) website" href="http://www.gimp.org/">GIMP website</a></li>
<li><a title="GIMP and GEGL mailing lists and IRC" href="http://www.gimp.org/mail_lists.html">GIMP IRC and mailing list information</a></li>
<li><a title="Partha's Place" href="http://partha.com/">Partha’s GIMP 2.9 builds for Windows and MAC</a>, including a portable Windows build of my patched GIMP plus information on compiling GIMP on Windows. </li>
<li>Precompiled versions of high bit depth GIMP are more or less widely available for the various Linux operating systems. If you run Linux and you’d like to compile high bit depth GIMP yourself, <a title="Nine Degrees Below Photography: Guide to building GIMP on Linux" href="http://ninedegreesbelow.com/photography/build-gimp-in-prefix-for-artists.html">Building GIMP for artists and photographers</a> has step-by-step instructions.</li>
</ul>

<p>High bit depth GIMP is a work in progress. If you read the release notes for GIMP 2.9.2, you already know that the primary goal for the GIMP 2.10 release is full “Geglification” of the GIMP code base. </p>
<h3 id="editing-in-srgb-vs-editing-in-other-color-spaces">Editing in sRGB vs editing in other color spaces<a href="#editing-in-srgb-vs-editing-in-other-color-spaces" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For best results when using GIMP 2.9.2, <strong><em>only edit sRGB images</em></strong>. </p>
<p>GIMP 2.8 has hard-coded sRGB parameters that make many editing operations produce wrong results for images that are in RGB working spaces other than sRGB. GIMP 2.9.2 still has these hard-coded sRGB parameters. Almost certainly GIMP 2.10 also will have these same hard-coded sRGB parameters. </p>
<p>Full support for editing images in other RGB working spaces won’t happen
at least until GIMP 3.0, and maybe not until some time after GIMP 3.0.
The next big change for GIMP will be the change-over from GTK+2 to
GTK+3, which is a pretty critical step to make as GTK+2 is on the verge
of being retired. GIMP development is a volunteer effort, porting GIMP
over to GEGL has required an enormous amount of work, and porting from
GTK+2 to GTK+3 isn’t exactly a trivial task. <a title="Hacking:Developer FAQ" href="http://wiki.gimp.org/wiki/Hacking:Developer_FAQ">More GIMP developers would help a lot</a>, so if you have any coding skills, please consider volunteering.</p>
<p>If you really do want to edit in color spaces other than sRGB “right now”, and you are comfortable building GIMP from git, <a title="Nine Degrees Below Photography: Patching GIMP for artists and photographers" href="http://ninedegreesbelow.com/photography/patch-gimp-in-prefix-for-artists.html">my patched version of GIMP 2.9</a> is hard-coded to use the much larger Rec.2020 color space, and it should be obvious how to modify the patches for other RGB working spaces.</p>
<h3 id="a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations">A note about the “Gamma hack” that’s provided for many editing operations<a href="#a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure>
<img width="374" height="282" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gamma-hack.png" alt="Desaturate dialog with Gamma hack" />
</figure>

<p>A “Gamma hack” option is provided by many GIMP 2.9.2 editing operations. This option sits next to some text that says “(temp hack, please ignore)”. Unless you know exactly what you are doing, you really are better off not using the Gamma hack.</p>
<h2 id="new-high-bit-depth-precision-options">New high bit depth precision options<a href="#new-high-bit-depth-precision-options" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="menu-for-choosing-the-image-precision">Menu for choosing the image precision<a href="#menu-for-choosing-the-image-precision" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>As shown by the screenshot below, GIMP 2.9.2 offers six different image precisions:</p>
<ul><li>Three <em>integer</em> precisions: 8-bit integer, 16-bit integer, and 32-bit integer.</li> 
<li>Three <em>floating point</em> precisions: 16-bit floating point, 32-bit floating point, and 64-bit floating point.</li></ul>

<figure class=''>
<img width="739" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/precision-menu.png" alt="Precision Menu" >
<figcaption>
<strong>Menu for choosing the image precision.</strong> <br/>
<span style="font-weight: normal;">(The “Perceptual gamma (sRGB)” and “Linear light” switches are explained in <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/#radiometrically-correct-editing">Part 2 of this article, under “Radiometrically correct editing”</a>)</span>.
</figcaption>
</figure>



<h3 id="which-precision-should-you-choose-for-editing-">Which precision should you choose for editing?<a href="#which-precision-should-you-choose-for-editing-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you have a fast computer with a lot of RAM, I recommend that you always promote your images to 32-bit floating point before you begin editing. Here’s why:</p>
<ol class="double-space ">
<li><b>Regardless of which precision you choose, all babl/GEGL/GIMP <i>internal</i> processing is done at 32-bit floating point</b>. Read that sentence three times.</li>

<li><b>There seems to be a <a title="GIMP bug report: Use 32-bit floating-point linear by default unless 8-bit" href="https://bugzilla.gnome.org/show_bug.cgi?id=734657">small speed penalty for <em>not</em> using 32-bit floating point precision</a>.</b></li>

<li><b>The Precision menu options dictate <strong>how much RAM is used to store</strong> the results of internal calculations:</b> 
<ul><li>Choosing 32-bit floating point precision allows you to take full advantage of GEGL’s 32-bit floating point processing.</li>
<li>If you are working on a lower-RAM machine, performance will benefit from using 16-bit floating point or integer precision, but of course the price is a loss in precision as new editing operations use the results of previous edits as stored in memory.</li>

<li>On very low RAM systems, performance will benefit even more from using 8-bit integer precision. But if you use 8-bit integer precision, you are throwing away most of the advantages of working with a high bit depth image editor.</li>

<li>64-bit precision is made available mostly to accommodate importing and exporting very high bit precision images for scientific editing.  <em>You don’t gain any computational precision from using 64-bit precision for actual editing</em>. If you choose 64-bit precision for editing, all you are really doing is wasting system RAM resources.</li></ul>
</li>

</ol>

<p>As discussed in <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Part 2 of this article, “Using GIMP 2.9.2’s floating point precision for unclamped editing”</a> (and depending on your editing style and goals), instead of 32-bit floating point precision, sometimes you might prefer using 16-bit or 32-bit <em>integer</em> precision. But making full use of all of high bit depth GIMP’s new editing capabilities does require using floating point precision. </p>
<div class="more"><p>Sometimes people assume that floating point is “more precise” than integer, but this isn’t actually true: At any given bit-depth, integer precision is more precise than floating point precision, but uses about the same amount of RAM:</p>
<ul class="double-space"><li>16-bit integer precision is <em>more</em> precise than 16-bit floating point precision, and the two precisions use about the same amount of RAM.</li>
<li>32-bit integer is <em>more</em> precise than 32-bit floating point precision, and the two precisions use about the same amount of RAM. </li>
</ul>

<p>GEGL/GIMP’s internal processing uses 32-bit floating point precision, so both of GIMP’s 32-bit precisions actually provide the same degree of precision.</p>
</div>
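<p>The integer-versus-float trade-off above is easy to check numerically. Here is a minimal sketch using NumPy’s <code>float16</code> as a stand-in for GIMP’s 16-bit floating point format (this illustrates the general behavior of the formats, not GIMP’s internal code):</p>

```python
import numpy as np

x = 0.9999                             # a tone just below white
as_int16 = round(x * 65535) / 65535    # 16-bit integer quantization
as_fp16 = float(np.float16(x))         # 16-bit float quantization

# 16-bit integer resolves uniform steps of 1/65535 across [0, 1];
# 16-bit float has a 10-bit mantissa, so its step just below 1.0 is
# 2**-11 (about 0.00049) -- coarser in the highlights, finer near black.
print(abs(x - as_int16))   # quantization error around 7e-6
print(abs(x - as_fp16))    # quantization error around 1e-4
```

<p>In other words, for tones near white, 16-bit integer keeps roughly 30 times more precision than 16-bit float at the same storage cost.</p>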



<h3 id="using-the-image-precision-options-when-exporting-an-image-to-disk">Using the image precision options when exporting an image to disk<a href="#using-the-image-precision-options-when-exporting-an-image-to-disk" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The precision menu options have another extremely important use beside dictating the precision with which the results of editing operations are held in RAM. When you export the image to disk, the precision options allow you to change the bit depth of the exported image.</p>
<p>For example, some image editors can’t read floating point tiffs. So if you want to export an image as a tiff file that will be opened in another image editor that can only read 8-bit and 16-bit integer tiffs, and your GIMP XCF layer stack is currently using 32-bit floating point precision, you might want to change the XCF layer stack precision to 16-bit integer before exporting the tiff. </p>
<p>After exporting the image, don’t forget to hit “Undo” (“Edit/Undo”, or just use the Ctrl+Z keyboard shortcut) to get back to 32-bit floating point precision (or whatever other precision you were using).</p>
<h2 id="new-color-management-options">New color management options<a href="#new-color-management-options" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="gimp-2-9-2-automatically-detects-camera-dcf-information">GIMP 2.9.2 automatically detects camera DCF information<a href="#gimp-2-9-2-automatically-detects-camera-dcf-information" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For reasons only the camera manufacturers know, instead of embedding a proper ICC profile in camera-saved jpegs, usually they embed <a title="Nine Degrees Below Photography: What is embedded color profile information?" href="http://ninedegreesbelow.com/photography/embedded-color-space-information.html">“DCF” and “maker note”</a> information. Whenever a camera manufacturer offers the option to embed a color space that isn’t officially supported by the DCF/Exif standards, each manufacturer feels free to improvise with new tags. </p>
<p>GIMP 2.9.2 does detect and assign the correct color space for most camera-saved jpegs. Like all editing software, GIMP has to play “catch up” with new tags for new color spaces offered by new camera models.</p>
<p>Tell your camera manufacturer that you want proper ICC profiles embedded in your camera-saved jpegs.</p>
<h3 id="black-point-compensation">Black point compensation<a href="#black-point-compensation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Unlike GIMP 2.8, GIMP 2.9 does offer black point compensation as an explicit option, and it’s enabled by default.</p>
<figure>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gimp292-preferences-color-management.png" alt="GIMP 2.9.2 color management preferences">
<img width="453" class="imgcenter" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gimp28-preferences-color-management.png" alt="GIMP 2.8 color management preferences"> 
<figcaption>
<strong>GIMP 2.9 offers black point compensation as an explicit option.</strong><br/>
As an aside, GIMP 2.8 actually did offer black point compensation, but in a very round-about way: In GIMP 2.8, if you used the default “Perceptual intent” for the Display rendering intent, then black point compensation was <em>dis</em>abled. And if you chose “Relative colorimetric” for the Display rendering intent, then black point compensation was <em>en</em>abled.</figcaption>
</figure>

<p>Even though black point compensation is checked by default in GIMP 2.9.2, whether you should use black point compensation partly depends on the color management settings provided by the other imaging software that you routinely use. For example, <a title="Nine Degrees Below Photography: Viewing Photographs on the Web" href="http://ninedegreesbelow.com//galleries/viewing-photographs-on-the-web.html">Firefox doesn’t provide for black point compensation</a>. As far as I can tell, neither RawTherapee nor darktable does. If one of your goals is to make sure that images look the same as displayed in various applications, you need to <a title="GIMP bug report: Gimp changes contrast and color of images" href="https://bugzilla.gnome.org/show_bug.cgi?id=723498">make sure all the relevant color management settings match</a>.</p>
<p>What is black point compensation? LCD monitors can’t display “zero light”. There’s always some minimum amount of light coming from the screen. Fill your screen with a solid black image, turn out all the lights and close the doors and curtains, and you’ll see what I mean.</p>
<p>Black point compensation compensates for the fact that RGB working spaces like sRGB allow you to produce colors (for example solid black) that are darker than your monitor can actually display. GIMP uses the LCMS black point compensation algorithm, which very sensibly scales the image tonality so that “solid black” in the image file maps to “darkest dark” in the monitor profile’s color gamut.</p>
<figure>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/zero-nonzero-black-points.png" alt="Zero non-zero black points">
<figcaption><strong>Non-zero and zero black points</strong> <em>(images produced using icc_examin and ArgyllCMS)</em>.</figcaption>
</figure>

<p>However, depending on your monitor profile, using or not using black point compensation might not make any difference at all. The only time black point compensation makes a difference is if the Monitor profile you choose in “Preferences/Color management” actually does have a “higher than zero” black point. </p>
<p class="more">Why some monitor profiles do and some don’t have “higher than zero” black points is beyond the scope of this tutorial. Suffice it to say that a very accurate LCD monitor profile will always have a higher than zero black point. But sometimes, and especially for consumer-grade monitors, a very accurate monitor profile will make displayed images look worse than they will when using a less accurate monitor profile.</p>


<h2 id="new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch">New and updated algorithms for converting to Luminance, LAB, and LCH<a href="#new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance">Converting sRGB images from Color to Black and White using Luma and Luminance<a href="#converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Under “Colors/Desaturate”, GIMP 2.8 offers three options for converting an sRGB image to black and white: Lightness, Luminosity, and Average:</p>
<ol>
<li>The “Lightness” option adds the lowest and highest RGB channel values and divides the result by two.</li>
<li>The “Luminosity” option is equal to (the Red channel times 0.213) plus (the Green channel times 0.715) plus (the Blue channel times 0.072).</li>
<li>The “Average” option sums all three RGB channel values and divides the result by three.</li>
</ol>

<p>GIMP 2.9.2 still offers all three options for converting an sRGB image to black and white. But the “Luminosity” option has been renamed <a title="Wikipedia: Luma (video)" href="https://en.wikipedia.org/wiki/Luma_%28video%29">Luma</a>, which is the technically correct term (<a title="Wikipedia: Luminosity (disambiguation)" href="https://en.wikipedia.org/wiki/Luminosity_%28disambiguation%29">though various image editors use the term “Luminosity” in various incorrect ways</a>). </p> 
<p>Also GIMP 2.9.2’s “Luma” option uses slightly different multipliers for calculating Luma, being (the Red channel times 0.222) plus (the Green channel times 0.717) plus (the Blue channel times 0.061). The GIMP 2.8 multipliers were wrong and the GIMP 2.9 multipliers are correct.</p>

<p class="more">Since I know you won’t be able to get any sleep until someone tells you why the multipliers for calculating Luma were changed, the GIMP 2.9 multipliers have been Bradford-adapted from D65 to D50, which is required for use in an ICC profile color-managed editing application (at least until the next version of the ICC specs is released and people figure out how to deal with the new freedom to use non-D50 reference white points).</p>

<p style="text-indent: 0;">GIMP 2.9.2 also offers a fourth option for converting sRGB images to black and white, which is “Luminance”. “Luminance” is short for <a title="Wikipedia: Relative Luminance" href="https://en.wikipedia.org/wiki/Relative_luminance">relative luminance</a>. Luminance is calculated using the same channel multipliers that are used to calculate Luma. The mathematical difference between calculating Luma and Luminance is as follows:</p> 
<ul>
<li>Luma is calculated using RGB channel values that are encoded using the sRGB TRC.</li>
<li>Luminance is calculated using linearized RGB channel values, producing a radiometrically correct and physically meaningful conversion from color to black and white.</li></ul>

<p>Of the various options in the “Colors/Desaturate” menu, “Luminance” is the only physically meaningful way to convert from color to black and white.</p> <p>The Red, Blue, and Green Luma and Luminance channel multipliers are specific to the sRGB color space. These channel multipliers are actually the “Y” components of the sRGB ICC profile’s XYZ primaries. As you might expect, different RGB working spaces have different “Y” values, and so the GIMP 2.9.2 conversions to Luma and Luminance only produce correct results for sRGB images.</p>
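<p>The difference is easy to see in code. A minimal sketch using the standard sRGB linearization formula (the Luminance result here is left in linear light; GIMP re-encodes it for display):</p>

```python
def srgb_to_linear(v):
    # Invert the sRGB tone reproduction curve (companding)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def luma(r, g, b):
    # Weighted sum of the *encoded* channel values
    return 0.222 * r + 0.717 * g + 0.061 * b

def luminance(r, g, b):
    # Same weights applied to *linearized* values:
    # radiometrically correct, physically meaningful
    return (0.222 * srgb_to_linear(r)
            + 0.717 * srgb_to_linear(g)
            + 0.061 * srgb_to_linear(b))

# The same midtone green gives quite different gray values:
print(luma(0.2, 0.6, 0.2))       # about 0.49
print(luminance(0.2, 0.6, 0.2))  # about 0.24 (in linear light)
```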

<figure class='big-vid'>
<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/rgb-luminance-conversion-to-black-and-white.jpg" alt=""  />
<figcaption style='text-align:left; max-width:772px; margin:0 auto;'>
<strong>GIMP 2.9 sRGB Luminance and Luma conversions to black and white</strong><br/>
Click to compare sRGB Luminance and Luma conversions to black and white:<br><span class="toggle-swap" data-fig-swap="rgb-luminance-conversion-to-black-and-white.jpg">1. “Colors/Desaturate/Luminance” conversion to black and white</span>
<span class="toggle-swap" data-fig-swap="rgb-luma-conversion-to-black-and-white.jpg">2. “Colors/Desaturate/Luma” conversion to black and white</span>
</figcaption>
</figure>



<h3 id="decomposing-from-srgb-to-lab">Decomposing from sRGB to LAB<a href="#decomposing-from-srgb-to-lab" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Decomposing to LAB does use hard-coded sRGB parameters and so will produce wrong results in other RGB working spaces. </p>
<p>In GIMP 2.8, decomposing an sRGB image to LAB produced flatly wrong results.
In GIMP 2.9.2, decomposing an sRGB image to LAB does produce mathematically correct results. But if you use “drag and drop” to pull the decomposed grayscale layers over to your sRGB layer stack, there is still a small error in the resulting RGB layer. Figure 3 below illustrates the problem:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/red-green-blue-glass-color-LAB-L-mathematically-correct.jpg" alt="RGB Glass Color LAB L Mathematically Correct"  />
<figcaption style='text-align: left; max-width: 768px; margin:0 auto;'>
<strong>Decomposing to LAB and retrieving the LAB Lightness (“L”) channel</strong><br/>
<em>Click the links below the image to see the original color image and the results of decomposing to LAB plus “dragging and dropping the L channel” in GIMP 2.8 vs GIMP 2.9.</em>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color-LAB-L-mathematically-correct.jpg">1. Mathematically correct conversion to LAB Lightness</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color-LAB-L-gimp29-drag-drop.jpg">2. GIMP 2.9.2 decompose to LAB + drag and drop (a little wrong)</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-gimp28-incorrect-LAB-L-to-RGB.jpg">3. GIMP 2.8 decompose to LAB + drag and drop (not done on linearized RGB, so results are very wrong)</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color.jpg">4. The original color layer that was decomposed to LAB</span>
<span class="toggle-swap" data-fig-swap="xicclu-lstar-lab-l-srgb-trc.png">5. Difference between the LAB and sRGB companding curves (the reason why “drag and drop” in GIMP 2.9 produces slightly wrong results)</span>
</figcaption>
</figure>


<p>Assuming you start with an image in the regular sRGB color space, then:</p>
<ul class="double-space">
<li>In GIMP 2.9.2, decomposing a layer to LAB produces mathematically correct results.

<p>However, dragging the resulting grayscale channels back to the RGB XCF color stack results in a slightly wrong result. This is because the dropped grayscale layer(s), which don’t have an embedded ICC profile, are assumed to be encoded using the sRGB <a title="Bruce Lindbloom's Equations for converting from RGB and LAB to XYZ" href="http://brucelindbloom.com/index.html?Eqn_RGB_to_XYZ.html">companding curve</a> (Tone Reproduction Curve, “TRC”), when really they are encoded using the LAB companding curve. This is a color management problem that can be solved by enabling GIMP to do grayscale color management (all that’s needed is a little developer time — did I mention that GIMP really does need more developers?).</p>

<p>As an incredibly important aside, a mathematically correct conversion from sRGB to LAB Lightness and back to sRGB produces exactly the same thing as using GIMP 2.9.2’s “Colors/Desaturate/Luminance” option to change an sRGB image from color to black and white.</p></li>

<li>In GIMP 2.8, decomposing a layer to LAB produces wildly mathematically incorrect results, and dragging the resulting channel(s) back to the RGB XCF color stack also produces wildly mathematically incorrect results. So older GIMP tutorials on using the LAB Lightness channel to convert an image to black and white won’t produce anywhere near the same results when using GIMP 2.9/GIMP 2.10.</li> 
</ul>
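<p>To see how far apart the two companding curves are, here is a sketch using the standard CIE L* and sRGB encoding formulas, both rescaled to the 0–1 range (these are the textbook equations, not GIMP’s internal code):</p>

```python
def lab_companding(y):
    # CIE L* encoding of relative luminance Y, rescaled to 0..1
    if y > 0.008856:
        return (116.0 * y ** (1.0 / 3.0) - 16.0) / 100.0
    return 903.3 * y / 100.0

def srgb_companding(y):
    # sRGB TRC encoding of a linear value
    return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

# At middle gray (18% reflectance) the two curves disagree noticeably --
# this mismatch is the small "drag and drop" error described above:
y = 0.18
print(lab_companding(y))   # about 0.50
print(srgb_companding(y))  # about 0.46
```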

<p>If you’d like to know more about “LAB Lightness to black and white”, the following two-part article untangles the massive amounts of confusion regarding converting an RGB image to black and white using the LAB Lightness channel:</p>
<ol>
<li><a title="LAB Tutorial, Part 1, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/lab-lightness-to-black-and-white-gimp28.html">LAB Lightness to black and white using GIMP 2.8</a>. </li>
<li><a title="LAB Tutorial, Part 2, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/lab-lightness-to-black-and-white-gimp29-photoshop.html">LAB Lightness to black and white using GIMP 2.9 and PhotoShop</a> (the typical PhotoShop tutorial on using the LAB Lightness channel to convert to black and white does produce mathematically <em>in</em>correct results).</li>
</ol>


<h3 id="lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-">LCH: the actually usable replacement for the entirely inadequate color space known as “HSV”<a href="#lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>LCH calculations do use hard-coded sRGB parameters, and so will produce wrong results in other RGB working spaces.</p>
<p><a title="Wikipedia: HSL and HSV" href="https://en.wikipedia.org/wiki/HSL_and_HSV">HSV</a> (“Hue/Saturation/Value”) is a <a title="Wikipedia: HSL and HSV Disadvantages" href="https://en.wikipedia.org/wiki/HSL_and_HSV#Disadvantages">sad little color space</a> designed for <a title="Wikipedia: HSL and HSV Motivations" href="https://en.wikipedia.org/wiki/HSL_and_HSV#Motivation">fast processing on slow computers, way back in the stone age of digital processing</a>. HSV is OK for picking colors from a color wheel. But it’s really wretched for just about any other editing application, because despite the fact that “HSV” stands for “Hue/Saturation/Value”, you actually can’t adjust color and tonality separately in the HSV color space.</p>
<p>“LCH” stands for “Lightness, Chroma, Hue”. LCH is mathematically derived from the <a title="Nine Degrees Below Photography: A small guided tour of color patches as located in the CIELAB reference color space." href="http://ninedegreesbelow.com/photography/pictures-of-color-spaces.html">CIELAB reference color space</a>, which in turn is a perceptually uniform transform of the <a title="Nine Degrees Below Photography: Completely Painless Programmer's Guide to XYZ, RGB, ICC, xyY, and TRCs" href="http://ninedegreesbelow.com/photography/xyz-rgb.html">CIEXYZ reference color space</a>. Unlike HSV, LCH is a physically meaningful color space that allows you to edit separately for color and tonality.</p>
<p>Very roughly speaking:</p>
<ul>
<li>LCH <em>Lightness</em> corresponds to HSV <em>Value</em>.</li>

<li>LCH <em>Chroma</em> corresponds to HSV <em>Saturation</em>.</li>

<li>LCH <em>Hue</em> corresponds to HSV <em>Hue</em> (the names are the same, but the two blend modes are based on very different mathematics).</li>

<li>LCH <em>Color</em> is a combination of LCH Chroma and Hue, and corresponds to HSV <em>Color</em>, which is a combination of HSV Hue and Saturation (again, the names are the same, but the two blend modes are based on very different mathematics).</li></ul>

<p>LCH blend modes and painting are a game-changing addition to high bit depth GIMP editing capabilities. If you’d like to see examples of what you can do with LCH, that you can’t even come close to doing with HSV, I’ve written a couple of tutorials on using GIMP’s LCH color space capabilities:</p>

<ol class="double-space">
<li><a title="LCH Blend modes tutorial, Nine Degrees Below Photograhy" href="http://ninedegreesbelow.com/photography/gimp-lch-blend-modes.html">A tutorial on GIMP’s very awesome LCH Blend Modes</a>, which shows how to use GIMP’s new LCH blend modes to repair a badly damaged image, and then to colorize a black and white rendering of the image.</li>

<li><a title="Tutorial on using LCH, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/high-bit-depth-gimp-tutorial-edit-tonality-color-separately.html">Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities</a>, which shows how to use GIMP’s new LCH blend modes to edit separately for color and tonality. </li>
</ol>

<figure class='big-vid'>
<img width="772" height="" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/patch-front-fish.jpg" alt="Compare LCH vs HSV when restoring color.">
<figcaption style='max-width: 772px; text-align:left; margin:0 auto;'>Restoring color to a damaged image: LCH Color blend mode vs the HSV Color blend mode: The LCH Color blend mode produces smooth, believable color transitions. The HSV Color blend mode produces very splotchy results.
</figcaption>
</figure>

<figure class='big-vid'>
<img width="772" height="" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/color-blend-modes-vs-tonality.jpg" alt="LCH vs HSV when changing color.">
<figcaption style='max-width: 772px; text-align:left; margin:0 auto;'>Changing an image’s color: LCH Color blend mode vs HSV Color blend mode: The LCH Color blend mode changes the image color without modifying the image tonality, whereas the HSV Color blend mode simultaneously changes tonality along with color (HSV blending with blue made the tonality darker, HSV blending with yellow made the tonality lighter).</figcaption>
</figure>

<p>I’m not an especially skilled programmer. In fact I find writing code to be a painfully slow exercise. But one major reason why I maintain a <a title="Nine Degrees Below Photography: Patching GIMP for artists and photographers" href="http://ninedegreesbelow.com/photography/patch-gimp-in-prefix-for-artists.html">patched version of high bit depth GIMP</a> is precisely so I can use the LCH color space not just for blending and painting, but also for <a title="GIMP bug report: Add LCH to the color picker" href="https://bugzilla.gnome.org/show_bug.cgi?id=749902">picking colors and as a replacement for the essentially useless HSV “Hue-Saturation” tool</a>. These particular editing capabilities will eventually make it into an official GIMP release, but I didn’t want to wait for “eventually” to happen.</p>

<p><a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/">Click here to go to Part 2</a> of this guide to GIMP 2.9.2!<br>Part 2 discusses using GIMP 2.9.2 to do radiometrically correct editing, unbounded ICC profile conversions, and unclamped editing.</p>
<p><small><strong>All text and images &copy;2015 <a href="http://ninedegreesbelow.com/">Elle Stone</a>, all rights reserved.</strong></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Portrait Lighting Cheat Sheets]]></title>
            <link>https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/</guid>
            <pubDate>Thu, 17 Sep 2015 14:23:35 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/Lighting-Samples.jpg" /><br/>
                <h1>Portrait Lighting Cheat Sheets</h1> 
                <h2>Blender to the Rescue!</h2>  
                <p>Many moons ago <a href="http://blog.patdavid.net/2012/03/visualize-photography-lighting-setups.html" title="Visualize Photography Lighting Setups in Blender">I had written about</a> acquiring a YN-560 speedlight for playing around with off-camera lighting.
At the time I wanted to experiment with how different modifiers might be used in a portrait setting.
Unfortunately, these were lighting modifiers that I didn’t own yet.</p>
<p>I wasn’t going to let that slow me down, though!</p>
<p>If you want to skip the how and why to get straight to the cheat sheets, <a href="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/#the-lighting-cheat-sheets">click here</a>.</p>
<p><a href="http://ir-ltd.net/">Infinite Realities</a> had released a full 3D scan by <a href="http://ir-ltd.net/tag/lee-perry-smith/" title="Possibly NSFW">Lee Perry-Smith</a> of his head that was graciously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons Attribution 3.0">Creative Commons Attribution 3.0 Unported License</a>.
For reference, here is a link to the <a href="http://www.ir-ltd.net/uploads/Infinite_Scan_Ver0.1.rar">object file and textures</a> (80MB) and the <a href="http://www.ir-ltd.net/uploads/Infinite_Scan_Displacements_Ver0.1.rar">displacement maps</a> (65MB) from the Infinite Realities website.</p>
<p>What I did was to bring the high resolution scan and displacement maps into <a href="http://www.blender.org/">Blender</a> and manually created my lights with modifiers in a virtual space.
Then I could simply render what a particular light/modifier would look like with a realistic person being lit in any way I wanted.</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/blender-view-256.png" alt="Blender View Lighting Setup"/>
</figure>

<p>This leads to all sorts of neat freedom to experiment with things to see how they might come out.
Here’s another look at the lede image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/th_Lighting-Samples.jpg" alt="Blender Lighting Samples" />
<figcaption>
Various lighting setups tested in Blender.
</figcaption>
</figure>

<p>I had originally intended to make a nice bundled application that would allow someone to try all sorts of different lighting setups, but my skills in Blender only go so far.
My skills at convincing others to help me didn’t go very far either. :)</p>
<p>So, if you’re ok with navigating around Blender already, feel free to check out <a href="http://blog.patdavid.net/2012/03/visualize-photography-lighting-setups.html" title="Visualize Photography Lighting Setups in Blender">my original blog post</a>
 to download the .blend file and give it a try!
<a href="https://about.me/jimmygunawan/bio">Jimmy Gunawan</a> even took it further and modified the .blend to work with Blender cycles rendering as well.</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/irLcpDdnkcM?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>With the power to create a lighting visualization of any scenario I then had to see if there was something cool I could make for others to use…</p>
<h2 id="the-lighting-cheat-sheets"><a href="#the-lighting-cheat-sheets" class="header-link-alt">The Lighting Cheat Sheets</a></h2>
<p>I couldn’t help but generate some lighting cheat sheets for others to use as a reference.
I’ve seen some different ones around, but I took advantage of having the most patient model in the world to do this with. :)</p>
<p>These were generated by rotating a 20” (<em>virtual</em>) softbox in a circle around the subject at 3 different elevations (0&deg;, 30&deg;, and 60&deg;).</p>
<p><em>Click the caption title for a link to the full resolution files</em>:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/0-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 0 degrees' />
<figcaption>
<a href="0-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 0&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/30-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 30 degrees' />
<figcaption>
<a href="30-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 30&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/60-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 60 degrees' />
<figcaption>
<a href="60-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 60&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<p>Hopefully these might prove useful as a reference for some folks.
Share them, print them out, tape them to your lighting setups! :)
I wonder if we could get some cool folks from the community to make something neat with them?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Softness and Superresolution]]></title>
            <link>https://pixls.us/blog/2015/09/softness-and-superresolution/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/09/softness-and-superresolution/</guid>
            <pubDate>Tue, 08 Sep 2015 17:13:08 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis.jpg" /><br/>
                <h1>Softness and Superresolution</h1> 
                <h2>Experimenting and Clarifying</h2>  
                <p>A small update on how things are progressing (hint: well!) and some neat things the community is playing with.</p>
<p>I have been quiet these past few weeks because I decided I didn’t have enough to do and thought a rebuild/redesign of the <a href="http://static.gimp.org">GIMP website</a> would be fun, apparently.
Well, it <em>is</em> fun and something that couldn’t hurt to do.
So I stepped up to help out.</p>
<!-- more -->
<h2 id="a-question-of-softness"><a href="#a-question-of-softness" class="header-link-alt">A Question of Softness</a></h2>
<p>There was <a href="https://www.facebook.com/groups/speedlightfundamentals/permalink/1627843414142335/">a thread</a> recently on a certain large social network in a group dedicated to off-camera flash.
The thread was started by someone with the comment:</p>
<blockquote>
<p>The most important thing you can do with your speed light is to put some rib <small>[sic]</small> stop sail cloth over the speed light to soften the light.</p>
</blockquote>
<p>Which just about gave me an aneurysm (those that know me and lighting can probably understand why).
Despite some sound explanations about why this won’t work to “soften” the light, there was a bit of back and forth about it.
To make matters worse, even after over 100 comments, <em>nobody</em> bothered to just go out and shoot some sample images to see it for themselves.</p>
<p>So I finally went out and shot some to illustrate and I figured they would be more fun if they were shared 
(I did actually post these <a href="https://discuss.pixls.us/t/light-source-softness/384">on our forum</a>).</p>
<p>I quickly set a lightstand up with a YN560 on it pointed at my garden statue.
I then took a shot with bare flash, one with diffusion material pulled over the flash head, and one with a 20” DIY softbox attached.</p>
<p>Here’s what the setup looked like with the softbox in place:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/softbox-setup.jpg" alt="Soft Light Test - Softbox Setup" width="640" height="480" />
<figcaption>
Simple light test setup (with a DIY softbox in place).
</figcaption>
</figure>

<p>Remember, this was done to demonstrate that simply placing some diffusion fabric over the head of a speedlight does nothing to “soften” the resulting light:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis-bare.jpg" data-swap-src="francis-diffusion-panel.jpg" alt="Softness test image bare flash" width="640" height="640" />
<figcaption>
Bare flash result.  Click to compare with diffusion material.
</figcaption>
</figure>

<p>This shows clearly that diffusion material over the flash head does <em>nothing</em> to affect the “softness” of the resulting light.</p>
<p>For a comparison, here is the same shot with the softbox being used:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis-softbox.jpg" data-swap-src="francis-diffusion-panel.jpg" alt="Softness test image softbox" width="640" height="640" />
<figcaption>
Same image with the softbox in place.  Click to compare with diffusion material.
</figcaption>
</figure>


<p>I also created some crops to help illustrate the difference up close:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-bare.jpg" alt="Softness test crop #1" width="640" height="640" />
<figcaption>
Click to compare: 
<span class='toggle-swap' data-fig-swap='crop-1-bare.jpg'>Bare Flash</span>
<span class='toggle-swap' data-fig-swap='crop-1-diffusion.jpg'>With Diffusion</span>
<span class='toggle-swap' data-fig-swap='crop-1-softbox.jpg'>With Softbox</span>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-bare.jpg" alt="Softness test crop #2" width="640" height="640" />
<figcaption>
Click to compare: 
<span class='toggle-swap' data-fig-swap='crop-2-bare.jpg'>Bare Flash</span>
<span class='toggle-swap' data-fig-swap='crop-2-diffusion.jpg'>With Diffusion</span>
<span class='toggle-swap' data-fig-swap='crop-2-softbox.jpg'>With Softbox</span>
</figcaption>
</figure>

<p>Hopefully this demonstration can help put to rest any notion of softening a light by draping diffusion material directly over the flash head (at least at normal flash-to-subject distances).  At the end of the day, the “softness” quality of a light is a function of the <em>apparent size</em> of the light source <em>relative to the subject</em>. (The sun is the biggest light source I know of, but it’s so far away that its quality is quite harsh.)</p>
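<p>To put some rough numbers on that, here is a quick sketch computing the apparent angular size of a few light sources (the sizes and distances are made up for illustration):</p>

```python
import math

def apparent_size_deg(source_width_m, distance_m):
    # Angular size of a light source as seen from the subject, in degrees
    return math.degrees(2 * math.atan(source_width_m / (2 * distance_m)))

# Made-up but plausible numbers: a 5 cm bare flash head and a
# 50 cm softbox, both 2 m from the subject
bare = apparent_size_deg(0.05, 2.0)     # roughly 1.4 degrees
softbox = apparent_size_deg(0.50, 2.0)  # roughly 14 degrees

# The sun: enormous, but very far away
sun = apparent_size_deg(1.39e9, 1.496e11)  # roughly 0.5 degrees
```

<p>Draping fabric over the flash head barely changes the source width, so the apparent size (and the softness) stays essentially the same; the softbox, meanwhile, is “bigger” than the sun as far as the subject is concerned.</p>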
<h2 id="a-question-of-scaling"><a href="#a-question-of-scaling" class="header-link-alt">A Question of Scaling</a></h2>
<p>On <a href="https://discuss.pixls.us">discuss</a>, member <a href="https://discuss.pixls.us/users/paperdigits">Mica</a> <a href="https://discuss.pixls.us/t/whats-your-workflow-for-up-scaling-images/375/7">asked an awesome question</a> about what our workflows are for adding resolution (upsizing) to an image.
There were a bunch of great suggestions from the community.</p>
<p>There was one suggestion I wanted to talk about briefly, as I thought it was interesting from a technical perspective.</p>
<p>Both Hasselblad and Olympus announced not too long ago the ability to drastically increase the resolution of images in their cameras using a “sensor-shift” technology that shifts the sensor by a pixel or so while shooting multiple frames, then combines the results into a much higher megapixel image (200MP in the case of Hasselblad, 40MP for the Olympus).</p>
<p>It turns out we can do the same thing manually by burst shooting a series of images while handholding the camera (the subtle movement of our hand while shooting provides the requisite “shift” to the sensor).
Then we simply combine the images, upscale, and average the results to get a higher resolution result.</p>
<p>The basic workflow uses <a href="http://hugin.sourceforge.net/">Hugin</a>’s <code>align_image_stack</code>, <a href="http://imagemagick.org/script/index.php">Imagemagick</a>’s <code>mogrify</code>, and a <a href="http://gmic.eu/">G’MIC</a> mean-blend script to achieve the results.</p>
<ol>
<li>Shoot a bunch of handheld images in burst mode (if available).</li>
<li>Develop raw files if that’s what you shot.</li>
<li>Scale images up to 4x resolution (200% in width and height).  Straight nearest-neighbor type of upscale is fine.<ul>
<li>In your directory of images, create a new sub-directory called <em>resized</em>.</li>
<li>In your directory of images, run <code>mogrify -scale 200% -format tif -path ./resized *.jpg</code> if you use jpg’s, otherwise change as needed.
This will create a directory full of upscaled images.</li>
</ul>
</li>
<li>Align the images using Hugin’s <code>align_image_stack</code> script.<ul>
<li>In the <em>resized</em> directory, run <code>/path/to/align_image_stack -a OUT file1.tif file2.tif ... fileX.tif</code>
The <code>-a OUT</code> option will prefix all your new images with <code>OUT</code>.</li>
<li>I move all of the <code>OUT*</code> files to a new sub-directory called <code>aligned</code>.</li>
</ul>
</li>
<li>In the <code>aligned</code> directory, you now only need to mean average all of the images together.<ul>
<li>Using Imagemagick: <code>convert OUTfile*.tif -evaluate-sequence mean output.bmp</code></li>
<li>Using G’MIC: <code>gmic video-avg.gmic -avg \&quot; *.tif \&quot; -o output.bmp</code></li>
</ul>
</li>
</ol>
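<p>At its heart, steps 3 and 5 are just a nearest-neighbor upscale followed by a per-pixel mean. Here is a minimal pure-Python sketch of the idea (the tiny 2x2 “frames” are made up for illustration; the real work is done by the tools above):</p>

```python
def upscale_2x(img):
    # Nearest-neighbor 2x upscale of a 2-D list of pixel values
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]  # double each pixel
        out.append(wide)
        out.append(list(wide))                   # double each row
    return out

def mean_blend(frames):
    # Per-pixel mean across equally sized frames
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]

# Two hypothetical 2x2 "frames" differing slightly (the hand-shake shift)
frames = [upscale_2x([[10, 20], [30, 40]]),
          upscale_2x([[12, 18], [32, 38]])]
result = mean_blend(frames)  # a 4x4 image of averaged values
```

<p>The averaging is what recovers the sub-pixel detail: each slightly shifted frame votes on what the upscaled pixels should be.</p>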
<p>I used 7 burst capture images from an iPhone 6+ (default resolution 3264x2448).
This is the test image:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/Super-full.jpg" alt="Superresolution test image" width="640" height="480" />
<figcaption>
Sample image, red boxes show 100% crop areas.
</figcaption>
</figure>

<p>Here is a 100% crop of the first area:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-base.jpg" alt="Superresolution crop #1 example" width="500" height="250" />
<figcaption>
100% crop of the base image, straight upscale.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-super.jpg" alt="Superresolution crop #1 example result" width="500" height="250" />
<figcaption>
100% crop, super resolution process result.
</figcaption>
</figure>

<p>The second area crop:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-base.jpg" alt="Superresolution crop #2 example " width="500" height="250" />
<figcaption>
100% crop of the base image, straight upscale.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-super.jpg" alt="Superresolution crop #2 example result" width="500" height="250" />
<figcaption>
100% crop, super resolution process result.
</figcaption>
</figure>


<p>Obviously this doesn’t replace the ability to have that many raw pixels available in a single exposure, but if the subject is relatively static this method can do quite well to help increase the resolution.
As with any mean/median blending technique, a nice side-effect of the process is great noise reduction as well…</p>
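<p>That noise reduction follows the usual statistics of averaging: the standard deviation of the mean of N independent frames shrinks by roughly 1/√N. A quick simulation (with a made-up noise level of 10) bears this out:</p>

```python
import math, random

random.seed(42)

def noise_after_averaging(n_frames, sigma=10.0, samples=20000):
    # Empirical standard deviation of the mean of n_frames
    # zero-mean noisy readings, estimated over many trials
    vals = [sum(random.gauss(0, sigma) for _ in range(n_frames)) / n_frames
            for _ in range(samples)]
    mean = sum(vals) / samples
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / samples)

# Theory says sigma / sqrt(N): with 7 frames, 10 drops to about 3.8
measured = noise_after_averaging(7)
predicted = 10.0 / math.sqrt(7)
```

<p>So the 7-frame burst above should roughly cut the noise by a factor of 2.6, on top of the resolution gain.</p>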
<p>Not sure if this warrants a full article post, but may consider it for later.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Freaky Details (Calvin Hollywood)]]></title>
            <link>https://pixls.us/articles/freaky-details-calvin-hollywood/</link>
            <guid isPermaLink="true">https://pixls.us/articles/freaky-details-calvin-hollywood/</guid>
            <pubDate>Mon, 31 Aug 2015 19:33:50 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/freaky.jpg" /><br/>
                <h1>Freaky Details (Calvin Hollywood)</h1> 
                <h2>Replicating Calvin Hollywood's Freaky Details in GIMP</h2>  
                <p>German photographer/digital artist/photoshop trainer <a href="http://www.calvinhollywood-blog.com">Calvin Hollywood</a> has a rather unique style to his photography. It’s a sort of edgy, gritty, hyper-realistic result, almost a blend between illustration and photography.</p>
<figure>
<a href="http://www.calvinhollywood-blog.com/portfolio/">
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/calvin-thumbs.jpg" alt="Calvin Hollywood Examples" width="470" height="315" />
</a>
</figure>

<p>As part of one of his courses, he talks about a technique for accentuating details in an image that he calls “Freaky Details”.  </p>
<p>Here is Calvin describing this technique using Photoshop:</p>
<div>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/ZV9u0Wu8L0M" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>In my meandering around different retouching tutorials I came across it a while ago, and wanted to replicate the results in <a href="http://www.gimp.org">GIMP</a> if possible. There were a couple of problems that I ran into when trying to replicate the exact same workflow:  </p>
<ol>
<li>Lack of a “Vivid Light” layer blend mode in GIMP</li>
<li>Lack of a “Surface Blur” in GIMP</li>
</ol>
<p>Those problems have been rectified (and I have more patience these days to figure out what exactly was going on), so let’s see what it takes to replicate this effect in GIMP!</p>
<h2 id="replicating-freaky-details">Replicating Freaky Details<a href="#replicating-freaky-details" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="requirements">Requirements<a href="#requirements" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The only extra thing you’ll need to be able to replicate this effect is <a href="http://gmic.eu/">G’MIC for GIMP</a>.</p>
<p class='aside'>
You don’t <em>technically</em> need G’MIC to make this work, but the process of manually creating a <strong>Vivid Light</strong> layer is tedious and error-prone in GIMP right now.
Also, you won’t have access to G’MIC’s Bilateral Blur for smoothing. 
And, seriously, it’s G’MIC - you should have it anyway for all the other cool stuff it does!
</p>

<h3 id="summary-of-steps">Summary of Steps<a href="#summary-of-steps" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Here’s the summary of steps we are about to walk through to create this effect in GIMP:  </p>
<ol>
<li>Duplicate the background layer.</li>
<li>Invert the colors of the top layer.</li>
<li>Apply “Surface Blur” to top layer.</li>
<li>Set top layer blend mode to “Vivid Light”.</li>
<li>New layer from visible.</li>
<li>Set layer blend mode of new layer to “Overlay”, hide intermediate layer.</li>
</ol>
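<p>For the curious, here is a rough per-channel sketch of the vivid light and overlay math used in steps 4 and 6 (channel values in 0 to 1; the handling of the blend extremes is a simplification). It also shows why the trick works: wherever the blurred layer still matches the exact inverse (i.e. flat areas), vivid light collapses to neutral 0.5 grey, and overlaying 0.5 grey leaves the base image untouched, so only the detail regions get emphasized:</p>

```python
def vivid_light(base, blend):
    # Vivid light: color burn below 0.5, color dodge above
    if blend <= 0.5:
        return 0.0 if blend == 0 else max(0.0, 1 - (1 - base) / (2 * blend))
    return 1.0 if blend == 1 else min(1.0, base / (2 * (1 - blend)))

def overlay(base, blend):
    # Overlay: multiply below 0.5, screen above
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

# A flat area: the blurred layer equals the exact inverse of the base
flat = vivid_light(0.3, 1 - 0.3)   # collapses to neutral 0.5 grey
untouched = overlay(0.3, flat)     # overlaying 0.5 grey changes nothing
```

<p>The surface blur is what breaks that cancellation near edges and fine texture, and those deviations are exactly the “freaky details” that get layered back on.</p>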
<p>There are just a couple of small things to point out though, so keep reading to be aware of them!  </p>
<h3 id="detailed-steps">Detailed Steps<a href="#detailed-steps" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’m going to walk through each step to make sure it’s clear, but first we need an image to work with!  </p>
<p>As usual, I’m off to <a href="http://www.flickr.com/creativecommons">Flickr Creative Commons</a> to search for a <a href="https://creativecommons.org/" title="Creative Commons">CC licensed</a> image to illustrate this with. 
I found an awesome portrait taken by the <a href="https://www.flickr.com/photos/thenationalguard/">U.S. National Guard/Staff Sergeant Christopher Muncy</a>:</p>
<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base.jpg" alt="New York National Guard, on Flickr" width="640" height="808" />
<figcaption>
<a href="https://www.flickr.com/photos/thenationalguard/15941126053">New York National Guard</a> by <a href="https://www.flickr.com/photos/thenationalguard/">U.S. National Guard/Staff Sergeant Christopher Muncy</a> 
on Flickr (<span class='cc'><a href="https://creativecommons.org/licenses/by/2.0/" title="Creative Commons Attribution">cb</a></span>).<br/>
Airman First Class Anthony Pisano, a firefighter with the New York National Guard’s 106th Civil Engineering Squadron, 106th Rescue Wing conducts a daily equipment test during a major snowstorm on February 17, 2015.<br/>
(New York Air National Guard / Staff Sergeant Christopher S Muncy / released)
</figcaption>
</figure>

<p>This is a great image to test the effect, and to hopefully bring out the details and gritty-ness of the portrait.  </p>
<h4 id="1-2-duplicate-background-layer-and-invert-colors">1./2. Duplicate background layer, and invert colors<a href="#1-2-duplicate-background-layer-and-invert-colors" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>So, duplicate your base image layer (Background in my example).  </p>
<p><span class="Cmd">Layer → Duplicate<br> (Shift-Ctrl-D,Shift-⌘-D)
</span></p>
<p>I will usually name the duplicate layer something descriptive, like <strong>“Temp”</strong> ;).  </p>
<p>Next we’ll just invert the colors on this <strong>“Temp”</strong> layer.  </p>
<p><span class="Cmd">Colors → Invert</span></p>
<p>So right now, we should be looking at this on our canvas:  </p>
<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert.jpg" alt="GIMP Freaky Details Inverted Image" width="640" height="808" />
<figcaption>
The inverted duplicate of the base layer.
</figcaption>
</figure>


<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert-Layers.png" alt="GIMP Freaky Details Inverted Image Layers" width="249" height="213" />
<figcaption>
What the Layers dialog should look like.
</figcaption>
</figure>

<p>Now that we’ve got our inverted <strong>“Temp”</strong> layer, we just need to apply a little blur.  </p>
<h4 id="3-apply-surface-blur-to-temp-layer">3. Apply “Surface Blur” to Temp Layer<a href="#3-apply-surface-blur-to-temp-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are a couple of different ways you could approach this. Calvin Hollywood’s tutorial explicitly calls for a Photoshop <strong>Surface Blur</strong>. I think part of the reason to use a <strong>Surface Blur</strong> vs. a <strong>Gaussian Blur</strong> is to cut down on any halos that would occur along edges of high contrast.  </p>
<p>There are three main methods of blurring this layer that you could use:  </p>
<ol>
<li><p>Straight Gaussian Blur (easiest/fastest, but may halo - worst results)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → Blur → Gaussian Blur</span></p>
</li>
<li><p>Selective Gaussian Blur (closer to true “Surface Blur”)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → Blur → Selective Gaussian Blur</span></p>
</li>
<li><p>G’MIC’s Smooth [bilateral] (closest to true “Surface Blur”)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → G’MIC → Repair → Smooth [bilateral]</span></p>
</li>
</ol>
<p>I’ll leave it as an exercise for the reader to try some different methods and choose one they like. (At this point I personally pretty much just always use G’MIC’s Smooth [bilateral] - this produces the best results by far).  </p>
<p>For the Gaussian Blurs, I’ve had good luck with radius values around 20% - 30% of an image dimension. As the blur radius increases, you’ll be acting more on larger local contrasts (as opposed to smaller details) and run the risk of halos. So just keep an eye on that.  </p>
<p>So, let’s try applying some G’MIC Bilateral Smoothing to the <strong>“Temp”</strong> layer and see how it looks!  </p>
<p>Run the command:  </p>
<p><span class="Cmd" >Filters → G’MIC → Repair → Smooth [bilateral]</span></p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-bilateral.png" alt="GIMP Freaky Details G'MIC Bilateral Filter" width="960" height="735" />
<figcaption>
The values I used in this example for Spatial/Value Variance.
</figcaption>
</figure>

<p>The values you want to fiddle with are the Spatial Variance and Value Variance (25 and 20 respectively in my example). You can see the values I tried for this walkthrough, but I encourage you to <em>experiment a bit on your own as well</em>!  </p>
<p>Now we should see our canvas look like this:  </p>
<figure >
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Bilateral.jpg" alt="GIMP Freaky Details G'MIC Bilateral Filter Result" width="640" height="808" />
<figcaption>
Our <strong>“Temp”</strong> layer after applying G’MIC Smoothing [bilateral]
</figcaption>
</figure>


<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert-Layers.png" alt="GIMP Freaky Details Inverted Image Layers" width="249" height="213" />
<figcaption>
Layers should still look like this.
</figcaption>
</figure>


<p>Now we just need to blend the <strong>“Temp”</strong> layer with the base background layer using a <strong>“Vivid Light”</strong> blending mode…  </p>
<h4 id="4-5-set-temp-layer-blend-mode-to-vivid-light-new-layer">4./5. Set <em>Temp</em> Layer Blend Mode to <em>Vivid Light</em> &amp; New Layer<a href="#4-5-set-temp-layer-blend-mode-to-vivid-light-new-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Now we need to blend the <strong>“Temp”</strong> layer with the Background layer using a <strong>“Vivid Light”</strong> blending mode. Lucky for me, I’m friendly with the G’MIC devs, so I asked nicely, and ﻿<a href="https://tschumperle.users.greyc.fr/">David Tschumperlé</a> added this blend mode for me.  </p>
<p>So, again we start up G’MIC:  </p>
<p><span class="Cmd">Filters → G’MIC → Layers → Blend [standard] - Mode: Vivid Light</span></p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Vivid.png" alt="GIMP Freaky Details Vivid Light Blending" width="960" height="735" />
<figcaption>
G’MIC <strong>Vivid Light</strong> blending mode, pay attention to <span style="color:green;">Input/Output!</span>
</figcaption>
</figure>

<p>Pay careful attention to the <span style="color:green;">Input/Output</span> portion of the dialog. You’ll want to set the <strong>Input Layers</strong> to <strong>All visibles</strong> so it picks up the <strong>Temp</strong> and <strong>Background</strong> layers. You’ll also probably want to set the <strong>Output</strong> to <strong>New layer(s)</strong>.  </p>
<p>When it’s done, you’re going to be staring at a very strange looking layer, for sure:  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Vivid.jpg" alt="GIMP Freaky Details Vivid Light Blend Mode" width="640" height="808" />
<figcaption>
Well, sure it looks weird out of context…
</figcaption>
</figure>


<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Vivid-Layers.png" alt="GIMP Freaky Details Vivid Light Blend Mode Layers" width="249" height="258" />
<figcaption>
The layers should now look like this.
</figcaption>
</figure>


<p>Now all that’s left is to hide the <strong>“Temp”</strong> layer, and set the new <strong>Vivid Light</strong> result layer to <strong>Overlay</strong> layer blending mode…  </p>
<h4 id="6-set-vivid-light-result-to-overlay-hide-temp-layer">6. Set Vivid Light Result to Overlay, Hide <em>Temp</em> Layer<a href="#6-set-vivid-light-result-to-overlay-hide-temp-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>We’re just about done. Go ahead and hide the <strong>“Temp”</strong> layer from view (we won’t need it anymore - you could delete it as well if you wanted to).  </p>
<p>Finally, set the G’MIC <strong>Vivid Light</strong> layer output to <strong>Overlay</strong> layer blend mode:  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Final-Layers.png" alt="GIMP Freaky Details Final Blend Mode Layers" width="249" height="259" />
<figcaption>
Set the resulting G’MIC output layer to <strong>Overlay</strong> blend mode.
</figcaption>
</figure>


<p>The results we should be seeing will have enhanced details and contrast, and should look like this (click to compare with the original image):  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Final.jpg" alt="GIMP Freaky Details Final" data-swap-src="Base.jpg" width="640" height="808" />
<figcaption>
Our final results (whew!)<br/>
(click to compare to original)
</figcaption>
</figure>


<p>This technique will emphasize any noise in an image so there may be some masking and selective application required for a good final effect.</p>
<h3 id="summary">Summary<a href="#summary" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is not an effect for everyone. I can’t stress that enough. It’s also not an effect for every image. But if you find an image it works well on, I think it can really do some interesting things. It can definitely bring out a very dramatic, gritty effect (it works well with nice hard rim lighting and textures).  </p>
<p>The original image used for this article is another great example of one that works well with this technique:</p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Final2-curves.jpg" alt="GIMP Freaky Details Alternate Final" data-swap-src="Base2.jpg" width="640" height="962" />
<figcaption>
<a href="http://www.flickr.com/photos/shakeskc/6519028411/">After a Call</a> by <a href="http://markshaiken.com/">Mark Shaiken</a> on Flickr. (<span class='cc'><a href="https://creativecommons.org/licenses/by-nc-sa/2.0/" title="Creative Commons Attribution Non-Commercial Share-Alike">cbna</a></span>)
</figcaption>
</figure>

<p>I had muted the colors in this image before applying some Portra-esque color curves to the final result.</p>
<p>Finally, a <strong>BIG THANK YOU</strong> to <a href="https://tschumperle.users.greyc.fr/">David Tschumperlé</a> for taking the time to add a <strong>Vivid Light</strong> blend mode in G’MIC.  </p>
<p>Try the method out and let me know what you think or how it works out for you! And as always, if you found this useful in any way, please share it, pin it, like it, or whatever you kids do these days…  </p>
<p>This tutorial was originally published <a href="http://blog.patdavid.net/2013/02/calvin-hollywood-freaky-details-in-gimp.html">here</a>.</p>
<h2 id="addendum">Addendum<a href="#addendum" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>For those looking for a faster/easier way to achieve this effect, it has now been integrated as a filter into <a href="http://gmic.eu/">G’MIC</a>. (Again, thanks to David Tschumperlé!)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Notes from the dark(table) Side]]></title>
            <link>https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/</guid>
            <pubDate>Fri, 14 Aug 2015 14:32:34 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/darktable_2.jpg" /><br/>
                <h1>Notes from the dark(table) Side</h1> 
                <h2>A review of the Open Source Photography Course</h2>  
                <p>We recently posted about the Open Source Photography Course from photographer Riley Brandt.
We now also have a review of the course as well.</p>
<p>This review is actually by one of the <a href="http://www.darktable.org">darktable</a> developers, <a href="http://houz.org">houz</a>!
He had originally <a href="https://discuss.pixls.us/t/review-of-riley-brandts-open-source-photography-course/344/1">posted it on discuss</a> as a topic but I think it deserves a blog post instead.
(When a developer from a favorite project speaks up, it’s usually worth listening…)</p>
<p>Here is houz’s review:</p>
<hr>
<h2 id="the-open-source-photography-course-review"><a href="#the-open-source-photography-course-review" class="header-link-alt">The Open Source Photography Course Review</a></h2>
<h3 id="by-houz"><a href="#by-houz" class="header-link-alt">by houz</a></h3>
<figure>
<img src="https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/houz.jpg" alt="Author houz headshot" />
</figure>


<p>It seems that there is no topic to discuss <a href="https://discuss.pixls.us/t/the-open-source-photography-course/263">The Open Source Photography Course</a> yet so let’s get started.</p>
<h3 id="disclaimer"><a href="#disclaimer" class="header-link-alt">Disclaimer</a></h3>
<p>First of all, as a darktable developer I am biased so take everything I write with a grain of salt. Second, I didn’t pay for my copy of the videos but Riley was kind enough to provide a free copy for me to review. So add another pinch of salt. I will therefore not tell you if I would encourage you to buy the course. You can have my impressions nevertheless.</p>
<h3 id="review"><a href="#review" class="header-link-alt">Review</a></h3>
<p>I won’t say anything about the GIMP part, not because I wouldn’t know how to use that software, but because it’s relatively short and I just didn’t notice anything to comment on. It covers the solid basics of how to use GIMP, and the emphasis on layer masks is really important in real-world usage.</p>
<!-- more -->
<p>Now for the darktable part, I have to say that I liked it a lot. It showcases a viable workflow and is relatively complete – not by explaining every module and becoming the audio book of the user manual but by showing at least one tool for every task. And as we all know, in darktable there are many ways to skin a cat, so concentrating on your favourites is a good thing.</p>
<p>What I also appreciate is that Riley managed to cut the individual topics into manageable chunks of around 10 minutes or less, so you can easily watch them in your lunch break and have no problem coming back to a topic later to find what you are looking for.</p>
<p>Before this starts to sound like an advertisement, I will just point out some small nitpicks I noticed while watching the videos. Most of these are not errors in the videos but just extra bits of information that might make your workflow even smoother, so it’s more of an addendum than an erratum.</p>
<ul>
<li>When going through your images on lighttable you can either zoom in till you only see a single image (alt-1 is a shortcut for that) or hold the z key pressed. Both are shown in the videos. The latter can quickly become tedious since releasing z brings you right back to where you were. There are however two more keyboard shortcuts that are not assigned by default under views&gt;lighttable: ‘sticky preview’ and ‘sticky preview with focus detection’. Both work just like normal z and ctrl-z, just without the need to keep the key pressed. You can assign a key to these, for example by reusing z and ctrl-z.</li>
<li>Color labels can be set with F1 .. F5, similar to rating.</li>
<li>Basecurve and tonecurve allow very fine up/down movement of points with the mouse wheel. Hover over a node and scroll.</li>
<li>Gaussian in shadows&amp;highlights tends to give stronger halos than bilateral in normal use, see <a href="http://www.darktable.org/2012/09/edge-aware-image-development/">the darktable blog</a> for an example.</li>
<li>For profiled denoising it is better to use ‘HSV color’ instead of ‘color’ and ‘HSV lightness’ instead of ‘lightness’; see <a href="http://darktable.org/usermanual/ch03s02s06.html.php">the user manual</a> for details.</li>
<li>When using the mouse wheel to zoom the image you can hold ctrl to get it smaller than fitting to the screen. That’s handy to draw masks over the image border.</li>
<li>When moving the triangles in color zones apart you actually widen the scope of affected values, since the curve gets moved off the center line over a wider range.</li>
<li>Also in color zones: you can change reds and greens in the same instance; there is no need for multiple instances. Riley knows that and used two instances to be able to control the two changes separately.</li>
<li>When loading sidecar files from lighttable, you can even treat a JPEG that was exported from darktable like an XMP file and manually select it, since the JPEGs get the processing data embedded. It’s like a backup of the XMP with a preview. <strong>Caveat:</strong> when using LOTS of mask nodes (mostly with the brush mask) the XMP data might get too big to embed in the JPEG, but in general it works.</li>
<li>The collect module allows you to store presets so you can quickly access often-used search rules. And since presets only store the module settings and not the resulting image set, these will be updated when new images are imported.</li>
<li>In neutral density you can draw a line with the right mouse button, similar to rotating images.</li>
<li>Styles can also be created from darkroom, there is a small button next to the history compression button.</li>
</ul>
<p>So, that’s it from me. Did you watch the videos, too? What was your impression? Do you have any remarks?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Color Curves Matching]]></title>
            <link>https://pixls.us/articles/color-curves-matching/</link>
            <guid isPermaLink="true">https://pixls.us/articles/color-curves-matching/</guid>
            <pubDate>Tue, 04 Aug 2015 19:10:36 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/color-curves-matching/dorothy.jpg" /><br/>
                <h1>Color Curves Matching</h1> 
                <h2>Sample points and matching tones</h2>  
                <p>In my previous post on <a href="https://pixls.us/articles/basic-color-curves/">Color Curves for Toning/Grading</a>, I looked at the basics of what the Curves dialog lets you do in <a href="http://www.gimp.org">GIMP</a>.
I had been meaning to revisit the subject with a little more restraint (the color curve in that post was a little rough and gross, but it was for illustration so I hope it served its purpose).</p>
<p>This time I want to look at the use of curves a little more carefully.
You’d be amazed at the subtlety that gentle curves can produce in toning your images.
Even small changes in your curves can have quite the impact on your final result.
For instance, have a look at the four film emulation curves created by <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html">Petteri Sulonen</a> (if you haven’t read his page yet on creating these curves, it’s well worth your time):</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-original.jpg" alt='Dot Original Headshot' width='550' height='469'>
<figcaption>
Original
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-portra.jpg" alt='Dot Portra NC400 Film' width='550' height='469'>
<figcaption>
Portra<em>esque</em> (Kodak Portra NC400 Film)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-provia.jpg" alt='Dot Fuji Provia Film' width='550' height='469'>
<figcaption>
Provia<em>esque</em> (Fujichrome Provia)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-velvia.jpg" alt='Dot Fuji Velvia Film' width='550' height='469'>
<figcaption>
Velvia<em>esque</em> (Fujichrome Velvia)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-xpro.jpg" alt='Dot crossprocessed C41 Film' width='550' height='469'>
<figcaption>
Crossprocess (E6 slide film in C-41 neg. processing)
</figcaption>
</figure>

<p>I can’t thank Petteri enough for releasing these curves for everyone to use (for us GIMP users, there is a .zip file at the bottom of his post that contains these curves packaged up).
Personally I am a huge fan of the Portra<em>esque</em> curve that he has created.
If there is a person in my images, it’s usually my go-to curve as a starting point.
It really does generate some wonderful skin tones overall.</p>
<p>The problem in generating these curves is that you have to be very, very familiar with the characteristics of the film stocks you are trying to emulate.
I never shot Velvia personally, so it is hard for me to have a reference point when attempting to emulate this type of film.</p>
<p>What we can do, however, is to use our personal vision or sense of aesthetic to begin toning our images to something that we like.  GIMP has some great tools for helping us to become more aware of color and the effects of each channel on our final image.  That is what we are going to explore…</p>
<p class='aside'>
<span>Disclaimer</span>

I cannot stress enough that what we are approaching here is an entirely subjective interpretation of what is pleasing to our own eyes.  Color is a very complex subject and deserves study to really understand.  Hopefully some of the things I talk about here will help pique your interest to push further and experiment!
<br/>
There is no right or wrong, but rather what you find pleasing to your own eye.
</p>



<h2 id="approximating-tones">Approximating Tones<a href="#approximating-tones" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>What we will be doing is using <strong>Sample Points</strong> and the <strong>Curves</strong> dialog to modify the color curves in my image above to emulate something else.  It could be another photograph, or even a painting.</p>
<p>I’ll be focusing on the skin tones, but the method can certainly be used for other things as well.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-original.jpg" alt='Dot Original Headshot' width='550' height='469'>
<figcaption>
My wonderful model.
</figcaption>
</figure>

<p>Start with an image of your own and consider which tones you might like to approximate. For instance, in my image above I want to work on the skin tones to see where that leads me.</p>
<p>Now find an image whose tones you like and would like to approximate. It helps if the image you are targeting already has tones <em>somewhat</em> similar to what you are starting with (for instance, I would look for another image of Caucasian skin with tones similar to mine, as opposed to Asian skin). Keeping the tones at least similar will reduce the violence you’ll do to your final image.</p>
<p>So for my first example, perhaps I would like to use the knowledge that the Old Masters already had in regards to color, and would like to emulate the skin tones from Vermeer’s <em>Girl with the Pearl Earring</em>.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/537px-Johannes_Vermeer_%281632-1675%29_-_The_Girl_With_The_Pearl_Earring_%281665%29.jpg" alt='Johannes Vermeer Girl with the Pearl Earring' width='537' height='768'>
<figcaption>
<a href="http://en.wikipedia.org/wiki/Johannes_Vermeer">Johannes Vermeer</a> - <a href="http://en.wikipedia.org/wiki/Girl_with_a_Pearl_Earring">The Girl With The Pearl Earring (1665)</a>
</figcaption>
</figure>

<p>In GIMP I will have my original image already opened, and will then open my target image as a new layer.  I’ll pull this layer to one side of my image to give me a view of the areas I am interested in (faces and skin).</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-initial.jpg" alt='Vermeer setup GIMP' width='640' height='539'>
</figure>

<p>I will be using <a href="http://docs.gimp.org/en/gimp-sample-point-dialog.html"><strong>Sample Points</strong></a> extensively as I proceed.  Read up on them if you haven’t used them before.  They are basically a means of giving you real-time feedback of the values of a pixel in your image (you can track up to four points at one time).</p>
<p>I will put a first sample point somewhere on the higher skin tones of my base image.  In this case, I will put one on my model’s forehead (we’ll be moving it around shortly, so somewhere in the neighborhood is fine).</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/sample-point-first.png" alt='GIMP first sample point' width='381' height='195'>
</figure>

<p><strong>Ctrl + Left Click</strong> in the ruler area of your main window (shown in <span style="color: #00FF00;">green above</span>), and drag out into your image.  There should be crosshairs across your entire image screen showing you where you are dragging.</p>
<p>When you release the mouse button, you’ve dropped a <strong>Sample Point</strong> onto your image.  You can see it in my image above as a small crosshair with the number <strong>1</strong> next to it.</p>
<p>GIMP <i>should</i> open the sample points dialog for you when you create the first point, but if not you can access it from the image menu under:</p>
<p><span class='Cmd'>Windows → Dockable Dialogs → Sample Points</span></p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-first-dialog.png" alt='Sample points dialog' width='208' height='330'>
</figure>

<p>This is what the dialog looks like.
You can see the RGB pixel data for the first sample point that I have already placed.
As you place more sample points, each will show its data in this dialog.</p>
<p>You can go ahead and place more sample points on your image now.  I’ll place another sample point, but this time I will put it on my target image where the tones seem similar in brightness.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-2-points.jpg" alt='Sample point placed' width='550' height='167'>
</figure>

<p>What I’ll then do is change the data being shown in the <strong>Sample Points</strong> dialog to show HSV data instead of Pixel data.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-value-match.png" alt='Sample points dialog with 2 points' width='208' height='330'>
</figure>

<p>Now, I will shoot for around 85% value on my source image, and try to find a similar value level in similar tones in my target image as well.  Once you’ve placed a sample point, you can continue to move it around and see what values it gives you.  (If you use another tool in the meantime and can no longer move the sample points, select the <strong>Color Picker Tool</strong> to be able to move them again.)</p>
<p>Move the points around your skin tones until you get about the same <strong>Value</strong> for both points.</p>
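The “Value” the dialog reports is the V of HSV, which for an RGB pixel is simply the largest of the three channels expressed as a percentage. A quick sketch in Python (the sample values are hypothetical, for illustration):

```python
def hsv_value_percent(r, g, b):
    """HSV 'Value' of an 8-bit RGB pixel, as a percentage.

    In HSV, V is simply the largest of the three channels.
    """
    return 100 * max(r, g, b) / 255

# Two hypothetical skin-tone samples, one from each image:
print(round(hsv_value_percent(218, 188, 171)))  # roughly 85
print(round(hsv_value_percent(216, 178, 155)))  # roughly 85
```

Two pixels can share the same Value while being quite different colors, which is exactly why we match Value first and then adjust the individual channels.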
<p>Once you have them, make sure your original image layer is active, then start up the curves dialog.</p>
<p><span class='Cmd'>Colors → Curves…</span></p>
<p>Now here is something really handy to know while using the Curves dialog: if you hover your mouse over your image, you’ll notice that the cursor is a dropper - you can click and drag on an area of your image, and the corresponding value will show up in your curves dialog for that pixel (or averaged area of pixels if you turn that on).  </p>
<p>So click and drag to about the same pixel you chose in your original image for the sample point.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/curve-first-point.png" alt='Curve base' width='378' height='521'>
<figcaption>
Curves dialog with a value point (217) for my sampled pixel.
</figcaption>
</figure>

<p>Here is what my working area currently looks like:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/color-curves-matching/workspace-1.jpg" alt='GIMP workspace for sample point color matching' width='960' height='439'>
</figure>

<p>I have my curves dialog open, and an area around my sample point chosen so that the values will be visible in the dialog, my images with their associated sample points, and the sample points dialog showing me the values of those points.</p>
<p>The basic idea now is to adjust my RGB channels to get my original image sample point (#1) to match my target image sample point (#2).</p>
<p>Because I selected an area around my sample point with the curves dialog open, I will know roughly where those values need to be adjusted.  Let’s start with the <b style="color: #FF0000;">Red</b> channel.</p>
<p>First, set the <strong>Sample Points</strong> dialog back to <strong><i>Pixel</i></strong> to see the RGBA data for that pixel.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-rgb-match.png" alt='GIMP Sample point Red Green Blue matching' width='208' height='330'>
</figure>

<p>We can now see that to match the pixel colors we will need to make some adjustments to each channel.  Specifically,</p>
<ul>
<li>the <b style="color: #ff0000">Red</b> channel will have to come down a bit (218 → 216),</li>
<li>the <b style="color: #00ff00">Green</b> down some as well (188 → 178),</li>
<li>and the <b style="color: #0000ff">Blue</b> down much more (171 → 155).</li>
</ul>
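The required shifts are just the per-channel differences between the two sampled pixels. A trivial sketch, using the values above:

```python
# Sampled pixel values: point 1 (source image) and point 2 (target image).
source = {"red": 218, "green": 188, "blue": 171}
target = {"red": 216, "green": 178, "blue": 155}

# How far each channel's curve has to move at this value.
deltas = {channel: target[channel] - source[channel] for channel in source}
print(deltas)
```

Negative numbers mean the curve comes down at that point; positive numbers mean it goes up.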
<p>You may want to resize your <strong>Curves</strong> dialog window larger to be able to more finely control the curves.  If we look at the Red channel in my example, we would want to adjust the curve down slightly at the vertical line that shows us where our pixel values are:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-red.png" alt='Color Curve Adjustment Red' width='370' height='495'>
</figure>

<p>We can adjust the red channel curve along this vertical axis (marked x:217) until our pixel red value matches the target (216).</p>
<p>Then just change over to the green channel and do the same:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-green.png" alt='Color Curve Adjustment Green' width='370' height='495'>
</figure>

<p>Here we are adjusting the green curve vertically along the axis marked x:190 until our pixel green value matches the target (178).</p>
<p>Finally, follow the same procedure for the blue channel:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-blue.png" alt='Color Curve Adjustment Blue' width='370' height='495'>
</figure>

<p>As before, we adjust along the vertical axis x:173 until our blue channel matches the target (155).</p>
<p>At this point, our first sample point pixel should be the same color as from our target.</p>
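For the curious, the adjustment we just made by hand can be sketched outside of GIMP as well. The sketch below (Python with NumPy) anchors each channel’s curve at (0, 0) and (255, 255) and pulls it through one control point taken from the matching step; note it uses piecewise-linear interpolation for simplicity, where GIMP fits a smooth curve through its points:

```python
import numpy as np

def channel_curve(control_points):
    """Build a 256-entry lookup table from (input, output) control points.

    The endpoints (0, 0) and (255, 255) are anchored so the rest of the
    range bends gently toward the control points.  GIMP fits a smooth
    spline through its curve points; this sketch uses piecewise-linear
    interpolation for simplicity.
    """
    points = sorted([(0, 0)] + list(control_points) + [(255, 255)])
    xs, ys = zip(*points)
    lut = np.interp(np.arange(256), xs, ys)
    return np.clip(np.rint(lut), 0, 255).astype(np.uint8)

# One control point per channel, from the matching step above:
red_lut = channel_curve([(218, 216)])
green_lut = channel_curve([(188, 178)])
blue_lut = channel_curve([(171, 155)])

# Applying the curves to an image held as an (H, W, 3) uint8 array:
# image[..., 0] = red_lut[image[..., 0]]   # likewise for green and blue
```

The lookup-table view also makes it clear why the rest of the tonal range shifts along with the point you dragged: every input value between the anchors is remapped, not just the sampled pixel.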
<p>The important thing to take away from this exercise is to be watching your image as you are adjusting these channels to see what types of effects they produce.  Dropping the green channel should have seen a slight addition of magenta to your image, and dropping the blue channel should have shown you the addition of a yellow to balance it.</p>
<p>Watch your image as you make these changes.</p>
<p><em><strong>Don’t</strong> hit <em>OK</em> on your curves dialog yet!</em></p>
<p>You’ll want to repeat this procedure, but using some sample points that are darker than the previous ones.  Our first sample points had values of about 85%, so now let’s see if we can match pixels down below 50% as well.</p>
<p><em>Without</em> closing your curves dialog, you should be able to click and drag your sample points around still.  So I would set your <strong>Sample Points</strong> dialog to show you HSV values again, and now drag your first point around on your image until you find some skin that’s in a darker value, maybe around 40-45%.</p>
<p>Once you do, try to find a corresponding value in your target image (or something close at least).</p>
<p>I managed to find skin tones with values around 45% in both of my images:</p>
<div style='text-align: center; height: 366px;'>
<img style='display: inline; width: initial;' src="https://pixls.us/articles/color-curves-matching/sample-point-45.png" width='208' height='330' alt="Color Curve Skin Dark">
<img style='display: inline; width: initial;' src="https://pixls.us/articles/color-curves-matching/sample-point-45-rgb.png" width='208' height='330' alt="Color Curve Skin Dark RGB">
</div>

<p>In these darker tones, I can see that the adjustments I will have to make are:</p>
<ul>
<li><b style="color: #ff0000">Red</b> down a bit (116 → 114),</li>
<li><b style="color: #00ff00">Green</b> bumped up some (60 → 73),</li>
<li><b style="color: #0000ff">Blue</b> slightly down (55 → 53).</li>
</ul>
<p>With the curves dialog still active, I then click and drag on my original image until I am in the same area as my sample point again.  This gives me my vertical line showing me the value location in my curves dialog, just as before:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-red.png" alt='Dark tones red' width='370' height='495'>
<figcaption>
<b style="color: #FF0000">Red</b> down to 114.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-green.png" alt='Dark tones green' width='370' height='495'>
<figcaption>
<b style="color: #00FF00;">Green</b> up to 73.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-blue.png" alt='Dark tones blue' width='370' height='495'>
<figcaption>
<b style="color: #0000FF">Blue</b> down to 53.
</figcaption>
</figure>

<p>At this point you <i>should</i> have something similar to the tones of your target image.  Here is my image after these adjustments so far:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-final.jpg" data-swap-src='dot-original.jpg' width='550' height='469' alt='Results so far GIMP Matching'>
<figcaption>
Effects of the curves so far (click to compare to original).
</figcaption>
</figure>

<p>Once you’ve got things in a state that you like, it would be a good idea to save your progress.
At the top of the Curves dialog there is a <strong>“+”</strong> symbol.
This lets you add the current settings to your favorites so you can recall them later and continue working on them.</p>
<p><strong>However</strong>, your results might not quite look right at the moment.  Why not?</p>
<p>Well, the first problem is that <strong>Sample Points</strong> will only allow you to sample a single pixel value.  There’s a chance that the pixels you pick are not truly representative of the correct skin tones in that range (for instance you may have inadvertently clicked a pixel that represents the oil paint cracks in the image).  It would be nice if there were an option for Sample Points to allow an adjustable sample radius (if there is an option I haven’t found it yet).</p>
<p>The second issue is that similar value points might be very different colors overall.  Hopefully your sources will be nice for you to pick in areas that you know are relatively consistent and representative of the tones you want, but it’s not always a guarantee.</p>
<p>If the results are not quite what you want at the moment, you can do what I will sometimes do and go back to the beginning…</p>
<p>While still keeping the curves dialog open, you can pull your sample points to another location and match the target again.  Try choosing another sample point with a similar value as the first one.  This time, instead of adding new points to the curve as you make adjustments, just drag the existing points you previously placed.</p>
<h2 id="it-s-an-iterative-process">It’s an Iterative Process<a href="#it-s-an-iterative-process" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Depending on how interested you are in tweaking your resulting curve, you may find yourself going around a couple of times.  That’s ok.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/iterate.png" alt='Iterative flowchart' width='550' height='752'>
</figure>

<p>I would recommend keeping your curves to two control points at first.  You want your curves to be smooth across the range (any abrupt changes will do strange things to your final image).</p>
<p>If you are doing a couple of iterations, try modifying existing points on your curves instead of adding new ones.  <b style="font-size:1.3em;"><i>It may not be an exact match</i></b>, but it doesn’t have to be.  It only needs to look nice to your eyes.</p>
<p>There is no perfect solution for color matching between images, but we can produce pleasing curves that emulate the results we are looking for.</p>
<h2 id="in-conclusion">In Conclusion<a href="#in-conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I personally have found the process of doing this with different images to be quite instructive in how the curves will affect my image.
If you try this out and pay careful attention to what is happening while you do it, I’m hopeful you will come away with a similar appreciation of what these curves will do.</p>
<p>Most importantly, don’t be constrained by what you are targeting, but rather use it as a stepping off point for inspiration and experimentation for your own expression!</p>
<p>I’ll finish with a couple of other examples…</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/botticelli-final.jpg" data-swap-src='dot-original.jpg' alt='Dot Botticelli Birth of Venus' width="550" height="469" >
<figcaption>
<a href="http://en.wikipedia.org/wiki/Sandro_Botticelli">Sandro Botticelli</a> - <a href="http://en.wikipedia.org/wiki/The_Birth_of_Venus_(Botticelli"><em>The Birth of Venus</em></a>) (click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/stmichael-final.jpg" data-swap-src='dot-original.jpg' width="550" height="469" >
<figcaption>
<a href="http://www.googleartproject.com/collection/gemaldegalerie-staatliche-museen-zu-berlin/artwork/st-michael-fa-presto/320372/">Fa Presto - St. Michael</a> (click to compare original)
</figcaption>
</figure>

<p>And finally, as promised, here’s the video tutorial that steps through everything I’ve explained above:</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/rVfIuYV5Ghs" frameborder="0" allowfullscreen=""></iframe>
</div>
</div>

<p class='aside'>
From a request, I’ve packaged up some of the curves from this tutorial (Pearl Earring, St. Michael, the previous Orange/Teal Hell, and another I was playing with from a Norman Rockwell painting): 

<span style="font-size: 1.2rem;">
<a href="https://docs.google.com/open?id=0B21lPI7Ov4CVT1gyVlpvc3psWVU">Download the Curves (7zip .7z)</a>
</span>
</p>


  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[New Discuss Categories and Logging In]]></title>
            <link>https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/</guid>
            <pubDate>Thu, 30 Jul 2015 21:56:42 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/R0001640-carvac-full.jpg" /><br/>
                <h1>New Discuss Categories and Logging In</h1> 
                <h2>Software, Showcase, and Critiques. Oh My!</h2>  
<p>Hot on the heels of our <a href="https://pixls.us/blog/2015/07/welcome-g-mic/">last post</a> about welcoming <a href="http://gmic.eu">G’MIC</a> to the forums at <a href="https://discuss.pixls.us">discuss.pixls.us</a>, I thought I should speak briefly about some other additions I’ve recently made.</p>
<p>These were tough decisions for me to finally make.
I want to be careful not to get crazy with <em>over</em>-categorization.
At the same time, I <em>do</em> want to create a logical breakdown for people that is still intuitive.</p>
<!-- more -->
<p>Here is what the current category breakdown looks like for discuss:</p>
<ul>
<li><a href="https://discuss.pixls.us/c/pixls-us">PIXLS.US</a><br><small>The comment/posts from articles/blogposts here on the main site.</small></li>
<li><a href="https://discuss.pixls.us/c/processing">Processing</a><br><small>Processing and managing images after they’ve been captured.</small></li>
<li><a href="https://discuss.pixls.us/c/capturing">Capturing</a><br><small>Capturing an image and the ways we go about doing it.</small></li>
<li><a href="https://discuss.pixls.us/c/showcase"><strong>Showcase</strong></a>  </li>
<li><a href="https://discuss.pixls.us/c/critique"><strong>Critique</strong></a>  </li>
<li><a href="https://discuss.pixls.us/c/meta">Meta</a><br><small>Discussions related to the website or the forum itself.</small><ul>
<li><a href="https://discuss.pixls.us/c/meta/help">Help!</a><br><small>Help with the website or forums.</small></li>
</ul>
</li>
<li><a href="https://discuss.pixls.us/c/software">Software</a><br><small>Discussions about various software in general.</small><ul>
<li><a href="https://pixls.us//discuss.pixls.us/c/software/gmic">G’MIC</a><br><small>Topics all about G’MIC.</small></li>
</ul>
</li>
</ul>
<p>Along with the addition of the <a href="https://discuss.pixls.us/c/software">Software</a> category (and the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC subcategory</a>), I decided that the <a href="https://discuss.pixls.us/c/meta/help">Help!</a> category would make more sense under the <a href="https://discuss.pixls.us/c/meta">Meta</a> category.
That is, the Help! section is for website/forum help, which is more of a Meta topic (hence moving it).</p>
<h3 id="software"><a href="#software" class="header-link-alt"><a href="https://discuss.pixls.us/c/software">Software</a></a></h3>
<p>As we’ve already seen, there is now a <a href="https://discuss.pixls.us/c/software">Software</a> category for all discussions about the various software we use.
The first sub-category to this is, of course, the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC subcategory</a>.</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/projects2.jpg" alt="F/OSS Project Logos" />
</figure>

<p>If there is enough interest in it, I am open to creating more sub-categories as needed to support particular software projects (GIMP, darktable, RawTherapee, etc…).
I will wait until there is some interest before adding more categories here.</p>
<h3 id="showcase"><a href="#showcase" class="header-link-alt"><a href="https://discuss.pixls.us/c/showcase">Showcase</a></a></h3>
<p>This category had some interest from members and I agree that it’s a good idea.
It’s intended as a place for members to showcase the works they’re proud of and to hopefully serve as a nice example of what we’re capable of producing using F/OSS tools.</p>
<p>A couple of examples from the <em>Showcase</em> category so far:</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/R0001640-carvac.jpg" alt='Filmulator Output Example, by Carlo Vaccari'>
<figcaption>
<em>New Life</em>, <a href="https://discuss.pixls.us/t/new-life-how-to-get-great-colors-with-filmulator/304">Filmulator Output Sample</a>, by <a href="https://discuss.pixls.us/users/carvac/activity">CarVac</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/Mairi-Troisieme.jpg" alt='Mairi Troisieme, by Pat David'>
<figcaption>
<a href="https://discuss.pixls.us/t/mairi-troisieme/302">Mairi Troisième</a> by <a href="https://www.flickr.com/photos/patdavid">Pat David</a> (<a href='https://creativecommons.org/licenses/by-nc-sa/2.0/' class='cc'>cbna</a>)
</figcaption>
</figure>

<p>This category may later also be used to store submissions for a <a href="https://discuss.pixls.us/t/poll-main-site-frontpage-lede/244/7">rotating lede image</a> on the main page of the site.</p>
<h3 id="critique"><a href="#critique" class="header-link-alt"><a href="https://discuss.pixls.us/c/critique">Critique</a></a></h3>
<p>This is intended as a place for members to solicit advice and critiques on their works from others.
It took me a little work to come up with an initial take on the <a href="https://discuss.pixls.us/t/about-the-critique-category/309">overall description</a> for the category.</p>
<p>I can promise that I will do my best to give honest and constructive feedback to anyone who asks in this category.
I also promise to do my best to make sure that no post goes unanswered here (I know how beneficial feedback has been to me in the past, so it’s the least I can do to help others out in return).</p>
<h2 id="discuss-login-options"><a href="#discuss-login-options" class="header-link-alt">Discuss Login Options</a></h2>
<p>I also bit the bullet this week and <em>finally</em> caved and signed up for a Facebook account.
The only reason is that I needed a personal account to get an API key that allows people to log in using their FB account (with OAuth).</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/discuss-logins.png" alt='dicuss.pixls.us login options'>
<figcaption>
We can now use Google, Facebook, Twitter, and Yahoo! to Log In.
</figcaption>
</figure>


<p>On the other hand, we now accept <strong>four</strong> different methods of logging in automatically along with signing up for a normal account.
I have been trying to make it as frictionless as possible to join the conversation and hopefully this most recent addition (FB) will help in some small way.</p>
<p>Oh, and if you want to add me on Facebook, my <a href="https://www.facebook.com/profile.php?id=100009722205862">profile can be found here</a>.
I also took the time to create a page for the site here: <a href="https://www.facebook.com/PIXLSUS">PIXLS.US on Facebook</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Basic Color Curves]]></title>
            <link>https://pixls.us/articles/basic-color-curves/</link>
            <guid isPermaLink="true">https://pixls.us/articles/basic-color-curves/</guid>
            <pubDate>Mon, 27 Jul 2015 15:26:49 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/basic-color-curves/tranquil.jpg" /><br/>
                <h1>Basic Color Curves</h1> 
                <h2>An introduction and simple color grading/toning</h2>  
                <p>Color has this amazing ability to evoke emotional responses from us.
From the warm glow of a sunny summer afternoon to a cool refreshing early evening in fall.
We associate colors with certain moods, places, feelings, and memories (consciously or not).</p>
<p>Volumes have been written on color, and I am in no way even remotely qualified to speak on it.
So I won’t.</p>
<p>Instead, we are going to take a look at the use of the <strong>Curves</strong> tool in <a href="http://www.gimp.org">GIMP</a>.
Even though GIMP is used to demonstrate these ideas, the principles are generic to just about any RGB curve adjustments.</p>
<h2 id="your-pixels-and-you">Your Pixels and You<a href="#your-pixels-and-you" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First there’s something you need to consider if you haven’t before, and that’s what goes into representing a colored pixel on your screen.</p>
<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-full.jpg" alt="PIXLS.US House Zoom Example"/>
<figcaption>
Open up an image in GIMP.
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-1.jpg" alt="PIXLS.US House Zoom Example" />
<figcaption>
Now zoom in.
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-2.jpg" alt="PIXLS.US House Zoom Example" />
<figcaption>
Nope - don’t be shy now, zoom in more!
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-3.png" alt="PIXLS.US House Zoom Example" />
<figcaption>
Aaand there’s your pixel.
So let’s investigate what goes into making your pixel.
</figcaption>
</figure>

<p>Remember, each pixel is represented by a combination of 3 colors: <b style="color:red">Red</b>, <b style="color: green;">Green</b>, and <b style="color: blue;">Blue</b>.
In GIMP (currently at 8-bit), that means that each channel can have a value from <strong>0 to 255</strong>, and combining these three colors with varying levels in each channel will result in all the colors you can see in your image.</p>
<p>If all three channels have a value of 255 - then the resulting color will be pure white.
If all three channels have a value of 0 - then the resulting color will be pure black.</p>
<p>If all three channels have the same value, then you will get a shade of gray (128,128,128 would be a middle gray color for instance).</p>
<p>So now let’s see what goes into making up your pixel:</p>
<figure>
<img height="233" width="256"  src="https://pixls.us/articles/basic-color-curves/curves-your-pixel-info.png" alt="GIMP Color Picker Pixel View" />
<figcaption>
The RGB components that mix into your final <span style="color: #7ba3ce;">blue pixel</span>.
</figcaption>
</figure>

<p>As you can see, there is more blue than anything else (it is a blue-ish pixel after all), followed by green, then a dash of red.
If we were to change the values of each channel but kept the ratio between Red, Green, and Blue the same, then we would keep the same color and just lighten or darken the pixel by some amount.</p>
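<p>As a rough sketch (plain Python for illustration only, not anything GIMP actually runs), scaling all three channels by the same factor preserves the R:G:B ratio, and therefore the hue, while changing only the brightness. The pixel values here are made up for the example:</p>

```python
def scale_pixel(rgb, factor):
    """Scale each 8-bit channel by the same factor, clamping to 0-255.

    Because all three channels change proportionally, the R:G:B ratio
    (and so the hue) is preserved; only the brightness changes.
    """
    return tuple(min(255, round(c * factor)) for c in rgb)

# A blue-ish pixel: more blue than green, a dash of red.
pixel = (123, 163, 206)

scale_pixel(pixel, 1.2)  # lighter, same hue: (148, 196, 247)
scale_pixel(pixel, 0.5)  # darker, same hue
```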
<h2 id="curves-value">Curves: Value<a href="#curves-value" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So let’s leave your pixel alone for the time being, and actually have a look at the <strong>Curves</strong> dialog.
I’ll be using this wonderful image by <a href="http://www.flickr.com/photos/qsimple/">Eric</a> from <a href="http://www.flickr.com">Flickr</a>.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-original.jpg" width="500" height="750" alt="Hollow Moon by Eric qsimple Flickr" />
<figcaption>
<a href="http://www.flickr.com/photos/qsimple/5636649561/">Hollow Moon</a> by <a href="http://www.flickr.com/photos/qsimple/">qsimple/Eric</a> on <a href="http://www.flickr.com">Flickr</a>. (<a class='cc' href="http://creativecommons.org/licenses/by-nc-sa/2.0/">cbna</a>)
</figcaption>
</figure>

<p>Opening up my <strong>Curves</strong> dialog shows me the following:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-original.png" width="378" height="524" alt="GIMP Base Curves Dialog" />
</figure>

<p>We can see that I start off with the curve for the <strong>Value</strong> of the pixels.
I could also use the drop-down for <strong>“Channel”</strong> to change to the red, green, or blue curves if I wanted to.
For now let’s look at <strong>Value</strong>, though.</p>
<p>In the main area of the dialog I am presented with a linear curve, behind which I will see a histogram of the value data for the entire image (showing the amount of each value across my image).
Notice a spike in the high values on the right, and a small gap at the brightest values.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-original-IO.png" width="378" height="524" alt="GIMP Base Curves Dialog Input Output" />
</figure>

<p>What we can do right now is to adjust the values of each pixel in the image using this curve.
The best way to visualize it is to remember that the bottom range from black to white represents the <span style="color: #0000ff"><strong><i>current</i></strong> value of the pixels</span>, and the left range is the <span style="color: #ff6f00">value to be mapped to</span>.</p>
<p>So to show an example of how this curve will affect your image, suppose I wanted to remap all the midtone values in the image to make them lighter.
I can do this by clicking on the curve near the midtones, and dragging the curve higher in the Y direction:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-midtones.png" width="378" height="524" alt="GIMP Base Curves Dialog Push Midtones" />
</figure>

<p>What this curve does is take the values around the midtones, and push their values to be much lighter than they were.
In this case, values around <span style="color: #0000ff">128</span> were re-mapped to now be closer to <span style="color: #ff6f00">192</span>.</p>
<p>Because the curve type is set to <strong>Smooth</strong>, there will be a gradual transition for all the tones surrounding my point to be pulled in the same direction (this makes for a smoother fall-off as opposed to an abrupt change at one value).
Because there is only a single point in the curve right now, this means that all values will be pulled higher.</p>
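<p>The remapping idea can be sketched in a few lines of Python (an illustration only; GIMP fits a smooth spline through the control points, while this sketch interpolates linearly between them):</p>

```python
def apply_curve(value, points):
    """Map a 0-255 input value through a curve given as control points.

    `points` is a sorted list of (input, output) pairs; values between
    control points are linearly interpolated.
    """
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value  # outside the curve's range: leave unchanged

# One point dragged up at the midtones: 128 is remapped to 192.
midtone_boost = [(0, 0), (128, 192), (255, 255)]

apply_curve(128, midtone_boost)  # -> 192
apply_curve(64, midtone_boost)   # darks get pulled up too: -> 96
```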
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-mid-boostl.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' width="500" height="750" alt='Hollow Moon Example Pushed Midtones'>
<figcaption>
The results of pushing the midtones of the value curve higher (click to compare to original).
</figcaption>
</figure>

<p>Care should be taken when fiddling with these curves to not blow things out or destroy detail, of course.
I only push the curves here to illustrate what they do.</p>
<p>A very common curve adjustment you may hear about is to apply a slight “S” curve to your values.
The effect of this curve would be to darken the dark tones, and to lighten the light tones - in effect increasing global contrast on your image.
For instance, if I click on another point in the curves, and adjust the points to form a shape like so:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-slight-s.png" width="378" height="524" alt="GIMP Base Curves Dialog S shaped curve" />
<figcaption>
A slight “S” curve
</figcaption>
</figure>

<p>This will now cause dark values to become even darker, while the light values get a small boost.
The curve still passes through the midpoint, so middle tones will stay closer to what they were.</p>
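<p>For a closed-form way to play with the same idea (again just a sketch, not GIMP’s actual spline), the classic “smoothstep” function traces exactly this kind of gentle “S”: it darkens values below the midpoint, lightens values above it, and leaves the midpoint in place:</p>

```python
def s_curve(value):
    """A gentle 'S' tone curve on the 0-255 range (smoothstep).

    Values below 128 come out darker, values above 128 come out
    lighter, and 128 maps back to (almost exactly) 128: a global
    contrast boost with the midtones held in place.
    """
    t = value / 255
    return round(255 * (3 * t ** 2 - 2 * t ** 3))

s_curve(64)   # darks pushed down: -> 40
s_curve(128)  # midpoint held: -> 128
s_curve(192)  # lights pushed up: -> 216
```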
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-slight-s.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' width="500" height="750" alt='Hollow Moon Example S curve applied'>
<figcaption>
Slight “S” curve increases global contrast (click for original).
</figcaption>
</figure>

<p>In general, I find it easiest to visualize in terms of which regions of the curve will affect different tones in your image.
Here is a quick way to visualize it (that is true for value as well as RGB curves):</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darksmidslights.png" width="378" height="524" alt="GIMP Base Curves darks mids lights zones"  />
</figure>

<p>If there is one thing you take away from reading this, let it be the image above.</p>
<h2 id="curves-span-style-color-red-co-span-span-style-color-green-lo-span-span-style-color-blue-rs-span-">Curves: <span style="color:red;">Co</span><span style="color:green;">lo</span><span style="color:blue;">rs</span><a href="#curves-span-style-color-red-co-span-span-style-color-green-lo-span-span-style-color-blue-rs-span-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So how does this apply to other channels?  Let’s have a look.</p>
<p>The exact same theory applies in the RGB channels as it did with values.
The relative positions of the darks, midtones, and lights are still the same in the curve dialog.
The primary difference now is that you can control the contribution of color in specific tonal regions of your image.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-value-rgb-select.png" width="378" height="523"/>
<figcaption>
Value, Red, Green, Blue channel picker.
</figcaption>
</figure>

<p>You choose which channel you want to adjust from the <strong>“Channel”</strong> drop-down.</p>
<p>To begin demonstrating what happens here it helps to have an idea of generally what effect you would like to apply to your image.
This is often the hardest part of adjusting the color tones if you don’t have a clear idea to start with.</p>
<p>For example, perhaps we wanted to “cool” down the shadows of our image.
“Cool” shadows are commonly seen during the day in shadows out of direct sunlight.
The light that does fall in shadows is mostly reflected light from a blue-ish sky, so the shadows will trend slightly more blue.  </p>
<p>To try this, let’s adjust the <b style="color: blue;">Blue</b> channel to be a little more prominent in the darker tones of our image, but to get back to normal around the midtones and lighter.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-blue-boost.png"  width="378" height="524"/>
<figcaption>
Boosting blues in darker tones
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-blue-boost.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Pushing up blues in darker tones (click for original).
</figcaption>
</figure>

<p>Now, here’s a question:  If I wanted to “cool” the darker tones with more blue, what if I wanted to “warm” the lighter tones by adding a little yellow?</p>
<p>Well, there’s no “Yellow” curve to modify, so how to approach that?  Have a look at this HSV color wheel below:</p>
<figure>
<img height="400" width="400"  src="https://pixls.us/articles/basic-color-curves/Color_circle_%2528hue-sat%2529_trans.png" />
</figure>

<p>The thing to look out for here is that opposite your blue tones on this wheel, you’ll find yellow.
In fact, for each of the Red, Green, and Blue channels, the opposite colors on the color wheel will show you what an absence of that color will do to your image.
So remember:</p>
<p class='aside'>
<span><span style="color: red;">Red</span> &rarr; <span style="color: cyan;">Cyan</span></span>
<span><span style="color: green;">Green</span> &rarr; <span style="color: magenta;">Magenta</span></span>
<span><span style="color: blue;">Blue</span> &rarr; <span style="color: yellow;">Yellow</span></span>
</p>
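<p>In RGB terms these pairs fall straight out of the numbers: inverting each 8-bit channel gives the color on the opposite side of the wheel (a quick illustrative snippet using the additive RGB model the curves operate in):</p>

```python
def complement(rgb):
    """Invert each 8-bit channel; the result is the opposite color."""
    return tuple(255 - c for c in rgb)

complement((255, 0, 0))  # red   -> (0, 255, 255), cyan
complement((0, 255, 0))  # green -> (255, 0, 255), magenta
complement((0, 0, 255))  # blue  -> (255, 255, 0), yellow
```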

<p>What this means to you while manipulating curves is that if you drag a curve for blue up, you will boost the blue in that region of your image.
If instead you drag the curve for blue down, you will be <strong><i>removing</i></strong> blues (or boosting the <strong>Yellows</strong> in that region of your image).</p>
<p>So to boost the blues in the dark tones, but increase the yellow in the lighter tones, you could create a sort of “reverse” S-curve in the blue channel:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-blue-boost-add-yellow.png"  width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-blue-boost-add-yellow.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Boost blues in darks, boost yellow in high tones (click for original).
</figcaption>
</figure>
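<p>Per pixel, that “reverse S” on the blue channel can be sketched like so (a piecewise-linear illustration with made-up control values; GIMP’s smooth curve would ease the transition):</p>

```python
def split_tone_blue(b):
    """'Reverse S' on the blue channel, pivoting at 128: raise blue in
    the darks (cool shadows) and lower it in the lights (the absence
    of blue reads as yellow, warming the highlights).
    """
    if b <= 128:
        # interpolate (0, 32) -> (128, 128): shadows gain blue
        return round(32 + (b / 128) * (128 - 32))
    # interpolate (128, 128) -> (255, 224): highlights lose blue
    return round(128 + ((b - 128) / 127) * (224 - 128))

split_tone_blue(32)   # a dark tone gains blue
split_tone_blue(224)  # a light tone loses blue (shifts toward yellow)
```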

<p>In the green channel for instance, you can begin to introduce more magenta into the tones by decreasing the curve.
So dropping the green curve in the dark tones, and letting it settle back to normal towards the high tones will produce results like this:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-green-suppress.png"  width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-green-suppresst.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Suppressing the <b style="color: green;">green</b> channel in darks/mids adds a bit of <b style="color: magenta;">magenta</b>
<br>(click for original).
</figcaption>
</figure>

<p>In isolation, these curves are fun to play with, but I think that perhaps walking through some actual examples of color toning/grading would help to illustrate what I’m talking about here.
I’ll choose a couple of common toning examples to show what happens when you begin mixing all three channels up.</p>
<h2 id="color-toning-grading">Color Toning/Grading<a href="#color-toning-grading" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="-b-style-color-orange-orange-b-and-b-style-color-teal-teal-b-hell"><b style="color: orange;">Orange</b> and <b style="color: teal;">Teal</b> Hell<a href="#-b-style-color-orange-orange-b-and-b-style-color-teal-teal-b-hell" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I use the (<em>cinema film</em>) term <em>color grading</em> here because the first adjustment we will have a look at to illustrate curves is a horrible Hollywood trend that is best described by <a href="http://theabyssgazes.blogspot.com/2010/03/teal-and-orange-hollywood-please-stop.html" target="_blank">Todd Miro on his blog</a>.</p>
<p><em>Grading</em> is a term for color toning on film, and Todd’s post is a funny look at the prevalence of orange and teal in modern film palettes.
So it’s worth a look just to see how silly this is (and hopefully to raise awareness of the obnoxiousness of this practice).</p>
<p>The general thought here is that Caucasian skin tones trend towards orange, and if you have a look at a complementary color on the color wheel, you’ll notice that directly opposite orange is a teal color.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/Kuler_orange_teal.jpg" width='600' height='322'/>
<figcaption>
Screenshot from <a href="https://color.adobe.com">Kuler</a> borrowed from Todd.
</figcaption>
</figure>

<p class='aside'>
If you don’t already know about it, Adobe has a fantastic online tool for color visualization and palette creation called <a href="http://kuler.adobe.com"><del>Kuler</del></a> <a href="https://color.adobe.com"><strong>Adobe Color CC</strong></a>.
It lets you work on colors based on some classic rules, or even generate a color palette from images.
Well worth a visit and a fantastic bookmark for fiddling with color.
</p>

<p>So a quick look at the desired effect would be to keep/boost the skin tones into a sort of orange-y pinkish color, and to push the darker tones into a teal/cyan combination.
(Colorists on films tend to use a Lift, Gamma, Gain model, but we’ll just try this out with our curves here).</p>
<p class='aside'>
Quick disclaimer - I am purposefully exaggerating these modifications to illustrate what they do.
Like most things, moderation and restraint will go a long way towards not causing your viewers’ eyeballs to bleed.
<em>Remember - <strong>light touch!</strong></em>
</p>

<p>So I know that I want to see my skin tones head into an orange-ish color.
In my image the skin tones are in the upper mids/low highs range of values, so I will start around there.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-red-high.png" width="378" height="524"/>
</figure>

<p>What I’ve done is put a point around the low midtones to anchor the curve closer to normal for those tones.
This lets me fiddle with the red channel and to isolate it roughly to the mid and high tones only.
The skin tones in this image in the red channel will fall toward the upper end of the mids, so I’ve boosted the reds there.
Things may look a little weird at first:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-red-highs.jpg"  width="500" height="750"/>
</figure>

<p>If you look back at the color wheel again, you’ll notice that between red and green there is yellow, and if you move a bit closer to red, the yellow turns to more of an orange.
What this means is that if we add some more green to those same tones, the overall colors will start to shift towards an orange.</p>
<p>So we can switch to the green channel now, put a point in the lower midtones again to hold things around normal, and slightly boost the green.
Don’t boost it all the way to the reds, but about 2/3<sup>rds</sup> or so to taste.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-green-high.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-green-highs.jpg" width="500" height="750"/>
</figure>

<p>This puts a little more red/orange-y color into the tones around the skin.
You could further adjust this by perhaps including a bit more yellow as well.
To do this, I would again put an anchor point in the low mid tones on the blue channel, then slightly drop the blue curve in the upper tones to introduce a bit of yellow.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-blue-high.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-blue-highs.jpg" width="500" height="750"/>
</figure>

<p>Remember, we’re experimenting here so feel free to try things out as we move along.
I may consider the upper tones to be finished at the moment, and now I would want to look at introducing a more blue/teal color into the darker tones.</p>
<p>I can start by boosting a bit of blues in the dark tones.
I’m going to use the anchor point I already created, and just push things up a bit.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-blue-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-blue-lows.jpg" width="500" height="750"/>
</figure>

<p>Now I want to make the darker tones a bit more teal in color.
Remember the color wheel - <b style="color: teal;">teal</b> is the absence of red - so we will drop down the red channel in the lower tones as well.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-red-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-red-lows.jpg" width="500" height="750"/>
</figure>

<p>And finally to push a very slight magenta into the dark tones as well, I’ll push down the green channel a bit.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-green-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-green-lows.jpg" width="500" height="750"/>
</figure>

<p>If I wanted to go a step further, I could also put an anchor point up close to the highest values to keep the brightest parts of the image closer to a white instead of carrying over a color cast from our previous operations.  </p>
<p>If your previous operations also darkened the image a bit, you could also now revisit the <strong>Value</strong> channel, and make modifications there as well.
In my case I bumped the midtones of the image just a bit to brighten things up slightly.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-value-final.png" width="378" height="524"/>
</figure>

<p>Finally, we end up with something like this.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-value-final.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width="500" height="750">
<figcaption>
After fooling around a bit - disgusting, isn’t it?
<br>(click for original).
</figcaption>
</figure>

<p>I am exaggerating things here to illustrate a point.
Please don’t do this to your photos. :)</p>
<p class='aside'>
If you’d like to download the curves file of the results we reached above, get it here:<br><a href="https://docs.google.com/open?id=0B21lPI7Ov4CVdmJnOXpkQjN4aWc">Orange Teal Hell Color Curves</a>
</p>


<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Remember, think about what the color curves represent in your image to help you achieve your final results.
Begin looking at the different tonalities in your image and how you’d like them to appear as part of your final vision.</p>
<p>For even more fun - realize that the colors in your images can help to evoke emotional responses in the viewer, and adjust things accordingly.
I’ll leave it as an exercise for the reader to determine some of the associations between colors and different emotions.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Welcome G'MIC]]></title>
            <link>https://pixls.us/blog/2015/07/welcome-g-mic/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/welcome-g-mic/</guid>
            <pubDate>Wed, 22 Jul 2015 21:49:52 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/welcome-g-mic/gmic-logo.jpg" /><br/>
                <h1>Welcome G'MIC</h1> 
                <h2>Moving G'MIC to a modern forum</h2>  
                <p>Anyone who’s followed me for a while likely knows that I’m friends with <a href="http://gmic.eu">G’MIC</a> (GREYC’s Magic for Image Computing) creator <a href="https://plus.google.com/100527311518040751439/about">David Tschumperlé</a>.
I was also able to release all of my film <a href="http://blog.patdavid.net/2013/08/film-emulation-presets-in-gmic-gimp.html">emulation</a> <a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html">presets</a> on G’MIC for everyone to use with David’s help and we collaborated on a bunch of different fun processing filters for photographers in G’MIC (split details/wavelet decompose, <a href="http://blog.patdavid.net/2013/02/calvin-hollywood-freaky-details-in-gimp.html">freaky details</a>, <a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html">film emulation</a>, <a href="http://blog.patdavid.net/2013/12/mean-averaged-music-videos-g.html">mean/median averaging</a>, and more).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2015/07/welcome-g-mic/David-and-the-Beauty-Dish.jpg" alt='David Tschumperle beauty dish GMIC'>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/13898506065/in/dateposted-public/">David</a>, by Me (at <a href="http://libregraphicsmeeting.org/2014/">LGM2014</a>)
</figcaption>
</figure>

<p>It’s also David that helped me by writing a G’MIC script to <a href="http://blog.patdavid.net/2013/12/mean-averaged-music-videos-g.html">mean average images</a> for me when I started making my amalgamations 
(Thus moving me away from my previous method of using <a href="http://imagemagick.org/script/index.php">ImageMagick</a>):</p>
<figure>
<a data-flickr-embed="true" href="https://www.flickr.com/photos/patdavid/17247263555/in/dateposted-public/" title="Mad Max Fury Road Trailer 2 - Amalgamation">
<img src="https://pixls.us/blog/2015/07/welcome-g-mic/max-max-fury-road.jpg" width="640" height="360" alt="Mad Max Fury Road Trailer 2 - Amalgamation"></a>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/17247263555/in/dateposted-public/">Mad Max Fury Road Trailer 2 - Amalgamation</a>
</figcaption>
</figure>

<p>So when the forums here on <a href="https://discuss.pixls.us">discuss.pixls.us</a> were finally up and running, it only made sense to offer G’MIC its own part of the forums.
They had previously been using a combination of <a href="https://www.flickr.com/groups/gmic">Flickr groups</a> and <a href="http://gimpchat.com/viewforum.php?f=28">gimpchat.com</a>.
These are great forums, but they were a little cumbersome to use.</p>
<p><strong>You can find the new <a href="https://discuss.pixls.us/t/release-of-gmic-1-6-5-1/284">G’MIC category here</a>.</strong>
Stop in and say hello!</p>
<p>I’ll also be porting over the tutorials and articles on work we’ve collaborated on soon (freaky details, film emulation).</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Congratulations]]></title>
            <link>https://pixls.us/blog/2015/07/congratulations/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/congratulations/</guid>
            <pubDate>Wed, 22 Jul 2015 18:40:41 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/congratulations/riley-brandt-course-2x.png" /><br/>
                <h1>Congratulations</h1> 
                <h2>To the winners of the Open Source Photography Course Giveaway</h2>  
                <p>I compiled the list of entries this afternoon across the various social networks and let <a href="http://random.org">random.org</a> pick an integer in the domain of all of the entries…</p>
<p>So a big congratulations goes out to:</p>
<p><a href="http://dennyweinmann.com/"><strong> Denny Weinmann </strong></a> (<small><a href="https://www.facebook.com/dennyweinmannphotography">Facebook</a>, <a href="https://twitter.com/dennyweinmann">@dennyweinmann</a>, <a href="https://plus.google.com/+DennyWeinmann/posts">Google+</a> </small>)<br>and<br><a href="http://www.nhaines.com/"><strong> Nathan Haines </strong></a> (<small><a href="https://twitter.com/nhaines">@nhaines</a>, <a href="https://plus.google.com/+thenathanhaines">Google+</a></small>)</p>
<p>I’ll be contacting you shortly (assuming you don’t read this announcement here first…)!
I will need a valid email address from you both in order to send your download links.
You can reach me at <a href="mailto:pixlsus@pixls.us">pixlsus@pixls.us</a>.</p>
<!-- more -->
<p>Thank you to everyone who shared the post to help raise awareness!
The lessons are still on sale until August 1<sup>st</sup> for $35<small>USD</small> over on <a href="http://www.rileybrandt.com/lessons/">Riley’s site</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[The Open Source Photography Course]]></title>
            <link>https://pixls.us/blog/2015/07/the-open-source-photography-course/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/the-open-source-photography-course/</guid>
            <pubDate>Wed, 15 Jul 2015 17:12:35 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/the-open-source-photography-course/riley-brandt-course-2x.png" /><br/>
                <h1>The Open Source Photography Course</h1> 
                <h2>A chance to win a free copy</h2>  
                <p>Photographer <a href="http://www.rileybrandt.com/">Riley Brandt</a> recently released his <a href="http://www.rileybrandt.com/lessons/"><em>Open Source Photography Course</em></a>.
I managed to get a little bit of his time to answer some questions for us about his photography and the course itself.
You can read the full interview <a href="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/">right here</a>:</p>
<p><a href="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/"><strong>A Q&amp;A with Photographer Riley Brandt</strong></a></p>
<p>As an added bonus just for <a href="https://pixls.us/">PIXLS.US</a> readers, he has gifted us a nice surprise!</p>
<h2 id="did-someone-say-free-stuff-"><a href="#did-someone-say-free-stuff-" class="header-link-alt">Did Someone Say Free Stuff?</a></h2>
<p>Riley went above and beyond for us.
He has graciously offered us an opportunity for 2 readers to win a <em>free</em> copy of the course (one in an open format like WebM/VP8, and another in a popular format like MP4/H.264)!</p>
<!-- more -->
<p>For a chance to win, I’m asking you to share a link to this post on:</p>
<ul>
<li><a href="https://twitter.com/intent/tweet?hashtags=PIXLSGiveAway&amp;url=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Twitter</a> </li>
<li><a href="https://plus.google.com/share?url=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Google+</a> </li>
<li><a href="https://www.facebook.com/sharer/sharer.php?u=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Facebook</a> </li>
</ul>
<p>with the hashtag <strong>#PIXLSGiveAway</strong> (you can click those links to share to those networks).
Each social network counts as one entry, so you can triple your chances by posting across all three.</p>
<p>Next week (<del>Monday, 2015-07-20</del> Wednesday, 2015-07-22 to give folks a full week), I will search those networks for all the posts and compile a list of people, from which I’ll pick the winners (using random.org).
Make sure you get that hashtag right! :)</p>
<h2 id="some-previews"><a href="#some-previews" class="header-link-alt">Some Previews</a></h2>
<p>Riley has released three nice preview videos to give a taste of what’s in the courses:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/TGwuMYsuAuY?list=PL33t7emXCBHkg6a6Ao_ULh7fsgWXg5ua9" frameborder="0" allowfullscreen></iframe>
</div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Q&A with Photographer Riley Brandt]]></title>
            <link>https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/</guid>
            <pubDate>Wed, 15 Jul 2015 13:47:30 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-lede.jpg" /><br/>
                <h1>A Q&A with Photographer Riley Brandt</h1> 
                <h2>On creating a F/OSS photography course</h2>  
                <p><a href="http://www.rileybrandt.com/">Riley Brandt</a> is a full-time photographer (<em>and sometimes videographer</em>) at the <a href="http://www.ucalgary.ca/">University of Calgary</a>.
He previously worked for the weekly (Calgary) local magazine <a href="http://www.ffwdweekly.com/">Fast Forward Weekly (FFWD)</a> as well as <a href="http://www.sophiamodels.com/">Sophia Models International</a>,
and his work has been published in many places from the <em>Wall Street Journal</em> to <em>Der Spiegel</em> (and <a href="http://www.rileybrandt.com/about/">more</a>).</p>
<figure>
<a href='http://www.rileybrandt.com/'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/rb-logo.png" alt='Riley Brandt Logo' width='244' height='46'>
</a>
</figure>

<p>He recently announced the availability of <a href="http://www.rileybrandt.com/lessons/"><em>The Open Source Photography Course</em></a>.
It’s a full photographic workflow course using only free, open source software that he has spent the last <em>ten months</em> putting together.</p>
<p class='aside'>
Riley has graciously offered two free copies for us to give away!<br>For a chance to win, see <a href="https://pixls.us/blog/2015/07/the-open-source-photography-course/">this blog post</a>.
</p>

<figure class='big-vid'>
<a href="http://www.rileybrandt.com/lessons/">
    <img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-course.png" alt='Riley Brandt Photography Course Banner' width='940' height='345'>
</a>
</figure>

<p>I was lucky enough to get a few minutes of Riley’s time to ask him a few questions about his photography and this course.</p>
<h2 id="a-chat-with-riley-brandt">A Chat with Riley Brandt<a href="#a-chat-with-riley-brandt" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="tell-us-a-bit-about-yourself-">Tell us a bit about yourself!<a href="#tell-us-a-bit-about-yourself-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hello, my name is Riley Brandt and I am a professional photographer at the University of Calgary. </p>
<p>At work, I get to spend my days running around a university campus taking pictures of everything from a rooster with prosthetic legs made on a 3D printer, to wild students dressed in costumes jumping into freezing cold water for charity. It can be pretty awesome.</p>
<p>Outside of work, I am a supporter of Linux and open source software. I am also a bit of a film geek.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_10.jpg" alt='Univ. Calgary Prosthetic Rooster' width='640' height='419' title='Gentlemen, we can rebuild him.  We have the technology.'>
<figcaption>
<small>[<em>ed. note: He’s not kidding - That’s a rooster with prosthetic legs…</em>]</small>
</figcaption>
</figure>


<h3 id="i-see-you-were-trained-in-photojournalism-is-this-still-your-primary-photographic-focus-">I see you were trained in photojournalism.  Is this still your primary photographic focus?<a href="#i-see-you-were-trained-in-photojournalism-is-this-still-your-primary-photographic-focus-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Though I definitely enjoy portraits, fashion and lifestyle photography, my day to day work as a photographer at a university is very similar to my photojournalism days.</p>
<p>I have to work with whatever poor lighting conditions I am given, and I have to turn around those photos quickly to meet deadlines.</p>
<p>However, I recently became an uncle for the first time to a baby boy, so I imagine I will be expanding into newborn and toddler photography very soon :)</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_07.jpg" alt='Riley Brandt Environment Portrait Sample' width='960' height='592'>
<figcaption>
<a href="http://www.rileybrandt.com/project/enviro-portraits/">Environmental Portrait</a> by Riley Brandt 
</figcaption>
</figure>


<h3 id="how-long-have-you-been-a-photographer-">How long have you been a photographer?<a href="#how-long-have-you-been-a-photographer-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Photography started as a hobby for me when I was living in the Czech Republic in the late 90s and early 2000s. My first SLR camera was the classic Canon AE-1 (which I still have).</p>
<p>I didn’t start to work as a full time professional photographer until I graduated from the Journalism program at SAIT Polytechnic in 2008.</p>
<h3 id="what-type-of-photography-do-you-enjoy-doing-the-most-">What type of photography do you enjoy doing the most?<a href="#what-type-of-photography-do-you-enjoy-doing-the-most-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In a nutshell, I enjoy photographing people. This includes both portraits and candid moments at events.</p>
<p>I love meeting someone with an interesting story, and then trying to capture some of their personality in an image.</p>
<p>At events, I’ve witnessed everything from the joy of someone meeting an astronaut they idolize, to the anguish of a parent at graduation collecting a degree instead of their child who was killed. Capturing genuine emotion at events is challenging, and overwhelming at times, but is also very gratifying.</p>
<p>It would be hard for me to choose between candids or portraits. I enjoy them both.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Project_Portraits_Update_0003.jpg" alt='Riley Brandt Portraits' width='940' height='715'>
<figcaption>
<a href="http://www.rileybrandt.com/project/portraits/">Portraits</a> by Riley Brandt
</figcaption>
</figure>


<h3 id="how-would-you-describe-your-personal-style-">How would you describe your personal style?<a href="#how-would-you-describe-your-personal-style-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve been told several times that my images are very “clean”, which I think means I limit the image to only a few key elements and remove any major distractions.</p>
<h3 id="if-you-had-to-choose-your-favorite-image-from-your-portfolio-what-would-it-be-">If you had to choose your favorite image from your portfolio, what would it be?<a href="#if-you-had-to-choose-your-favorite-image-from-your-portfolio-what-would-it-be-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I don’t have a favorite image in my collection.</p>
<p>However, at the end of a work week, I usually have at least one image that I am really happy with. A photo that I will look at again when I get home from work. An image that I look forward to seeing published. Those are my favorites.</p>
<h3 id="has-free-software-always-been-the-foundation-of-your-workflow-">Has free-software always been the foundation of your workflow?<a href="#has-free-software-always-been-the-foundation-of-your-workflow-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Definitely not. I started with Adobe software, and still use it (and other non-free software) at work. Though hopefully that will change.</p>
<p>I switched to free software for all my personal work at home, because all my computers at home run Linux.</p>
<p>I also dislike a lot of Adobe’s actions as a company, e.g. horrible security and switching to a “cloud” version of their software, which is really just a DRM scheme.</p>
<p>There are many significant reasons not to run non-free software, but what really motivated my switch initially was simply that Adobe never released a Linux version of their software.</p>
<h3 id="what-is-your-normal-os-platform-">What is your normal OS/platform?<a href="#what-is-your-normal-os-platform-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I guess I am transitioning from Ubuntu to Fedora (both GNU/Linux). My main desktop is still running Ubuntu Gnome 14.04. But my laptop is running Fedora 21.</p>
<p>Ubuntu doesn’t offer an up to date version of the Gnome desktop environment. It also doesn’t use the Gnome Software Centre or many Gnome apps. Fedora does. So my desktop will be running Fedora in the near future as well.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_02.jpg" alt='Riley Brandt Summer Days' width='960' height='470' >
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_03.jpg" alt='Riley Brandt Summer Days' width='960' height='598' >
<figcaption>
<a href="http://www.rileybrandt.com/project/lifestyle/">Lifestyle</a> by Riley Brandt
</figcaption>
</figure>



<h3 id="what-drove-you-to-consider-creating-a-free-software-centric-course-">What drove you to consider creating a free-software centric course?<a href="#what-drove-you-to-consider-creating-a-free-software-centric-course-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Because it was so difficult for me to transition from Adobe software to free software, I wanted to provide an easier option for others trying to do the same thing.</p>
<p>Instead of spending weeks or months searching through all the different manuals, tutorials and websites, someone can spend a weekend watching my course and be up and running quickly.</p>
<p>Also, it was just a great project to work on. I got to combine two of my passions, Linux and photography.</p>
<h3 id="is-the-course-the-same-as-your-own-approach-">Is the course the same as your own approach?<a href="#is-the-course-the-same-as-your-own-approach-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Yes, it’s the same way I work. </p>
<p>I start with fundamentals like monitor calibration and file management, then move on to basics like correcting exposure, color, contrast and noise. After that, I cover less frequently used tools.</p>
<h3 id="the-course-focuses-heavily-on-darktable-for-raw-processing-have-you-also-tried-any-of-the-other-options-such-as-rawtherapee-">The course focuses heavily on <a href="http://www.darktable.org">darktable</a> for RAW processing - have you also tried any of the other options such as RawTherapee?<a href="#the-course-focuses-heavily-on-darktable-for-raw-processing-have-you-also-tried-any-of-the-other-options-such-as-rawtherapee-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I originally tried <a href="https://www.digikam.org/">digiKam</a> because it looked like it had most of the features I needed. However, KDE and I are like oil and water. The user interface felt impenetrable to me, so I moved on.</p>
<p>I also tried <a href="http://rawtherapee.com/">RawTherapee</a>, but only briefly. I got some bad results in the beginning, but that was probably due to my lack of familiarity with the software. I might give it another go one day.</p>
<p>Once <a href="http://www.darktable.org">darktable</a> added advanced selective editing with masks, I was all in. I like the photo management element as well.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_09.jpg" alt='Riley Brandt Portraits' width='960' height='470'>
</figure>

<h3 id="have-you-considered-expanding-your-course-offerings-to-include-other-aspects-of-photography-">Have you considered expanding your (course) offerings to include other aspects of photography?<a href="#have-you-considered-expanding-your-course-offerings-to-include-other-aspects-of-photography-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Umm.. not just yet. I first need to rest :)</p>
<h3 id="if-you-were-to-expand-the-current-course-what-would-you-like-to-focus-on-next-">If you were to expand the current course, what would you like to focus on next?<a href="#if-you-were-to-expand-the-current-course-what-would-you-like-to-focus-on-next-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It’s hard to say right now. Possibly a more in depth look at GIMP. Or a series where viewers watch me edit photos from start to finish.</p>
<h3 id="it-took-10-months-to-create-this-course-will-you-be-taking-a-break-or-starting-right-away-on-the-next-installment-">It took 10 months to create this course, will you be taking a break or starting right away on the next installment? :)<a href="#it-took-10-months-to-create-this-course-will-you-be-taking-a-break-or-starting-right-away-on-the-next-installment-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A break for sure :) I spent most of my weekends preparing and recording a lesson for the past year. So yes, first a break.</p>
<h3 id="some-parting-words-">Some parting words?<a href="#some-parting-words-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p> I would like to recommend the <a href="http://gimpmagazine.org/courses/">Desktop Publishing course</a> created by <a href="http://gimpmagazine.org/">GIMP Magazine</a> editor Steve Czajka for anyone who is trying to transition from Adobe InDesign to Scribus.</p>
<p>I would also love to see someone create a similar course for <a href="https://inkscape.org">Inkscape</a>.</p>
<h2 id="the-course">The Course<a href="#the-course" class="header-link"><i class="fa fa-link"></i></a></h2>
<figure> 
<a href="http://www.rileybrandt.com/lessons/">
    <img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-course.png" alt='Riley Brandt Photography Course Banner' width='640' height='235'>
</a>
</figure>

<p><a href="http://www.rileybrandt.com/lessons/"><em>The Open Source Photography Course</em></a> is available for order now at <a href="http://www.rileybrandt.com/">Riley’s website</a>.
The course is:</p>
<ul>
<li>Over 5 <em>hours</em> of video material</li>
<li>DRM free</li>
<li>10% of net profits donated back to FOSS projects</li>
<li>Available in an open format (WebM/VP8) or a popular one (H.264), all 1080p</li>
<li>$50 USD</li>
</ul>
<p>He has also released some preview videos of the course:</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/TGwuMYsuAuY?list=PL33t7emXCBHkg6a6Ao_ULh7fsgWXg5ua9" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>His site provides a nice course outline to get a feel for what is covered:</p>
<h2 id="course-outline">Course Outline<a href="#course-outline" class="header-link"><i class="fa fa-link"></i></a></h2>
<h4 id="chapter-1-getting-started">Chapter 1. Getting Started<a href="#chapter-1-getting-started" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Course Introduction<br><small>Welcome to The Open Source Photography Course</small></li>
<li>Calibrate Your Monitor<br><small>Start your photography workflow the right way by calibrating your monitor with dispcalGUI</small></li>
<li>File Management<br><small>Make archiving and searching for photos easier by using naming conventions and folder organization</small></li>
<li>Download and Rename<br><small>Use Rapid Photo Downloader to rename all your photos during the download process</small></li>
</ol>
<h4 id="chapter-2-raw-editing-in-darktable">Chapter 2. Raw Editing in darktable<a href="#chapter-2-raw-editing-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Introduction to darktable, Part One<br><small>Get to know darktable’s user interface</small></li>
<li>Introduction to darktable, Part Two<br><small>Take a quick look at the slideshow view in darktable</small></li>
<li>Import and Tag<br><small>Import photos into darktable and tag them with keywords, copyright information and descriptions</small></li>
<li>Rating Images<br><small>Learn an efficient way to cull, rate, add color labels and filter photos in lighttable</small></li>
<li>Darkroom Overview<br><small>Learn the basics of the darkroom view including basic module adjustments and creating favorites</small></li>
<li>Correcting Exposure, Part 1<br><small>Correct exposure with the base curves, levels, exposure, and curves modules</small></li>
<li>Correcting Exposure, Part 2<br><small>See several examples of combining modules to correct an image’s exposure</small></li>
<li>Correct White Balance<br><small>Use presets and make manual changes in the white balance module to color correct your images</small></li>
<li>Crop and Rotate<br><small>Navigate through the many crop and rotate options including guides and automatic cropping</small></li>
<li>Highlights and Shadows<br><small>Recover details lost in the shadows and highlights of your photos</small></li>
<li>Adding Contrast<br><small>Make your images stand out by adding contrast with the levels, tone curve and contrast modules</small></li>
<li>Sharpening<br><small>Fix those soft images with the sharpen, equalizer and local contrast modules</small></li>
<li>Clarity<br><small>Sharpen up your midtones by utilizing the local contrast and equalizer modules</small></li>
<li>Lens Correction<br><small>Learn how to fix lens distortion, vignetting and chromatic aberrations</small></li>
<li>Noise Reduction<br><small>Learn the fastest, easiest and best way to clean up grainy images taken in low light</small></li>
<li>Masks, Part one<br><small>Discover the possibilities of selective editing with the shape, gradient and path tools</small></li>
<li>Masks, Part Two<br><small>Take your knowledge of masks further in this lesson about parametric masks</small></li>
<li>Color Zones<br><small>Learn how to limit your adjustments to a specific color’s hue, saturation or brightness</small></li>
<li>Spot Removal<br><small>Save time by making simple corrections in darktable, instead of opening up GIMP</small></li>
<li>Snapshots<br><small>Quickly compare different points in your editing history with snapshots</small></li>
<li>Presets and Styles<br><small>Save your favorite adjustments for later with presets and styles</small></li>
<li>Batch Editing<br><small>Save time by editing one image, then quickly applying those same edits to hundreds of images</small></li>
<li>Searching for Images<br><small>Learn how to sort and search through a large collection of images in Lighttable</small></li>
<li>Adding Effects<br><small>Get creative in the effects group with vignetting, framing, split toning and more</small></li>
<li>Exporting Photos<br><small>Learn how to rename, resize and convert your RAW photos to JPEG, TIFF and other formats</small></li>
</ol>
<h4 id="chapter-3-touch-ups-in-gimp">Chapter 3. Touch Ups in GIMP<a href="#chapter-3-touch-ups-in-gimp" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Introduction to GIMP<br><small>Install GIMP, then get to know your way around the user interface</small></li>
<li>Setting Up GIMP, Part 1<br><small>Customize the user interface, adjust a few tools and install color profiles</small></li>
<li>Setting Up GIMP, Part 2<br><small>Set keyboard shortcuts that mimic Photoshop’s and install a couple of plugins</small></li>
<li>Touch Ups<br><small>Use the heal tool and the clone tool to clean up your photos</small></li>
<li>Layer Masks<br><small>Learn how to make selective edits and non-destructive edits using layer masks</small></li>
<li>Removing Distractions<br><small>Combine layers, a helpful plugin and layer masks to remove distractions from your photos</small></li>
<li>Preparing Images for the Web<br><small>Reduce file size while retaining quality before you upload your photos to the web</small></li>
<li>Getting Help and Finding the Community<br><small>Find out which websites, mailing lists and forums to go to for help and friendly discussions</small></li>
</ol>
<hr>
<div class='center'><small>All the images in this post &copy; <a href="http://www.rileybrandt.com/">Riley Brandt</a>.</small></div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[darktable on Windows]]></title>
            <link>https://pixls.us/blog/2015/07/darktable-on-windows/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/darktable-on-windows/</guid>
            <pubDate>Mon, 13 Jul 2015 21:54:23 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/darktable-on-windows/three-windows.jpg" /><br/>
                <h1>darktable on Windows</h1> 
                <h2>Why don't you provide a Windows build?</h2>  
                <p>Due to the heated debate lately, a short foreword:</p>
<p>We do not want to harass, insult or criticize anyone due to his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted as such, we want to apologize for that – and once more give the full explanation of our lack of Windows support.</p>
<h2 id="the-darktable-project"><a href="#the-darktable-project" class="header-link-alt">The darktable project</a></h2>
<p>darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don’t even have a legal entity at the moment. In other words: None of the developers has ever seen (and most likely will ever see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.</p>
<!-- more -->
<h2 id="the-development-environment"><a href="#the-development-environment" class="header-link-alt">The development environment</a></h2>
<p>The team is quite mixed; some have a professional background in computing, others don’t. But all love photography and like exploring the full information recorded by the camera themselves. Most new features are added to darktable when an expert in, say, GPU computing steps up and is willing to provide and maintain code for the new feature.</p>
<p>Up till now there is one technical thing that unites all developers: none of them uses Windows as their operating system. Some are using Mac OS X, Solaris, etc., but most run some Linux distribution. New operating systems kept being added to our list as people willing to support their favorite system joined the team.</p>
<p>Also (since it stands out a bit as a “commercial operating system”) Mac OS X support arrived in exactly this way. Someone (parafin!) popped up and said: “I like this software, and I want to run darktable on my Mac.” He compiled it on OS X and since then has done testing and package building for the Mac OS X operating system. And this is not an easy job. Initially there were just snapshot builds from git – no official releases, not even release candidates – but the first complaints about the quality had already arrived. Finally, a lot of time was invested in working around specific peculiarities of this operating system to make it work and to provide builds for every new version of darktable released.</p>
<p>This nicely shows one of the consequences of the project’s organizational (non-) structure and development approach: at first, every developer cares about darktable running on his personal system.</p>
<h2 id="code-contributions-and-feature-requests"><a href="#code-contributions-and-feature-requests" class="header-link-alt">Code contributions and feature requests</a></h2>
<p>Usually feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think and discuss about it – and if someone likes the idea and has time to code it, it might eventually come – if the team agrees on including the feature.</p>
<p>But life is not a picnic. You probably wouldn’t pass by your neighbor and demand that he repair your broken car – just because you know he loves to tinker with his vintage car collection at home.<br> The same applies here. No one feels comfortable if requests are suddenly made that would require a non-negligible amount of work – with no return for the person carrying out the work, neither moneywise nor intellectually.</p>
<p>This is the feeling created every time someone just passes by, leaving only the statement: “Why isn’t there a Windows build (yet)?”.</p>
<h2 id="providing-a-windows-build-for-darktable"><a href="#providing-a-windows-build-for-darktable" class="header-link-alt">Providing a Windows build for darktable</a></h2>
<p>The answer has always been the same: because no one stepped up to do it. None of the passers-by requesting a Windows build actually took the initiative, downloaded the source code and started compiling. No one approached the development team with actual build errors or problems encountered during a compilation using MinGW or the like on Windows. The only things ever aired were requests for ready-made binaries.</p>
<p>As stated earlier here, the development of darktable is totally about one’s own initiative. This project (as many others) is not about ordering things and getting them delivered. It’s about starting things, participating and contributing. It’s about trying things out yourself. It’s FLOSS.</p>
<p>One argument that pops up from time to time is: “darktable’s user base would grow immensely with a Windows build!”. This might be true. But – what’s the benefit of this? Why should a developer care how many people are using the software if his or her sole motivation was producing a nice piece of software to process raw files with?</p>
<p>On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn’t the pleasant sort; hunting rare bugs that occur with some uncommon camera’s files on some other operating system is not exactly what people love to spend their Saturday afternoons on.</p>
<p>This argumentation would totally make sense if darktable were sold, the developers paid, and the overall profit depended on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever) – and it is up to them to make an economic decision on whether it makes sense to invest the time and manpower or not.</p>
<p>But this is different.</p>
<p>Not building darktable on Windows is not a technical issue after all. There certainly are problems of portability, and code changes would be necessary, but in the end it would probably work out. The real problem is (as has been pointed out by the darktable development team many times in the past) the maintenance of the build as well as all the dependencies that the package requires.</p>
<p>The darktable team is trying to deliver high-quality, reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable and obtain exactly the same result – and that on many platforms, be it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.</p>
<p>Spawning another build on a platform that not a single developer is using would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a half-way working, barely tested build for Windows would harm the project’s reputation and diminish the confidence in the software treating your photographs carefully.</p>
<p>We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.</p>
<h2 id="references"><a href="#references" class="header-link-alt">References</a></h2>
<p><a href="http://www.darktable.org/2011/07/that-other-os/">That other OS</a></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[PhotoFlow Blended Panorama Tutorial]]></title>
            <link>https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/</guid>
            <pubDate>Tue, 07 Jul 2015 14:29:45 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/pano_final2.jpg" /><br/>
                <h1>PhotoFlow Blended Panorama Tutorial</h1> 
                <h2>Andrea Ferrero has been busy!</h2>  
<p>After quite a bit of back and forth, I am happy to announce that the latest tutorial is up: <a href="https://pixls.us/articles/a-blended-panorama-with-photoflow/">A Blended Panorama with PhotoFlow</a>!
This contribution comes from <a href="http://photoflowblog.blogspot.fr/">Andrea Ferrero</a>, the creator of a new project: <a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a>.</p>
<p>In it, he walks through a process of stitching a panorama together using Hugin and blending multiple exposure options through masking in PhotoFlow (see lede image).
The results are quite nice and natural looking!</p>
<!-- more -->
<h2 id="local-contrast-enhancement-gaussian-vs-bilateral"><a href="#local-contrast-enhancement-gaussian-vs-bilateral" class="header-link-alt">Local Contrast Enhancement: Gaussian vs. Bilateral</a></h2>
<p>Andrea also runs through a quick video comparison of doing LCE using both a Gaussian and Bilateral blur, in case you ever wanted to see them compared side-by-side:</p>
<div class='fluid-vid'>
<iframe width="640" height="480" src="https://www.youtube-nocookie.com/embed/Uj4cmXlezVc?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>He <a href="https://discuss.pixls.us/t/local-contrast-enhancement-gaussian-vs-bilateral-blurring/241">started a topic post</a> about it in the forums as well.</p>
<h2 id="thoughts-on-the-main-page"><a href="#thoughts-on-the-main-page" class="header-link-alt">Thoughts on the Main Page</a></h2>
<p>Over on <a href="https://discuss.pixls.us">discuss</a> I started a thread to <a href="https://discuss.pixls.us/t/main-site-frontpage-lede/244/4">talk about some possible changes</a> to the main page of the site.</p>
<p>Specifically I’m talking about the background lede image at the very top of the main page:</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/optimized/1X/ef803873985000ea678778d99362ad0666dd7c49_1_690x437.png'>
</figure>

<p>I had originally created that image as a placeholder in <a href="https://blender.org">Blender</a>.
The site is intended as a photography-centric site, so the natural thought was: why not use photos as a background instead?</p>
<p>The thought is to rotate through images as provided by the community.
I’ve also mocked up two versions of using an image as a background.</p>
<p><a href="https://pixls.us/lede-image.html"><strong>Simple replacement of the image</strong></a> with photos from the community.
This is the most popular in the poll on the forum at the moment.
The image will be rotated amongst images provided by community members.
I just need to make sure that the text shown is legible over whatever the image may be…</p>
<p><a href="https://pixls.us/lede-image-full.html"><strong>Full viewport splash</strong></a> version, where the image fills the viewport.
This is not very popular from the feedback I received (thank you akk, ankh, muks, DrSlony, LebedevRI, and others on irc!). 
I personally like the idea but I can understand why others may not like it.</p>
<p>If anyone wants to chime in (or vote in the poll) then head <a href="https://discuss.pixls.us/t/main-site-frontpage-lede/244/4">over to the forum topic</a> and let us know your thoughts!</p>
<p>Also, a big <strong>thank you</strong> to <a href="http://londonlight.org/zp/">Morgan Hardwood</a> for allowing us to use that image as a background example.
If you want a nice way to support F/OSS development, it just so happens that Morgan is a developer for <a href="http://www.rawtherapee.com">RawTherapee</a>, and a print of that image is available for purchase.
<a href="mailto:photography2015@londonlight.org">Contact him</a> for details.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Blended Panorama with PhotoFlow]]></title>
            <link>https://pixls.us/articles/a-blended-panorama-with-photoflow/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-blended-panorama-with-photoflow/</guid>
            <pubDate>Fri, 26 Jun 2015 16:31:39 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_lede.jpg" /><br/>
                <h1>A Blended Panorama with PhotoFlow</h1> 
                <h2>Creating panoramas with Hugin and PhotoFlow</h2>  
                <p>The goal of this tutorial is to show how to create a sort-of-HDR panoramic image using only Free and Open Source tools.
To explain my workflow I will use the image below as an example.</p>
<p>This panorama was obtained from the combination of six views, each consisting of three bracketed shots at -1EV, 0EV and +1EV exposure.
The three exposures are stitched together with the <a href="http://hugin.sourceforge.net/">Hugin</a> suite, and then exposure-blended with <a href="http://enblend.sourceforge.net/">enfuse</a>.
The <a href="https://github.com/aferrero2707/PhotoFlow">PhotoFlow RAW editor</a> is used to prepare the initial images and to finalize the processing of the assembled panorama.
The final result of the post-processing is below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_final2.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="960" height="457"> 
<figcaption>
Final result of the panorama editing (click to compare to simple +1EV exposure) 
</figcaption>
</figure>

<p>In this case I have used the brightest image for the foreground, the darkest one for the sky and clouds, and an exposure-fused one for a seamless transition between the two.</p>
<p>The rest of the post will show how to get there…</p>
<p>Before we continue, let me advise you that I’m not a pro, and that the tips and “recommendations” that I’ll be giving in this post are mostly derived from trial-and-error and common sense.
Feel free to correct/add/suggest anything… <strong>we are all here to learn</strong>! </p>
<h2 id="taking-the-shots">Taking the shots<a href="#taking-the-shots" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Shooting a panorama requires a bit of preparation and planning to make sure that one can get the best out of Hugin when stitching the shots together. Here is my personal “checklist”:</p>
<ul>
<li><strong>Manual Focus</strong> - set the camera to manual focus, so that the focus plane is the same for all shots</li>
<li><strong>Overlap Shots</strong> - make sure that each frame has sufficient overlap with the previous one (something between 1/2 and 1/3 of the total area), so that hugin can find enough control points to align the images and determine the lens correction parameters</li>
<li><strong>Follow A Straight Line</strong> - when taking the shots, try to follow as much as possible a straight line (keeping for example the horizon at the same height in your viewfinder); if you have a tripod, use it!</li>
<li><strong>Frame Appropriately</strong> - to maximize the angle of view, frame vertically for a horizontal panorama (and vice-versa for a vertical one)</li>
<li><strong>Leave Some Room</strong> - frame the shots a bit wider than needed, to avoid bad surprises when cropping the stitched panorama</li>
<li><strong>Fixed Exposure</strong> - take all shots with a fixed exposure (manual or locked) to avoid luminance variations that might not be fully compensated by hugin</li>
<li><strong>Bracket if Needed</strong> - if you shoot during a sunny day, the brightness might vary significantly across the whole panorama; in this case, take three or more bracketed exposures for each view (we will see later how to blend them in the post-processing)</li>
</ul>
<h2 id="processing-the-raw-files">Processing the RAW files<a href="#processing-the-raw-files" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you plan to create the panorama starting from the in-camera Jpeg images, you can safely skip this section. On the other hand, if you are shooting RAW you will need to process and prepare all the input images for Hugin. In this case it is important to make sure that the RAW processing parameters are exactly the same for all the shots. It is best to adjust the parameters on one reference image, and then batch-process the rest of the images using those settings.</p>
<h3 id="using-photoflow">Using PhotoFlow<a href="#using-photoflow" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Loading and processing a RAW file is rather easy:</p>
<ol>
<li><p>Click the “Open” button and choose the appropriate RAW file from your hard disk; at this point the image preview area will show a grey and rather dark image</p>
</li>
<li><p>Add a “RAW developer” layer; a configuration dialog will show up which lets you access and modify all the typical RAW processing parameters (white balance, exposure, color conversion, etc… see screenshots below).</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_wb2.png" width="380" height="409">
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_exposure.png" width="380" height="243" > 
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_demo.png" width="380" height="243" > 
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_output.png" width="380" height="243" > 
</figure>

<p>More details on the RAW processing in PhotoFlow can be found in <a href="http://photoflowblog.blogspot.fr/2014/09/tutorial-how-to-process-raw-image-in.html">this tutorial</a>.</p>
<p>Once the result looks good, the RAW processing parameters need to be saved into a preset. This can be done following a couple of simple steps:</p>
<ol>
<li><p>Select the “RAW developer” layer and click on the “Save” button below the layers list widget (at the bottom-right of the PhotoFlow window)</p>
</li>
<li><p>A file chooser dialog will pop up, where one has to choose an appropriate file name and location for the preset and then click “Save”;<br><strong>the preset file name must have a “.pfp” extension</strong></p>
</li>
</ol>
<p>The saved preset then needs to be applied to all the RAW files in the set. Under Linux, PhotoFlow comes with a handy script that automates the process. The script is called <em>pfconv</em> and can be found <a href="https://github.com/aferrero2707/PhotoFlow/blob/master/tools/pfconv">here</a>. It is a wrapper around the <em>pfbatch</em> and <em>exiftool</em> commands, and is used to process and convert a bunch of files to TIFF format. Save the script in one of the folders included in your <code>PATH</code> environment variable (for example <code>/usr/local/bin</code>) and make it executable:</p>
<pre><code>sudo chmod a+x /usr/local/bin/pfconv
</code></pre><p>Processing all RAW files of a given folder is quite easy. Assuming that the RAW processing preset is stored in the same folder under the name <code>raw_params.pfp</code>, run these commands in your preferred terminal application:</p>
<pre><code>cd panorama_dir
pfconv -p raw_params.pfp *.NEF
</code></pre><p>Of course, you have to change <code>panorama_dir</code> to your actual folder and the <code>.NEF</code> extension to the one of your RAW files.</p>
<p>Now go for a cup of coffee, and be patient… a panorama with three or five bracketed shots for each view can easily have more than 50 files, and the processing can take half an hour or more. Once the processing has completed, there will be one TIFF file for each RAW image, and the fun with Hugin can start!</p>
<h2 id="assembling-the-shots">Assembling the shots<a href="#assembling-the-shots" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Hugin is a powerful and free software suite for stitching multiple shots into a seamless panorama, and more. Under Linux, Hugin can usually be installed through the package manager of your distribution. In the case of Ubuntu-based distros it can be installed with:</p>
<pre><code>sudo apt-get install hugin
</code></pre><p>If you are running Hugin for the first time, I suggest switching the interface type to <strong>Advanced</strong> in order to have full control over the available parameters. </p>
<p>The first steps have to be done in the <em>Photos</em> tab:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_1.png" width="667" height="500"> </p>
<ol>
<li><p>Click on <em>Add images</em> and load all the tiff files included in your panorama. Hugin should automatically determine the lens focal length and the exposure values from the EXIF data embedded in the tiff files. </p>
</li>
<li><p>Click on <em>Create control points</em> to let hugin determine the anchor points that will be used to align the images and to determine the lens correction parameters so that all shots overlap perfectly. If the scene contains a large amount of clouds that have likely moved during the shooting, you can try setting the feature matching algorithm to <em>cpfind+celeste</em> to automatically exclude non-reliable control points in the clouds.</p>
</li>
<li><p>Set the geometric parameters to <em>Positions and Barrel Distortion</em> and hit the <em>Calculate</em> button.</p>
</li>
<li><p>Set the photometric parameters to <em>High dynamic range, fixed exposure</em> (since we are going to stitch bracketed shots that have been taken with fixed exposures), and hit the <em>Calculate</em> button again.</p>
</li>
</ol>
<p>At this point we can have a first look at the assembled panorama. Hugin provides an OpenGL-based previewer that can be opened by clicking on the <em>GL</em> icon in the top toolbar (marked with the arrow in the above screenshot). This will open a window like this:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_2.png" width="689" height="417"> </p>
<p>If the shots have been taken handheld and are not perfectly aligned, the panorama will probably look a bit “wavy” like in my example. This can be easily fixed by clicking on the <em>Straighten</em> button (at the top of the <em>Move/Drag</em> tab). Next, the image can be centered in the preview area with the <em>Center</em> and <em>Fit</em> buttons.</p>
<p>If the horizon is still not straight, you can further correct it by dragging the center of the image up or down:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_3.png" width="690" height="417"> </p>
<p>At this point, one can switch to the <em>Projection</em> tab and play with the different options. I usually find the <em>Cylindrical</em> projection better than the <em>Equirectangular</em> that is proposed by default (the vertical dimension is less “compressed”). For architectural panoramas that are not too wide, the <em>Rectilinear</em> projection can be a good option since vertical lines are kept straight.</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_4.png" width="690" height="398"> </p>
<p>If the projection type is changed, one has to click once more on the <em>Center</em> and <em>Fit</em> buttons.</p>
<p>Finally, you can switch to the <em>Crop</em> tab and click on the <em>HDR Autocrop</em> button to determine the limits of the area containing only valid pixels.</p>
<p>We are now done with the preview window; it can be closed and we can go back to the main window, in the <em>Stitcher</em> tab. Here we have to set the options to produce the output images the way we want. The idea is to blend each bracketed exposure into a separate panorama, and then use <strong>enfuse</strong> to create the final exposure-blended version. The intermediate panoramas, which will be saved along with the enfuse output, are already aligned with respect to each other and can be combined using different types of masks (luminosity, gradients, freehand, etc…).</p>
<p>The <em>Stitcher</em> tab has to be configured as in the image below, selecting <em>Exposure fused from any arrangement</em> and <em>Blended layers of similar exposure, without exposure correction</em>. I usually set the output format to <em>TIFF</em> to avoid compression artifacts.</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_5.png" width="592" height="500"> </p>
<p>The final act starts by clicking on the <em>Stitch!</em> button. The input images will be distorted, corrected for the lens vignetting and blended into seamless panoramas. The whole process is likely to take quite long, so it is probably a good opportunity for taking a pause…</p>
<p>At the end of the processing, a few new images should appear in the output directory: one with a “_blended_fused.tif” suffix containing the output of the final enfuse step, and a few with an “_exposure_????.tif” suffix that contain the intermediate panoramas for each exposure value.</p>
<h2 id="blending-the-exposures">Blending the exposures<a href="#blending-the-exposures" class="header-link"><i class="fa fa-link"></i></a></h2>
<blockquote>
<p><em>Very often, photo editing is all about getting <strong>what your eyes have seen</strong> out of <strong>what your camera has captured</strong>.</em> </p>
</blockquote>
<p>The image that will be edited through this tutorial is no exception: the human visual system can “compensate” for large luminosity variations and can “record” scenes with a wider dynamic range than your camera sensor. In the following I will attempt to restore such large dynamics by combining under- and over-exposed shots together, in a way that does not produce unpleasant halos or artifacts. Nevertheless, I have intentionally pushed the edit a bit “over the top” in order to better show how far one can go with such a technique. </p>
<p>This second part introduces a certain number of quite general editing ideas, mixed with details specific to their realization in PhotoFlow. Most of what is described here can be reproduced in GIMP with little extra effort, but without the ease of non-destructive editing.</p>
<p>The steps that I followed to go from one to the other can be more or less outlined like that:</p>
<ol>
<li><p>take the foreground from the +1EV version and the clouds from the -1EV version; use the exposure-blended Hugin output to improve the transition between the two exposures</p>
</li>
<li><p>apply an S-shaped tonal curve to increase the overall brightness and add contrast. </p>
</li>
<li><p>apply a combination of the <em>a</em> and <em>b</em> channels of the CIE-Lab colorspace in <strong>overlay</strong> blend mode to give more “pop” to the green and yellow regions in the foreground</p>
</li>
</ol>
<p>The image below shows side-by-side three of the output images produced with Hugin at the end of the first part. The left part contains the brightest panorama, obtained by blending the shots taken at +1EV. The right part contains the darkest version, obtained from the shots taken at -1EV. Finally, the central part shows the result of running the <strong>enfuse</strong> program to combine the -1EV, 0EV and +1EV panoramas. </p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_exp_comp.jpg" width="640" height="299">
<figcaption> Comparison between the +1EV exposure (left), the enfuse output (center) and the -1EV exposure (right) 
</figcaption> </figure>




<h3 id="exposure-blending-in-general">Exposure blending in general<a href="#exposure-blending-in-general" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In scenes that exhibit strong brightness variations, one often needs to combine different exposures in order to compress the dynamic range so that the overall contrast can be further tweaked without the risk of losing details in the shadows or highlights.</p>
<p>In this case, the name of the game is “seamless blending”, i.e. combining the exposures in a way that looks natural, without visible transitions or halos.
In our specific case, the easiest thing would be to simply combine the +1EV and -1EV images through some smooth transition, like in the example below.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_+1EV_-1EV_blend.jpg" width="925" height="433" style="width: initial;"> 
<figcaption>
Simple blending of the +1EV and -1EV exposures 
</figcaption>
</figure>

<p>The result is not too bad, however it is very difficult to avoid some brightening of the bottom part of the clouds (or alternatively some darkening of the hills), something that will most likely look artificial even if the effect is subtle (our brain will recognize that something is wrong, even if one cannot clearly explain the reason…). We need something to “bridge” the two images, so that the transition looks more natural. </p>
<p>At this point it is good to recall that the last step performed by Hugin was to call the <strong>enfuse</strong> program to blend the three bracketed exposures. The enfuse output sits somewhere between the -1EV and +1EV versions, but a side-by-side comparison with the 0EV image reveals the subtle and sophisticated work done by the program: the foreground hill is brighter and the clouds are darker than in the 0EV version. And even more importantly, this job is done without triggering any alarm in your brain! Hence, the enfuse output is a perfect candidate to improve the transition between the hill and the sky.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_enfuse.jpg" data-swap-src="pano_0EV.jpg" alt="Final result" width="960" height="449"> 
<figcaption> Enfuse output (click to see 0EV version) 
</figcaption> </figure>




<h3 id="exposure-blending-in-photoflow">Exposure blending in PhotoFlow<a href="#exposure-blending-in-photoflow" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It is time to put everything together.
First of all, we should open <strong>PhotoFlow</strong> and load the +1EV image.
Next we need to add the enfuse output on top of it: for that you first need to add a new layer (<strong>1</strong>) and choose the <em>Open image</em> tool from the dialog that will open up (<strong>2</strong>)(see below).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_add_layer_edit.png" width="960" height="578"> 
<figcaption> Inserting an image from disk as a layer
</figcaption> </figure>

<p>After clicking the “OK” button, a new layer will be added and the corresponding configuration dialog will be shown. There you can choose the name of the file to be added; in this case, choose the one ending with “_blended_fused.tif” among those created by Hugin:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_open_image_edit.png" width="469" height="235"> 
<figcaption> “Open image” tool dialog
</figcaption> </figure>



<h4 id="layer-masks-theory-a-bit-and-practice-a-lot-">Layer masks: theory (a bit) and practice (a lot)<a href="#layer-masks-theory-a-bit-and-practice-a-lot-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>For the moment, the new layer completely replaces the background image. This is not the desired result: instead, we want to keep the hills from the background layer and only take the clouds from the “_blended_fused.tif” version. In other words, we need a <strong>layer mask</strong>.</p>
<p>To access the mask associated to the “enfuse” layer, double-click on the small gradient icon next to the name of the layer itself. This will open a new tab with an initially empty stack, where we can start adding layers to generate the desired mask.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_before_blend_edit.png" width="960" height="581"> 
<figcaption>
How to access the grayscale mask associated to a layer
</figcaption>
</figure>

<p>In PhotoFlow, masks are edited the same way as the rest of the image: through a stack of layers that can be associated to most of the available tools. In this specific case, we are going to use a combination of gradients and curves to create a smooth transition that follows the shape of the edge between the hills and the clouds. The technique is explained in detail in <a href="https://www.youtube.com/watch?v=kapppq-PbTk">this screencast</a>.</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="960" height="540" src="https://www.youtube.com/embed/kapppq-PbTk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>


<p>To avoid the boring and lengthy procedure of creating all the necessary layers, you can download  <a href="http://aferrero2707.github.io/PhotoFlow/data/presets/gradient_modulation.pfp">this preset file</a> and load it as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_mask_initial.png" width="960" height="456"> 
</figure>

<p>The mask is initially a simple vertical linear gradient. At the bottom (where the mask is black) the associated layer is completely transparent and therefore hidden, while at the top (where the mask is white) the layer is completely opaque and therefore replaces anything below it. Everywhere in between, the layer has a degree of transparency equal to the shade of gray in the mask.</p>
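<p>In other words, the mask acts as a per-pixel opacity. Here is a tiny Python sketch of that rule (illustrative only, not PhotoFlow code):</p>

```python
# Sketch of how a grayscale layer mask blends two layers; all values
# are normalized to the [0, 1] range.
def blend_with_mask(bottom, top, mask):
    """mask = 0.0 hides the top layer, 1.0 shows it fully."""
    return [b + (t - b) * m for b, t, m in zip(bottom, top, mask)]

# A 5-pixel vertical strip with a linear gradient mask:
bottom = [0.2] * 5                    # e.g. the +1EV panorama
top    = [0.8] * 5                    # e.g. the enfuse output
mask   = [0.0, 0.25, 0.5, 0.75, 1.0]  # black at the bottom, white at the top
print(blend_with_mask(bottom, top, mask))
```

<p>Intermediate grays produce the partial transparency described above; pure black and pure white reduce to the bottom and top layers respectively.</p>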
<p>In order to show the mask, activate the “show active layer” radio button below the preview area, and then select the layer that has to be visualized. In the example above, I am showing the output of the topmost layer in the mask, the one called “transition”. Double-clicking on the name of the “transition” layer opens the corresponding configuration dialog, where the parameters of the layer (a <a href="http://photoflowblog.blogspot.fr/2014/09/tutorial-using-curves-tool-in-photoflow.html"><strong>curves</strong> adjustment</a> in this case) can be modified. The curve is initially a simple diagonal: output values exactly match input ones.</p>
<p>If the rightmost point in the curve is moved to the left, and the leftmost to the right, it is possible to modify the vertical gradient and reduce the size of the transition between pure black and pure white, as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_transition_example.jpg" width="960" height="581"> 
</figure>

<p>We are getting closer to our goal of revealing the hills from the background layer, by making the corresponding portion of the mask purely black. However, the transition we have obtained so far is straight, while the contour of the hills has a quite complex curvy shape… this is where the second <strong>curves</strong> adjustment, associated to the “modulation” layer, comes into play.</p>
<p>As one can see from the screenshot above, between the bottom gradient and the “transition” curve there is a group of three layers: a <strong>horizontal</strong> gradient, a modulation curve and an <strong>invert</strong> operation. Moreover, the group itself is combined with the bottom vertical gradient in <a href="http://docs.gimp.org/en/gimp-concepts-layer-modes.html"><strong>grain merge</strong></a> blending mode.</p>
<p>Double-clicking on the “modulation” layer reveals a tone curve which is initially flat: output values are always 50% independently of the input. Since the output of this “modulation” curve is combined with the bottom gradient in <strong>grain merge</strong> mode, nothing happens for the moment. However, something interesting happens when a new point is added and dragged in the curve: the shape of the mask matches exactly the curve, like in the example below.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_modulation_example.jpg" width="960" height="581"> 
</figure>
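<p>The grain merge combination described above can be sketched numerically. I am assuming the GIMP definition of grain merge (result = base + layer − 128 in 8-bit terms), applied here to normalized values in Python:</p>

```python
# Grain merge as defined in GIMP (result = base + layer - 128 in 8-bit
# terms), on values normalized to [0, 1] and clamped.
def grain_merge(base, layer):
    return [min(1.0, max(0.0, b + l - 0.5)) for b, l in zip(base, layer)]

gradient = [0.0, 0.25, 0.5, 0.75, 1.0]  # the bottom vertical gradient
flat_50  = [0.5] * 5                    # untouched modulation curve: 50% everywhere
assert grain_merge(gradient, flat_50) == gradient  # a flat 50% curve is neutral

pushed = [0.5, 0.5, 0.7, 0.5, 0.5]      # one point of the curve dragged up
print(grain_merge(gradient, pushed))    # the mask bulges where the curve does
```

<p>This is why the flat 50% modulation curve leaves the mask untouched, while any point dragged away from 50% shifts the mask locally by the same amount.</p>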




<h3 id="the-sky-hills-transition">The sky/hills transition<a href="#the-sky-hills-transition" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The technique introduced above is used here to create a precise and smooth transition between the sky and the hills. As you can see, with a sufficiently large number of points in the modulation curve one can precisely follow the shape of the hills:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_mask.png" width="960" height="433"> 
</figure>

<p>The result of the blending looks like that (click the image to see the initial +1EV version):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_enfuse_blended.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="690" height="328"> 
<figcaption>
Enfuse output blended with the +1EV image (click to see the initial +1EV version) 
</figcaption>
</figure>

<p>The sky already looks much denser and more saturated in this version, and the clouds have gained in volume and tonal variations. However, the -1EV image looks even better, therefore we are going to take the sky and clouds from it. </p>
<p><a name="sky_blend"></a>
To include the -1EV image we are going to follow the same procedure done already in the case of the enfuse output:</p>
<ol>
<li><p>add a new layer of type “Open image” and load the -1EV Hugin output (I’ve named this new layer “sky”)</p>
</li>
<li><p>open the mask of the newly created layer and add a transition that reveals only the upper portion of the image</p>
</li>
</ol>
<p>Fortunately we are not obliged to recreate the mask from scratch. PhotoFlow includes a feature called <strong>layer cloning</strong>, which makes it possible to <strong>dynamically</strong> copy the content of one layer into another one. Dynamically in the sense that the pixel data gets copied <em>on the fly</em>, such that the destination always reflects the most recent state of the source layer.</p>
<p>After activating the mask of the “sky” layer, add a new layer inside it and choose the “clone layer” tool (see screenshot below).</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_clone_layer.png" width="640" height="487"> 
<figcaption>
Cloning a layer from one mask to another
</figcaption>
</figure>

<p>In the tool configuration dialog that will pop up, one has to choose the desired source layer among those proposed in the list under the label “Layer name”. The generic naming scheme of the layers in the list is “[root group name]/root layer name/OMap/[mask group name]/[mask layer name]”, where the items inside square brackets are optional. </p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_sky_mask_clone_layer.png" width="470" height="398"> 
<figcaption>
Choice of the clone source layer 
</figcaption>
</figure>

<p>In this specific case, I want to apply a smoother transition curve to the same base gradient already used in the mask of the “enfuse” layer. For that we need to choose “enfuse/OMap/gradient modulation (blended)” in order to clone the output of the “gradient modulation” group <strong>after the <em>grain merge</em> blend</strong>, and then add a new <strong>curves</strong> tool above the cloned layer:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_sky_mask.jpg" width="960" height="413"> 
<figcaption>The final transition mask between the hills and the sky
</figcaption>
</figure>

<p>The result of all the efforts done up to now is shown below; it can be compared with the initial starting point by clicking on the image itself:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_sky_blended.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="690" height="322"> 
<figcaption>
Edited image after blending the upper portion of the -1EV version through a layer mask. Click to see the initial +1EV image.
</figcaption>
</figure>

<h2 id="contrast-and-saturation">Contrast and saturation<a href="#contrast-and-saturation" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>We are not quite done yet, as the image is still a bit too dark and flat; however, this version will “tolerate” a contrast and luminance boost much better than a single exposure would. In this case I’ve added a <strong>curves</strong> adjustment at the top of the layer stack, and I’ve drawn an S-shaped RGB tone curve as shown below:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_tone_curve_edit.png" width="468" height="672"> 
</figure>

<p>The effect of this tone curve is to increase the overall brightness of the image (the middle point is moved to the left) and to compress the shadows and highlights without modifying the black and white points (i.e. the extremes of the curve). This curve definitely gives “pop” to the image (click to see the version before the tone adjustment):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_contrast.jpg" data-swap-src="pano_sky_blended.jpg" alt="Final result" width="960" height="457"> 
<figcaption>
Result of the S-shaped tonal adjustment (click the image to see the version before the adjustment).
</figcaption>
</figure>

<p>However, this comes at the expense of an overall increase in the color saturation, which is a typical side effect of RGB curves.
While this saturation boost looks quite nice in the hills, the effect is rather disastrous in the sky.
The blue has turned electric, far from what a nice, saturated blue sky should look like!</p>
<p>However, there is a simple fix to this problem: change the blend mode of the <strong>curves</strong> layer from <strong>Normal</strong> to <strong>Luminosity</strong>. 
The tone curve now only modifies the luminosity of the image, while preserving the original colors as much as possible.
The difference between normal and luminosity blending is shown below (click to see the <strong>Normal</strong> blending).
As one can see, the <strong>Luminosity</strong> blend tends to produce a duller image, therefore we will need to fix the overall saturation in the next step.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_contrast_lumi.jpg" data-swap-src="pano_contrast.jpg" alt="Luminosity blend" width="960" height="457"> 
<figcaption>
S-shaped tonal adjustment with <strong>Luminosity</strong> blend mode (click the image to see the version with <strong>Normal</strong> blend mode).
</figcaption>
</figure>
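<p>A rough idea of what the <strong>Luminosity</strong> blend does, sketched in Python with the standard-library <code>colorsys</code> module. This is an HSL approximation with hypothetical pixel values; PhotoFlow’s exact color model may differ:</p>

```python
import colorsys

# HSL-style "Luminosity" blend: keep the base layer's hue and saturation,
# take only the lightness from the (tone-curved) top layer.
def luminosity_blend(base_rgb, top_rgb):
    h, _, s = colorsys.rgb_to_hls(*base_rgb)      # hue/saturation from base
    _, l_top, _ = colorsys.rgb_to_hls(*top_rgb)   # lightness from top
    return colorsys.hls_to_rgb(h, l_top, s)

sky       = (0.25, 0.45, 0.85)  # hypothetical original sky blue
curve_out = (0.35, 0.58, 0.95)  # after the S-curve: brighter but oversaturated
print(luminosity_blend(sky, curve_out))  # brighter, hue and saturation kept
```

<p>Because the extra saturation introduced by the RGB curve is discarded, the result is the duller but color-faithful image described above.</p>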

<p>To adjust the overall saturation of the image, let’s now add a <strong>Hue/Saturation</strong> layer above the tone curve and set the saturation value to <strong>+50</strong>.
The result is shown below (click to see the <strong>Luminosity</strong> blend output).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation.jpg" data-swap-src="pano_contrast_lumi.jpg" alt="Saturation boost" width="960" height="457"> 
<figcaption>
Saturation set to <strong>+50</strong> (click the image to see the <strong>Luminosity</strong> blend output).
</figcaption>
</figure>

<p>This definitely looks better on the hills, however the sky is again “too blue”.
The solution is to decrease the saturation of the top part through an opacity mask.
In this case I have followed the same steps as for the mask of the <a href="#sky_blend">sky blend</a>, but I’ve changed the transition curve to the one shown here:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation_mask.jpg" alt="Saturation mask" width="960" height="488">
</figure>

<p>In the bottom part the mask is perfectly white, and therefore the full <strong>+50</strong> saturation boost is applied. At the top the mask is instead only about 30% gray, and therefore the saturation is increased by only about <strong>+15</strong>. This gives a better overall color balance to the whole image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation_masked.jpg" data-swap-src="pano_contrast_lumi.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption>Saturation set to <strong>+50</strong> through a transition mask (click the image to see the <strong>Luminosity</strong> blend output).
</figcaption>
</figure>
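<p>The arithmetic here is simply the full adjustment scaled by the mask’s gray level; a trivial Python check with the numbers quoted above:</p>

```python
# The boost applied through a gray mask is the full adjustment scaled
# by the mask's opacity at each pixel (values taken from the text above).
full_boost = 50        # Hue/Saturation setting
mask_opacity = 0.30    # ~30% gray over the sky
print(full_boost * mask_opacity)  # effective boost of about +15 in the sky
```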




<h3 id="lab-blending">Lab blending<a href="#lab-blending" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The image already looks quite good, but I would still like to add some more tonal variations in the hills.
This could be done with lots of different techniques, but in this case I will use one that is very simple and straightforward, and that does not require any complex curve or mask since it uses the image data itself.
The basic idea is to take the <strong>a</strong> and/or <strong>b</strong> channels of the <a href="https://en.wikipedia.org/wiki/Lab_color_space"><strong>Lab</strong></a> colorspace, and combine them with the image itself in <strong>Overlay</strong> blend mode.
This will introduce <strong>tonal</strong> variations depending on the <strong>color</strong> of the pixels (since the <strong>a</strong> and <strong>b</strong> channels only encode the color information).
Here I will assume you are quite familiar with the Lab colorspace.
Otherwise, <a href="https://en.wikipedia.org/wiki/Lab_color_space">here</a> is the link to the Wikipedia page that should give you enough information to follow the rest of the tutorial.</p>
<p>Looking at the image, one can already guess that most of the areas in the hills have a yellow component, and will therefore be positive in the <strong>b</strong> channel, while the sky and clouds are neutral or strongly blue, and therefore have <strong>b</strong> values that are negative or close to zero. The grass is obviously green and therefore <strong>negative</strong> in the <strong>a</strong> channel, while the vineyards are brownish and therefore most likely have positive <strong>a</strong> values. In PhotoFlow the <strong>a</strong> and <strong>b</strong> values are re-mapped to a range between 0 and 100%, so that for example <strong>a=0</strong> corresponds to 50%. You will see that this is very convenient for channel blending.</p>
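<p>To make the re-mapping concrete, here is a small numpy sketch; the ±128 channel range used below is an assumption for illustration (PhotoFlow's internal representation may differ):</p>

```python
import numpy as np

def remap_ab(ch, ab_range=128.0):
    """Re-map a Lab a/b channel to 0..1 so that 0 lands exactly at 50%.

    Negative (green/blue) values fall below 50%, positive (magenta/yellow)
    values above it.
    """
    return np.clip((ch + ab_range) / (2.0 * ab_range), 0.0, 1.0)

# Fully negative, neutral, and fully positive sample values:
print(remap_ab(np.array([-128.0, 0.0, 128.0])))  # maps to 0%, 50%, 100%
```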
<p>My goal is to lighten the green and the yellow tones, to create better contrast around the vineyards and add some “volume” to the grass and trees. Let’s first of all inspect the <strong>a</strong> channel: for that, we need to add a group layer on top of everything (I’ve called it “ab overlay”) and then add a <strong>clone</strong> layer inside this group. The source of the clone layer is set to the <strong>a</strong> channel of the “background” layer, as shown in this screenshot:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_a_channel_clone.png" alt="a channel clone" width="470" height="263"> 
<figcaption>
Cloning of the Lab “a” channel of the background layer
</figcaption>
</figure>

<p>A copy of the <strong>a</strong> channel is shown below, with the contrast enhanced to better see the tonal variations (click to see the original versions):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_a_contrast.jpg" data-swap-src="pano_a_channel.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption>
The Lab <strong>a</strong> channel (boosted contrast)
</figcaption>
</figure>

<p>As we have already seen, in the <strong>a</strong> channel the grass is negative and therefore looks dark in the image above. If we want to lighten the grass we therefore need to invert it, to obtain this:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_a_invert_contrast.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption> The inverted Lab <strong>a</strong> channel (boosted contrast)
</figcaption> </figure>

<p>Let’s now consider the <strong>b</strong> channel: as surprising as it might seem, the grass is actually more yellow than green, or at least the <strong>b</strong> channel values in the grass are higher than the inverted <strong>a</strong> values. In addition, the trees at the top of the hill stick nicely out of the clouds, much more than in the <strong>a</strong> channel. All in all, a combination of the two Lab channels seems best for what we want to achieve.</p>
<p>With one exception: the blue sky is very dark in the <strong>b</strong> channel, while the goal is to leave the sky almost unchanged. The solution is to blend the <strong>b</strong> channel into the <strong>a</strong> channel in <strong>Lighten</strong> mode, so that only the <strong>b</strong> pixels that are lighter than the corresponding <strong>a</strong> ones end up in the blended image. The result is shown below (click on the image to see the <strong>b</strong> channel).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_b_lighten_contrast.jpg" data-swap-src="pano_b_contrast.jpg" alt="b channel lighten blend" width="960" height="457"> 
<figcaption>
<strong>b</strong> channel blended in <strong>Lighten</strong> mode (boosted contrast, click the image to see the <strong>b</strong> channel itself).
</figcaption>
</figure>
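<p>The two channel operations described above (inverting <strong>a</strong>, then blending <strong>b</strong> over it in <strong>Lighten</strong> mode) reduce to a couple of lines of arithmetic. A sketch with made-up sample values, assuming the channels are already re-mapped to 0..1:</p>

```python
import numpy as np

# Hypothetical remapped channel values for three regions
# (grass, vineyard, sky) — not taken from the actual image:
a = np.array([0.3, 0.6, 0.5])   # grass (negative a) is dark
b = np.array([0.8, 0.4, 0.2])   # yellow areas bright, blue sky dark

a_inv = 1.0 - a                  # invert so the green tones become light
blend = np.maximum(a_inv, b)     # Lighten: keep the brighter of the two
print(blend)                     # grass stays bright, sky stays moderate
```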

<p>And these are the blended <strong>a</strong> and <strong>b</strong> channels with the original contrast:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_b_lighten.jpg" alt="b channel lighten blend" width="960" height="457"> 
<figcaption>
The final <strong>a</strong> and <strong>b</strong> mask, without contrast correction
</figcaption>
</figure>

<p>The last act is to change the blending mode of the “ab overlay” group to <strong>Overlay</strong>: the grass and trees get some nice “pop”, while the sky remains basically unchanged:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay.jpg" data-swap-src="pano_saturation_masked.jpg" alt="ab overlay" width="960" height="457"> 
<figcaption> Lab channels overlay (click to see the image after the saturation adjustment).
</figcaption> </figure>
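<p>For reference, the standard <strong>Overlay</strong> formula (PhotoFlow's implementation may differ in detail) darkens the base where the blend layer is below 50% and lightens it above, which is why the bright blended channels give the grass and trees their “pop” while the near-neutral sky barely moves:</p>

```python
import numpy as np

def overlay(base, blend):
    """Standard Overlay blend: multiply-like below 50%, screen-like above."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

# A 50%-gray blend leaves the base untouched; a brighter one pushes it up:
print(overlay(np.array([0.4, 0.4]), np.array([0.5, 0.8])))
```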

<p>I’m now almost satisfied with the result, except for one thing: the Lab overlay makes the yellow area on the left of the image far too bright. The solution is a gradient mask (horizontal this time) associated with the “ab overlay” group, to exclude the left part of the image as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay_mask.jpg" alt="overlay blend mask" width="960" height="491">
</figure>

<p>The final, masked image is shown here, to be compared with the initial starting point:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay_masked.jpg" data-swap-src="pano_+1EV.jpg" alt="final result" width="960" height="457"> 
<figcaption> The image after the masked Lab overlay blend (click to see the initial +1EV version).
</figcaption> </figure>




<h2 id="the-final-touch">The Final Touch<a href="#the-final-touch" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Throughout the tutorial I have intentionally pushed the editing well beyond what I would personally find acceptable. The idea was to show how far one can go with the techniques I have described; fortunately, non-destructive editing allows us to retrace our steps and reduce the strength of the various effects until the result looks right.</p>
<p>In this specific case, I have lowered the opacity of the <strong>“contrast”</strong> layer to <strong>90%</strong>, that of the <strong>“saturation”</strong> layer to <strong>80%</strong>, and that of the <strong>“ab overlay”</strong> group to <strong>40%</strong>. Then, feeling that the <strong>“b channel”</strong> blend was still brightening the yellow areas too much, I reduced the opacity of the <strong>“b channel”</strong> layer to <strong>70%</strong>.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_adjusted_opacity.jpg" data-swap-src="pano_ab_overlay_masked.jpg" alt="opacity adjustment" width="960" height="457"> 
<figcaption> Opacities adjusted for a “softer” edit (click on the image to see the previous version).
</figcaption> </figure>
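<p>Lowering a layer's opacity is simply a linear mix between the underlying image and the layer's full-strength output, which is why it works as a global “effect strength” dial. A sketch of the idea with illustrative values:</p>

```python
import numpy as np

def apply_opacity(base, layer_out, opacity):
    """Blend a layer's full-strength output with what lies underneath.

    opacity=1.0 gives the full effect, opacity=0.0 none at all.
    """
    return base * (1.0 - opacity) + layer_out * opacity

# A layer that would push a value from 100 to 150, softened to 80% opacity,
# keeps 80% of the change:
print(apply_opacity(np.array([100.0]), np.array([150.0]), 0.8))
```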

<p>Another thing I still did not like in the image was the overall color balance: the grass in the foreground looked a bit too <strong>“emerald”</strong> instead of <strong>“yellowish green”</strong>, so I thought the image could benefit from a general warming of the colors. For that I added a curves layer at the top of the editing stack, and brought down the middle of the curve in both the <strong>green</strong> and <strong>blue</strong> channels. The move needs to be quite subtle: I brought the middle point down from <strong>50%</strong> to <strong>47%</strong> in the greens and <strong>45%</strong> in the blues, and then further reduced the opacity of the adjustment to <strong>50%</strong>. Here is the warmed-up version, compared with the image before:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_warmer.jpg" data-swap-src="pano_adjusted_opacity.jpg" alt="opacity adjustment" width="960" height="457"> 
<figcaption> “Warmer” version (click to see the previous version)
</figcaption> </figure>

<p>At this point I was almost satisfied. However, I still found that the green stuff at the bottom-right of the image attracted my attention too much and distracted the eye. Therefore I darkened the bottom of the image with a slightly curved gradient applied in <strong>“soft light”</strong> blend mode. The gradient was created with the same technique used for blending the various exposures. The transition curve is shown below: in this case, the top part was set to <strong>50% gray</strong> (remember that we blend the gradient in <strong>“soft light”</strong> mode) and the bottom part was moved a bit below 50% to obtain a slight darkening effect:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_vignetting.png" alt="vignetting gradient" width="960" height="415"> 
<figcaption>
Gradient used for darkening the bottom of the image.
</figcaption>
</figure>
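<p>The reason 50% gray is the neutral point is visible in the soft-light formula itself. Here is the Pegtop variant, one common soft-light definition (PhotoFlow's exact formula may differ): at blend = 0.5 the base passes through unchanged, and values slightly below 0.5 darken it gently.</p>

```python
import numpy as np

def soft_light(base, blend):
    """Pegtop soft-light blend: (1 - 2b) * base^2 + 2b * base.

    With blend = 0.5 this collapses to just `base`, so 50% gray
    is neutral; lower blend values pull the result down smoothly.
    """
    return (1.0 - 2.0 * blend) * base ** 2 + 2.0 * blend * base

base = np.array([0.5, 0.5])
grad = np.array([0.5, 0.45])   # 50% gray at the top, a bit darker below
print(soft_light(base, grad))  # top unchanged, bottom slightly darkened
```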

<p><strong>It’s done!</strong> If you managed to follow me till the end, you are now rewarded with the final image in all its glory, which you can again compare with the initial starting point.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_final2.jpg" data-swap-src="pano_+1EV.jpg" alt="final result" width="960" height="457"> 
<figcaption> 
The final image (click to see the initial +1EV version).
</figcaption>
</figure>

<p>It has been quite a long journey to arrive here… and I hope not to have lost too many followers along the way!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Basic Landscape Exposure Blending with GIMP and G'MIC]]></title>
            <link>https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/</link>
            <guid isPermaLink="true">https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/</guid>
            <pubDate>Tue, 09 Jun 2015 15:34:49 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/basic landscape exposure blend lede.jpg" /><br/>
                <h1>Basic Landscape Exposure Blending with GIMP and G'MIC</h1> 
                <h2>Exploring exposure blending entirely in GIMP</h2>  
                <p>Photographer <a href="http://lightsweep.co.uk/">Ian Hex</a> had previously explored the topic of exposure blending with us by <a href="https://pixls.us/articles/luminosity-masking-in-darktable/">using luminosity masks in darktable</a>.
For his first <em>video</em> tutorial he’s revisiting the subject entirely in <a href="http://www.gimp.org">GIMP</a> and <a href="http://gmic.eu">G’MIC</a>.</p>
<!-- more -->
<div class="big-vid">
<div class="fluid-vid">
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/OmwnHoIP2vE?rel=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Have a look and let him know what you think in the forum.
He’s promised more if he gets a good response from people - so let’s give him some encouragement!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Interesting Usertest and Incoming]]></title>
            <link>https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/</guid>
            <pubDate>Sat, 06 Jun 2015 01:00:37 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/pano_heading.jpg" /><br/>
                <h1>Interesting Usertest and Incoming</h1> 
                <h2>A view of someone using the site and contributing</h2>  
                <p>I ran across a neat website the other day for getting actual user feedback when viewing your website: <a href="http://www.usertesting.com/">UserTesting</a>.
They have a free option called <a href="http://peek.usertesting.com/">peek</a> that records a short (~5 min.) screencast of a user visiting the site and narrating their impressions.</p>
<figure>
<img src="https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/peeklogo.png" alt="Peek Logo" >
</figure>

<p>You can imagine this to be quite interesting to someone building a site.</p>
<!-- more -->
<p>It appears the service asks its testers to answer three specific questions (I am assuming this is for the free service mainly):</p>
<ul>
<li>What is your first impression of this web page? What is this page for?</li>
<li>What is the first thing you would like to do on this page?
Please go ahead and try to do that now.
Please describe your experience.</li>
<li>What stood out to you on this website?
What, if anything, frustrated you about this site?
Please summarize your thoughts regarding this website.</li>
</ul>
<p>Here’s the actual video they sent me (can also be found <a href="http://peek.usertesting.com/result/40917409038587">on their website</a>):</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/p3CBdw6E9bc?rel=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>I don’t have much to say about the testing.
It was very insightful and helpful to hear someone’s view coming to the site fresh.
I’m glad that my focus on simplicity is appreciated!</p>
<p>It was interesting that the navigation drawer wasn’t used, or found, until the very end of the session.
It was also interesting to hear the tester’s thoughts about scrolling down the main page (is it so rare these days for content to be longer than a single screen - above the fold?).</p>
<h2 id="exposure-blended-panorama-coming-soon"><a href="#exposure-blended-panorama-coming-soon" class="header-link-alt">Exposure Blended Panorama Coming Soon</a></h2>
<p>The creator of new processing project <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, Andrea Ferrero, is being kind enough to take a break from coding to write a new tutorial for us: <em>“Exposure Blended Panoramas with Hugin and Photoflow”</em>!</p>
<p>I’ve been collaborating with him on getting things in order to publish and this looks like it’s going to be a fun tutorial!</p>
<h2 id="submitting"><a href="#submitting" class="header-link-alt">Submitting</a></h2>
<p>We’ve been talking back and forth trying to find a good workflow for contributors to be able to provide submissions as easily as possible.
At the moment I translate any submissions into <a href="http://daringfireball.net/projects/markdown/syntax">Markdown</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/HTML5">HTML</a> as needed from whatever source the author decides to throw at me.  This is less than ideal (but at least it’s nice and easy for authors - which is more important to me than having to port them manually).</p>
<h3 id="github-submissions"><a href="#github-submissions" class="header-link-alt">Github Submissions</a></h3>
<p>For those comfortable with <a href="https://git-scm.com/">Git</a> and <a href="https://github.com">Github</a> I have created a neat option to submit posts.
You can fork my <a href="https://github.com/patdavid/PIXLSUS">PIXLS.US repository</a> from here:</p>
<p><a href="https://github.com/patdavid/PIXLSUS">https://github.com/patdavid/PIXLSUS</a></p>
<p>Just follow the instructions on that page, and issue a pull request when you’re done.
Simple! :)
You may want to communicate with me to let me know the status of the submission, in case you’re still working on it, or it’s ready to be published.</p>
<h3 id="any-old-files"><a href="#any-old-files" class="header-link-alt">Any Old Files</a></h3>
<p>Of course, if you want to submit some content, please don’t feel you have to use Github if you’re not comfortable with it.
Feel free to write it any way that works best for you (as I said, my native build files are usually simple Markdown).
You can also reach out to me and let me know what you may be thinking ahead of time, as I might be able to help out.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A New (Old) Tutorial]]></title>
            <link>https://pixls.us/blog/2015/05/a-new-old-tutorial/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/a-new-old-tutorial/</guid>
            <pubDate>Wed, 27 May 2015 18:32:07 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/05/a-new-old-tutorial/Mairi Deux 3.jpg" /><br/>
                <h1>A New (Old) Tutorial</h1> 
                <h2>Revisiting an Open Source Portrait (Mairi)</h2>  
                <p>A little while back I had attempted to document a shoot with my friend and model, Mairi.
In particular I wanted to capture a start-to-finish workflow for processing a portrait using free software.
There are often many tutorials for individual portions of a retouching process but rarely do they get seen in the context of a full workflow.</p>
<p>The results became a <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html" title="An Open Source Portrait (Equipment)">two</a>-<a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html" title="An Open Source Portrait (Postprocessing)">part</a> post on my blog.
For posterity (as well as for those who may have missed it the first time around) I am republishing the second part of the tutorial <a href="https://pixls.us/articles/an-open-source-portrait-mairi/"><em>Postprocessing</em></a> here.</p>
<!-- more -->
<p>Though the post was originally published in 2013 the process it describes is still quite current (and mostly still my same personal workflow).
This tutorial covers the retouching in post while the <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html" title="An Open Source Portrait (Equipment)">original article</a> about setting up and conducting the shoot is still over on my personal blog.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" alt="Mairi Portrait Final"/>
<figcaption>
The finished result from the tutorial.<br>by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>).
</figcaption>
</figure>

<p>The tutorial may read a little long but the process is relatively quick once it’s been done a few times.
Hopefully it proves to be helpful to others as a workflow to use or tweak for their own process!</p>
<h2 id="coming-soon"><a href="#coming-soon" class="header-link-alt">Coming Soon</a></h2>
<p>I am still working on getting some sample shots to demonstrate the previously mentioned <a href="https://discuss.pixls.us/t/noise-free-shadows-dual-exposure/204">noise free shadows</a> idea using dual exposures.
I just need to find some sample shots that will be instructive while still at least being something nice to look at…</p>
<p>Also, another guest post is coming down the pipes from the creator of <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, Andrea Ferrero!
He’ll be talking about creating blended panorama images using <a href="http://hugin.sourceforge.net/">Hugin</a> and PhotoFlow.
Judging by the results on his sample image, this will be a fun tutorial to look out for!</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/05/a-new-old-tutorial/pano-sample.jpg">
</figure>



  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[An Open Source Portrait (Mairi)]]></title>
            <link>https://pixls.us/articles/an-open-source-portrait-mairi/</link>
            <guid isPermaLink="true">https://pixls.us/articles/an-open-source-portrait-mairi/</guid>
            <pubDate>Mon, 18 May 2015 17:04:49 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi.jpg" /><br/>
                <h1>An Open Source Portrait (Mairi)</h1> 
                <h2>Processing a portrait session</h2>  
                <p>This is an article I had written long ago (<a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html">originally published</a> in 2013).
The material is still quite relevant and the workflow hasn’t really changed, so I am republishing it here for posterity and those that may have missed it the first time around.</p>
<p><a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html">The previous post</a> for this article went over the shoot that led to this image.</p>
<ul>
<li><a href="#picking-your-image">Picking Your Image</a></li>
<li><a href="#raw-processing">RAW Processing</a><ul>
<li><a href="#adjust-exposure">Adjust Exposure</a><ul>
<li><a href="#exposure-compensation">Exposure Compensation</a></li>
<li><a href="#black-point">Black Point</a></li>
</ul>
</li>
<li><a href="#white-balance">White Balance</a></li>
<li><a href="#noise-reduction-amp-sharpening">Noise Reduction</a></li>
<li><a href="#in-summary">In Summary</a></li>
</ul>
</li>
<li><a href="#gimp-retouching">GIMP Retouching</a><ul>
<li><a href="#touchup-flyaway-hairs">Touchup Hair</a></li>
<li><a href="#fixing-the-background-amp-cropping">Fixing the Background/Cropping</a></li>
<li><a href="#skin-retouching-with-wavelet-decompose">Skin Retouching &amp; Wavelet Decompose</a></li>
<li><a href="#contour-painting-highlights">Contour Painting Highlights</a></li>
<li><a href="#color-curves">Color Curves</a></li>
<li><a href="#sharpening">Sharpening</a></li>
</ul>
</li>
<li><a href="#finally-at-the-end">The End</a></li>
</ul>
<p>If you’d like to follow along with the image of Mairi, you can download the files from the links below.</p>
<p class="aside" style="font-size: 1rem;">
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVNUk1Y01HQUNPckk">Download the .ORF RAW file [Google Drive]</a><br><a href="Mairi-RAW-Final.jpg">Download the full resolution .JPG output from RawTherapee.</a><br><a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVMl9lZFJWb1Rxa3c">Download the Full Resolution .XCF file [.7zip - 265MB]</a><br>If you want to use the .XCF file just to see what I did, I recommend the ½ resolution file, as it’s smaller: 
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVaXA4bkNJdDhGRkU">Download the ½ Resolution .XCF file [.7zip - 60MB]</a><br><small><em>These files are being made available under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution, Non-Commercial, Share Alike</a> license (<a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">CC-BY-SA-NC</a>).</em></small>
</p>


<p>To whet your appetite, here is the final result of all of the postprocessing done in this tutorial (click to compare it to no retouching):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Mairi-RAW-Final.jpg" alt="Mairi Final Result" width="598" height="800" />
<figcaption>
The final result I’m aiming for.<br>Click to compare to original.
</figcaption>
</figure>

<hr>
<h2 id="picking-your-image">Picking Your Image<a href="#picking-your-image" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This is a hard thing to quantify, as each of us is driven by our own vision and style.
In my case, I wanted something a little more somber looking with a focus on her eyes (<em>they are the window to the soul,</em> right?).
There’s just something I like about big, bright eyes in a portrait, particularly in women.</p>
<p>I also personally liked the grey sweater against the grey background as well.
I felt that it put more focus on the colors of her skin, hair, and eyes.
So that pretty much narrowed me down to this contact sheet:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/contact-grey.jpg" alt="Mairi contact sheet" width="960" height="902">
<figcaption>
    Narrowing it down to this set.
</figcaption>
</figure>

<p>Looking over the shots, I decided I liked the images with the hood up, but her hair down and flowing around her.
This puts me in the top two rows, with only a few left to decide upon.
At this point I narrowed it down to one that I liked best - grey sweater, hood up but not pulled back against her head, hair flowing out of it, and big eyes.</p>
<p>This is pretty common, I’d imagine.
You can grab several frames, but in the end hopefully just the right amount of small details will come together and you’ll find something that you really like.
In my case it was this one:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/P2160427.jpg" alt="Mairi Raw" width="600" height="800">
<figcaption>
    I finally decided on this shot based on the color, hair, eyes, and slight smile.
</figcaption>
</figure>

<p><strong>Now hold on a minute</strong>. The image above is the JPG straight out of the camera.
As you can see, I’ve underexposed this one a little bit, and the colors are not anywhere near where I’d like them to be.
If you’re following along <em>don’t download this version of the image</em>.
I’ll have a much better starting JPG after we run it through some RAW development first!</p>
<p>If you’re impatient, <a href="#raw-summary">jump to that section</a> and get the image there.</p>
<h2 id="raw-processing">Raw Processing<a href="#raw-processing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are a few RAW conversion options out there in the land of F/OSS.
Here’s a small list of popular ones to peruse:</p>
<ul>
<li><a href="http://www.rawtherapee.com">RawTherapee</a></li>
<li><a href="http://www.darktable.org/">darktable</a></li>
<li><a href="http://ufraw.sourceforge.net/">UFRaw</a></li>
<li><a href="http://photivo.org/">Photivo</a></li>
<li><a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a></li>
</ul>
<p>One of the reasons I love using F/OSS is the availability (usually) of the software across my OS’s.
In my case I went with RawTherapee a while back and liked it, so I’ve stuck with it so far (even though I had to build my own OSX versions).</p>
<p>So, my workflow includes RawTherapee at this point.
You should be able to follow along in other converters, but I’m going to focus on RT because that’s what I’m using.
If you shoot only in JPG (seriously, use RAW if you can), you can skip this section and head directly down to <a href="#GIMP">GIMP Retouching</a>.</p>
<h3 id="load-it-up">Load it up<a href="#load-it-up" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>After starting up RawTherapee, you’ll be in the <strong>File Browser</strong> interface, waiting for you to select a folder of images.
You can navigate to your folder of images through the file browser on the left side of the window.
It may take a bit while RawTherapee generates thumbnails of all the images in your directory.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-file-browser.png" alt="RawTherapee File Browser" width="600" height="369">
<figcaption>
RawTherapee file browser view.<br>(Navigate folders on the left pane)
</figcaption>
</figure>

<p>Once you’ve located your image, double clicking it in the main window will open it up for editing.
If you’re using a default install/options on RT, chances are a “Default” profile will be applied to your image that has <strong>Auto Levels</strong> turned on.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Default.jpg" alt="Mairi RawTherapee Default" width="598" height="800">
<figcaption>
The base image with “Default” profile applied (auto levels).
</figcaption>
</figure>

<p>Chances are that <strong>Auto Levels</strong> will not look very good.
My <strong>Default</strong> processing profile usually does not look so hot (no noise reduction, auto levels, etc.).
That’s ok, because we are going to fix this right up in the next few sections.</p>
<h3 id="adjust-exposure">Adjust Exposure<a href="#adjust-exposure" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I like to control the exposure and processing on my RAW images.
Auto Levels may work for some, but once you get used to a few basic corrections and how to use them, it’s relatively quick and painless to dial in something you like.</p>
<p class="aside">Again - much of what I’m going to describe is subjective, and will depend on personal taste and vision.
This just happens to be how I work; adjust as needed for your own workflow. :)</p>

<p>To give me a good starting point I will usually remove all adjustments to the image, and reset everything back to zero.
This is easy to do as my <strong>Default</strong> profile has nothing done to it other than <strong>Auto Levels</strong>.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Exposure-Default.png" alt="RawTherapee Default Exposure Values" width="284" height="845">
<figcaption>
Auto Levels values on the Exposure panel.
</figcaption>
</figure>

<p>A quick and easy way to reset the <strong>Exposure</strong> values on the <strong>Exposure</strong> panel is to use the <b style="color:#20a020;">Neutral button</b> on that panel (I’ve outlined it in <b style="color:#20A020;">green</b> above).
You can also hit the small “undo” arrows next to each slider to set that slider back to zero as well.</p>
<p>At this point the image exposure is set to a baseline we can begin working on.
For reference, here is my image after zeroing out all of the exposure sliders and the saturation:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Zeroed.jpg" alt="Mairi RawTherapee Zero Values" width="598" height="800">
<figcaption>
With all exposure adjustments (and saturation) set to zero.
</figcaption>
</figure>




<h4 id="exposure-compensation">Exposure Compensation<a href="#exposure-compensation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first thing I’ll begin adjusting is the <em>Exposure Compensation</em> for the image.
You want to be paying careful attention to the histogram for the image to know what your adjustments to <em>Exposure Compensation</em> are doing, and to keep from blowing things out.</p>
<p>I personally begin pushing the <em>Exposure Compensation</em> until one of the RGB channels just begins butting up against the right side of the histogram.
Here is what the histogram looks like for the neutral exposure:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Histogram-Neutral.png" alt="RawTherapee Neutral Histogram" width="282" height="155">
<figcaption>
Neutral exposure histogram.
</figcaption>
</figure>

<p>After adjusting <em>Exposure Compensation</em> I get the Red channel snug up against the right side of the histogram:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Histogram-Exp-Comp.png" alt="RawTherapee Histogram Exposure Compensation" width="282" height="155">
<figcaption>
<em>Exposure Compensation</em> until the values just touch the right side.
</figcaption>
</figure>

<p>If you go a little too far, you’ll notice one of the channels will spike against the side, and if you really go too far, you’ll get a small colored box in the upper right corner indicating that channel has gone out of range (is blown out).</p>
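<p>In linear terms, each EV of exposure compensation doubles the pixel values, and clipping is exactly what happens when that push drives a channel past the top of the range. A rough numpy sketch of the idea (RawTherapee's actual pipeline is considerably more sophisticated):</p>

```python
import numpy as np

def exposure_compensation(img, ev):
    """Apply EV-style exposure compensation to linear values in 0..1.

    Each stop (1 EV) doubles the values; anything pushed past 1.0
    would show up as a blown-out channel in the histogram.
    """
    return np.clip(img * 2.0 ** ev, 0.0, 1.0)

# A slightly under-exposed value pushed up by the +2.40 EV used here:
print(exposure_compensation(np.array([0.15]), 2.40))
```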
<p>So here is what my image looks like now with only the <em>Exposure Compensation</em> adjusted to a better range:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Exp-Comp.jpg" alt="Mairi RawTherapee Exposure Compensation" width="598" height="800">
<figcaption>
<em>Exposure Compensation</em> adjusted to 2.40.
</figcaption>
</figure>

<p>The <strong>Exposure</strong> panel in RT now looks like this (only the <em>Exposure Compensation</em> has been adjusted):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Exposure-Exp-Comp.png" alt="RawTherapee Exposure Compensation Panel" width="286" height="630">
<figcaption>
<em>Exposure Compensation</em> set to 2.40 for this image.
</figcaption>
</figure>
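<p>Under the hood, <em>Exposure Compensation</em> is expressed in stops (EV), where each stop doubles the linear values. As a rough sketch of what a +2.40 EV push does (the pixel values here are made up for illustration, and RT’s actual pipeline is more involved than a plain multiply):</p>

```python
import numpy as np

# Hypothetical linear RGB values in [0, 1], for illustration only
img = np.array([[[0.05, 0.10, 0.02]]])

ev = 2.40                                  # Exposure Compensation in stops
out = np.clip(img * 2.0 ** ev, 0.0, 1.0)   # each stop doubles the values
```

<p>Anything that scales past the top of the range gets clipped, which is exactly what a histogram spike against the right edge is warning you about.</p>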

<p>If the highlights in your image begin to get slightly out of range, you may need to make adjustments to the <strong>Highlight recovery amount/threshold</strong>, but in my case the image was slightly under-exposed, so I kept it at zero.</p>
<p>There is also a great visual method for seeing where the exposure of each channel sits, and for avoiding highlight/shadow clipping.
Along the top of your main image window, to the right, there are some icons that look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Clipping-Channels.png" alt="RawTherapee Clipping Channels" width="326" height="40">
<figcaption>
<i style="color:rgb(0,255,255); background-color: gray;">Channel previews</i>, <i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> &amp; <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators
</figcaption>
</figure>

<p>The <i style="color:rgb(0,255,255); background-color: gray;">Channel previews</i> let you individually toggle each of the R, G, B, and Luminosity previews for the image.
You can use these with the <i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> and <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators to see which channels are clipping and where.</p>
<p><i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> and <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators will visually show you on your image where the values go beyond the threshold for each.
For highlights, it’s any values that are greater than <strong>253</strong>, and for shadows it’s any values that are lower than 8.</p>
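<p>In code terms, the indicators are just threshold masks over the 8-bit preview values. A small sketch using the thresholds quoted above (the pixel values are made up):</p>

```python
import numpy as np

# Hypothetical 8-bit preview values
img8 = np.array([[255, 120,   5],
                 [254,   8, 253]], dtype=np.uint8)

highlight_clip = img8 > 253   # painted by the highlight indicator
shadow_clip    = img8 < 8     # painted by the shadow indicator
```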
<p>To illustrate, here is what my image looks like in RT with the <em>Exposure Compensation</em> set to 2.40 from above:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Clipping.jpg" alt="Mairi RawTherapee Clipping Channels" width="598" height="800">
<figcaption>
With Highlight &amp; Shadow clipping turned on.
</figcaption>
</figure>

<p>I don’t mind the shadows clipping in the dark regions of the image, though I can make adjustments to the <strong>Black Point</strong> (below) to modify that.
The highlight clipping on her face is of more concern to me.
I certainly don’t want that!</p>
<p>At this point I can dial in my <em>Exposure Compensation</em> for the highlights by backing it down slightly.
As I ease off it I should be seeing the dark patch for <em>Highlight Clipping</em> growing smaller.
I’ll stop when it’s either all gone, or just about all gone.</p>
<p>I wasn’t too far off in my initial adjustment, and only had to back the <em>Exposure Compensation</em> off to <strong>2.30</strong> to remove most of the highlight clipping.</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
</tbody>
</table>
<hr>
<h4 id="black-point">Black Point<a href="#black-point" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>At this point I will usually zoom a bit into a shadow area of my image that might include dark/black tones.
The blacks feel a little flat to me, and I’m going to increase the black level just a bit to darken them up.</p>
<p>I want to be zoomed in a bit so I can determine the point at which the black point starts crushing details that I still want visible.
You want your blacks to be dark, but you also want to keep detail in the shadows where possible (exactly where this point lies is really, really subjective, but I’ll err on the conservative side since I’m still going to process colors a little bit in GIMP later).</p>
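<p>Conceptually, raising the black point clips everything at or below the new black down to pure black and stretches the remaining range back out. I don’t know RT’s exact internal mapping for the <strong>Black</strong> slider, so treat this as an illustration of the general operation rather than RT’s math (the 16-bit scale is an assumption):</p>

```python
import numpy as np

img = np.linspace(0.0, 1.0, 5)   # toy tonal ramp, black to white
black = 150 / 65535              # a "Black" value of 150, assuming a 16-bit scale

# Everything at or below `black` is crushed to 0; the rest is re-stretched
out = np.clip((img - black) / (1.0 - black), 0.0, 1.0)
```

<p>Push <code>black</code> too high and shadow detail that used to live just above the old black point is crushed to zero, which is exactly what we’re watching for while zoomed in.</p>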
<p>Starting with a <strong>Black</strong> point of zero:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-Detail-Black-0.jpg" alt="Mairi Detail Black 0" width="600" height="600">
</figure>

<p>I will increase the <strong>Black</strong> point while keeping an eye on those shadow details, increasing it until I like how the blacks look and I haven’t destroyed detail in the dark tones.
I finally settled on a <strong>Black</strong> value of 150 as seen here:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-Detail-Black-150.jpg" data-swap-src='Mairi-Detail-Black-0.jpg' alt="Mairi Detail Black 150" width="600" height="600">
<figcaption>
Black value set at 150 (still keeping sweater details in the shadows).<br>Click to compare to previous.
</figcaption>
</figure>

<p>Watch out for <em>Shadow Recovery</em> when you first start adjusting the <em>Black Point</em>.
Its default might be a value other than zero (mine is at 50), and the <strong>Neutral</strong> button won’t set it back to zero (resetting it will just return it to its default value of 50).
You may want to push it manually to zero, and if you feel you want to bump the shadow details a bit, <em>then</em> start pushing it up.</p>
<p>I know things look noisy at the moment, but we’ll deal with that in the next section (there is no noise reduction being applied at this point).</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
</tbody>
</table>
<hr>
<h4 id="brightness-contrast-and-saturation">Brightness, Contrast, and Saturation<a href="#brightness-contrast-and-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>For this image I didn’t feel the need to modify these values, but this is purely subjective (<em>again</em>).
If you do modify these values, keep an eye on the histogram and what it’s doing to keep things from getting out of range/whack again.</p>
<h3 id="white-balance">White Balance<a href="#white-balance" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hopefully you had the right <strong>White Balance</strong> set during your shoot in camera.
If not, it’s ok - we’re shooting in RAW so we can just set it as needed now.</p>
<p>I happen to have had my in-camera WB set to <em>Flash</em>, so the embedded WB settings in my RAW file metadata are pretty close.
In my shot, however, you’ll notice that there is a bit of a white window visible in the left of the frame.
I happen to know that the window is quite white, and should be rendered as such in my image.</p>
<p>As a side note, what I <em>really</em> should have done was to get myself a good reference for balancing the white balance, and to shoot it as part of my setup.
Something like the <a href="http://www.amazon.com/gp/product/B000JLO31C/ref=as_li_ss_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=B000JLO31C&amp;linkCode=as2&amp;tag=httpblogpatda-20">X-Rite MSCCC ColorChecker Classic</a>, or even a <a href="http://www.amazon.com/gp/product/B000ARHJPW/ref=as_li_ss_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=B000ARHJPW&amp;linkCode=as2&amp;tag=httpblogpatda-20">WhiBal G7 Certified Neutral White Balance Card</a>.
These are a little pricey, but any good 18% grey card will do, really.
I just happen to know that my window borders are a pure white, so I’m cheating a bit here…</p>
<p>So here is what our image looks like at the moment:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-Camera.jpg" alt="Mairi White Balance Camera" width="598" height="800">
<figcaption>
Image so far, with <strong>White Balance</strong> set to <em>Camera</em> (Default).
</figcaption>
</figure>

<p>The <strong>White Balance</strong> for your image can be adjusted from the <strong>Color</strong> panel:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Color-Default.png" alt="RawTherapee Default Color" width="288" height="422">
<figcaption>
Default Color panel showing <em>Camera</em> white balance.
</figcaption>
</figure>

<p>You can try out some of the presets in the <em>Method</em> drop-down - the typical settings are there for Sunny, Shade, Flash, etc…
In my case I am going to use the <strong>Spot WB</strong> option.
Clicking that button will let me pick a section of my image that should be color neutral.</p>
<p>In my case, I know that the window border should be white (and color neutral), so I will pick from that area on my image.
Doing so will shift my WB, and will produce a result that looks like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-window.jpg" data-swap-src="Mairi-WB-Camera.jpg" alt="Mairi Camera White Balance" width="598" height="800">
<figcaption>
WB based on white window border.<br>Click to compare <em>Camera</em> based
</figcaption>
</figure>
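<p>The idea behind <strong>Spot WB</strong> is simple: scale the channels so the picked patch averages out neutral. Here’s a simplified numpy sketch of the concept (RT actually expresses the result as Temperature/Tint values, so this is the idea, not its implementation; the patch values are invented):</p>

```python
import numpy as np

def spot_white_balance(img, y0, y1, x0, x1):
    """Scale channels so the selected patch averages neutral grey."""
    patch_mean = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    gains = patch_mean[1] / patch_mean   # normalize R and B to the green mean
    return img * gains

# A toy "white window border" with a slight warm cast
img = np.full((4, 4, 3), [0.6, 0.5, 0.4])
balanced = spot_white_balance(img, 0, 4, 0, 4)
```

<p>This is also why the choice of reference matters so much: whatever you click on is <em>defined</em> as neutral, so a slightly blue wall pulls the whole image warm.</p>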

<p>I also happen to know that the grey colored walls in the background are close to neutral, but with the slightest hint of blue in them.
If I used the grey wall instead of the white window, I would introduce the slightest warm cast to the image.
I tried it (choosing a section of the grey wall on the right side of the background), and actually prefer the slightly warmer color, personally:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-Wall.jpg" data-swap-src="Mairi-WB-window.jpg" alt="Mairi White Balance Wall" width="598" height="800">
<figcaption>
WB based on the grey wall background (right side of image).<br/>
Click to compare to window WB.
</figcaption>
</figure>

<p>The difference is ever so slight, but it is there.
In my original final image, I went with the balance pulled from the wall, so I will continue with that version here.
If you’re curious, here is what my WB values look like:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Color-SpotWB-Window.png" alt="RawTherapee Spot White Balance Window" width="288" height="420">
<figcaption>
After setting <strong>Spot WB</strong> to the window.
</figcaption>
</figure>

<p>Seriously, though, don’t rely on luck.
Get a grey/color card to correct color casts if you can…</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
<tr>
<td>WB Temperature</td>
<td>7300</td>
</tr>
<tr>
<td>WB Tint</td>
<td>0.545</td>
</tr>
</tbody>
</table>
<hr>
<h3 id="noise-reduction-sharpening">Noise Reduction &amp; Sharpening<a href="#noise-reduction-sharpening" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Chances are the RAW image is going to look pretty noisy zoomed in a bit.
This isn’t unusual since we are dealing with RAW data.
There are two noise reduction (NR) options in RT, and we are going to want to use both.</p>
<h4 id="impulse-noise-reduction">Impulse Noise Reduction<a href="#impulse-noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This NR will remove pixels that have a high impulse deviation from surrounding pixels.
Basically, this is the “salt and pepper” noise you may notice in your images, where individual pixels are oddly brighter/darker than the surrounding pixels.</p>
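<p>A toy version of the idea: compare each pixel to the median of its 3×3 neighborhood, and replace it only when it deviates strongly. RT’s actual Impulse NR is more sophisticated than this, and the threshold here is an arbitrary illustration:</p>

```python
import numpy as np

def impulse_nr(gray, threshold=0.2):
    """Replace 'salt and pepper' outliers with the local 3x3 median."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode='edge')
    # nine shifted views -> per-pixel median of each 3x3 neighborhood
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    med = np.median(stack, axis=0)
    out = gray.copy()
    spikes = np.abs(gray - med) > threshold   # only strong deviations
    out[spikes] = med[spikes]
    return out

flat = np.full((5, 5), 0.5)
flat[2, 2] = 1.0                 # one lone "salt" pixel
clean = impulse_nr(flat)
```

<p>Because only outliers get replaced, normal detail (which moves <em>with</em> its neighborhood) is left alone, which is why this kind of NR is relatively gentle at sane settings.</p>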
<p>If I zoom into a portion of my image (not far from where I was looking at shadows for setting a black point), I’ll see this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-Impulse-Crop-None.png" alt="Noise Reduction Crop None" height="600" width="600">
<figcaption>
Closeup crop with no <strong>Impulse Noise Reduction</strong>.
</figcaption>
</figure>

<p>I’ll normally play a bit with the <strong>Impulse NR</strong> to alleviate the specks while still retaining details.
As with most NR methods - going a bit too far will obliterate some details with the noise.
The trick is to find a happy medium between the two.
In my case, I settled on a value of 55 (the default is 50):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-Impulse-Crop-55.png" data-swap-src="NR-Impulse-Crop-None.png" alt="Impulse Noise Reduction 55" width="600" height="600">
<figcaption>
<strong>Impulse NR</strong> set to a value of 55.<br>Click to compare to no NR.
</figcaption>
</figure>

<p>I could have gone a bit further (and have in other images from this series) and pushed it up to the 60-70 range, but it’s a matter of taste and weighing the tradeoffs.</p>
<h4 id="luminance-chrominance-noise-reduction">Luminance/Chrominance Noise Reduction<a href="#luminance-chrominance-noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>These two NR methods will suppress noise in the luminance channel (brightness), and the blue/red chrominances.</p>
<p>I will use a light hand with these NR values.
The defaults are 5 for each, and it should make a noticeable difference just with the default values.
If you push the <strong>Luminance</strong> NR too far, you’ll smear fine details right off your image.
If you push the <strong>Chrominance</strong> NR too far, you’ll suck the life out of the colors in your image.</p>
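<p>To get a feel for why chrominance NR is more forgiving than luminance NR: in a luma/chroma representation you can smooth the color planes fairly hard without touching perceived detail, which mostly lives in luma. A rough sketch of the separation (RT actually works on wavelet coefficients with separate L and C strengths, so this is only the concept, using a simple BT.601-style split):</p>

```python
import numpy as np

def chroma_smooth(rgb, radius=2):
    """Smooth only the chroma planes of a float RGB image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b     # luma: left untouched
    cb, cr = b - y, r - y                     # simple chroma differences

    def box_blur(plane):
        h, w = plane.shape
        p = np.pad(plane, radius, mode='edge')
        k = 2 * radius + 1
        views = [p[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)]
        return np.mean(views, axis=0)

    cb, cr = box_blur(cb), box_blur(cr)
    b2, r2 = cb + y, cr + y                   # rebuild RGB from smoothed chroma
    g2 = (y - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.stack([r2, g2, b2], axis=-1)

flat_grey = np.full((8, 8, 3), 0.5)           # no chroma noise to begin with
smoothed = chroma_smooth(flat_grey)
```

<p>Push the blur (or RT’s chrominance strength) too far, though, and real color variation, like the subtle tones in an iris, gets averaged away along with the noise.</p>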
<p>Not surprisingly, it’s another trade off.
In my case, I pushed the L/C NR just a tiny bit past the default to 6 and 6 respectively.</p>
<p>You’ll be able to see the effect of chrominance NR by looking at the flat colored grey wall in the background.
Just don’t forget to check other areas of your image with the settings you choose.
For me it was a close look at her iris, where pushing the chrominance NR too far lost some of the beautiful colors in her eye.</p>
<p>Compare the same crop from above with and without Luminance/Chrominance noise reduction applied:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-LC-Crop-6-6.png" data-swap-src="NR-Impulse-Crop-55.png" alt="Noise Reduction Luminance Chrominance 6 6" width="600" height="600">
<figcaption>
With Luminance &amp; Chrominance NR set to 6.<br>Click to compare without. 
</figcaption>
</figure>

<p>If you’ve read my previous article on B&amp;W conversion, you’ll know that I don’t mind a little noise/grain in my images at all, so this level doesn’t bother me in the least.
I could chase the noise even further if I really wanted to, but always remember that doing so is going to be at the expense of detail/color in your final result.
As with most things in life, moderation is key!</p>
<h4 id="sharpening">Sharpening<a href="#sharpening" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If you are going to sharpen your image a bit, this is probably the best time to do so.
The problem is that <em>usually</em> sharpening is the last bit of post-processing you should do to your image, due to its destructive nature.
Plus, lately I’ve grown accustomed to sharpening by using an extra wavelet scale during my skin retouching in GIMP (you’ll see below in a bit).</p>
<p>So, I’ll avoid sharpening at this stage.
If I was going to use it here at all, it would be just very, very light.
Also, if you do any sharpening at this stage, try to make sure that it happens <em>after</em> any noise reduction in the pipeline.</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
<tr>
<td>WB Temperature</td>
<td>7300</td>
</tr>
<tr>
<td>WB Tint</td>
<td>0.545</td>
</tr>
<tr>
<td>Impulse NR</td>
<td>55</td>
</tr>
<tr>
<td>Luminance NR</td>
<td>6</td>
</tr>
<tr>
<td>Chrominance NR</td>
<td>6</td>
</tr>
</tbody>
</table>
<hr>
<h3 id="lens-correction">Lens Correction<a href="#lens-correction" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This section actually deserves its own post detailing methods for correcting lens barrel distortion with Hugin.
RawTherapee does have an “Automatic Distortion Correction” that can correct distortion in your images.</p>
<p>In my case, I was shooting at the long end of the lens at 50mm, and the distortion is minimal.
So I didn’t bother with correcting this (it might have been needed at a shorter focal length, and being closer to the subject, though).</p>
<h3 id="in-summary">In Summary<a href="#in-summary" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>That about wraps up the RAW “development” I’m going to do on this image.
I try to keep things minimal where possible, though I could have gone further and made color-tone and Lab adjustments here as well.
In fact, with the exception of Wavelet Decompose for skin retouching, and some other masking/painting operations, I could do most of what I want for this portrait entirely in RawTherapee.</p>
<p>I know that this reads really long, but the truth is that once I am accustomed to a workflow, this takes less than 5 minutes from start to finish (faster if I’ve already fiddled with other images from the same set).
All I really modified here was <strong>Exposure</strong>, <strong>White Balance</strong>, and <strong>Noise Reduction</strong>.</p>
<p>Finally, as I hinted at earlier, here is the final version after doing all of these RAW edits, as we get ready to bring the image into GIMP for further processing:</p>
<figure>
<a href="Mairi-RAW-Final.jpg" target="_blank">
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RAW-Final.jpg" alt="Mairi Final Version from RawTherapee" width="598" height="800">
</a>
<figcaption>
<strong>This</strong> is the one to download if you want to follow along in GIMP below.<br>Just click the image to open in a new window, then save it from there.
</figcaption>
</figure>




<h2 id="gimp-retouching">GIMP Retouching<a href="#gimp-retouching" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Well, here we are.
Finally.
It’s the home stretch now, so don’t give up just yet!</p>
<p>If you didn’t follow along with the RAW processing earlier, you can download the full resolution JPG output from RawTherapee by clicking here:</p>
<p class="aside">
<a href="Mairi-RAW-Final.jpg">Download the full resolution JPG output from RawTherapee</a>
</p>

<p>Armed with our final results from RawTherapee, we’re now ready to do a little retouching to the image.</p>
<p>The overall workflow, and the order in which I approach each step, mostly depends on my mood.
Most times, I enjoy doing skin retouching, so I’ll often jump right in with <strong>Wavelet Decompose</strong> and play around.
Really, though, I should start shifting Wavelet Decompose to a later part of my workflow, and fix other things like removing objects from the background and fixing flyaway hairs first.</p>
<p>This way, I can directly re-use wavelet scales for a slight wavelet sharpening while I have them.</p>
<p>Looking at this image so far, I can spot a few broad things that I want to correct, and I’m going to address them in this order:</p>
<ol>
<li>Touchup flyaway hairs</li>
<li>Crop &amp; remove distracting background elements</li>
<li>Skin retouching with Wavelet Decompose</li>
<li>Contour paint highlights</li>
<li>Apply some color curves</li>
</ol>
<hr>
<h3 id="touchup-flyaway-hairs">Touchup Flyaway Hairs<a href="#touchup-flyaway-hairs" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you can have the model bring a hairbrush with them to a shoot - DO IT.
Seriously.
Your eyes and carpal tunnel will thank me later.</p>
<p>Even with a brush or hairstylist/make-up artist the occasional hair will decide to rebel and do its own thing.
This will require us to get down to the details and fix those hairs up.</p>
<p>Luckily for me, Mairi’s hair mostly cooperated with us during the shoot (and where it didn’t, I kind of liked it).
To illustrate this step, though, I’m going to clean up some of the stray hairs on the left side of the image (the right side of her face).</p>
<p>Fortunately, the background is also a consistent color/texture.
This means cloning out these hairs shouldn’t be too much of a problem, but there are still some things you should keep in mind while doing this.</p>
<p>Here is the area that I’d like to clean up a little bit:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Left-Original.jpg" alt="Mairi Hair Left Original" width="600" height="1256">
<figcaption>
Sometimes you just have to work one strand of hair at a time…
</figcaption>
</figure>

<figure style="float:right; margin: 0 0 1rem 1rem;">
<img border="0" src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Clone-Tool.png" alt="GIMP Clone Tool Hair" width="165" height="587">
</figure>

<p>I will usually use a hard-edged brush because a soft-edge will smear details on its edges, and can often be spotted pretty easily by the eye.
This works because the background is relatively constant in grain and color.</p>
<p>I’ll sample from an area near the hair I want to remove, and set the brush to be <strong>“Aligned”</strong>.
I also try to keep the brush size as small as I can and still remove the hair.</p>
<p>The thing to keep in mind is how the hair is actually <em>flowing</em>, and to follow that.
I will often follow outlying strands of hair back to where they start from the head, and begin cloning them out from there.</p>
<p>I also try not to get too ambitious (some stray hairs are sometimes fine).
Removing too many at once can lead to unrealistic results, so I try to be conservative, and to constantly zoom out and check my work visually.</p>
<p>Try not to leave hairs prematurely cut off in space if possible; it tends to look a bit distracting.
If you want to remove a hair that crosses over another strand that you may want to keep, make sure to adjust the source of the clone brush so you can do it without leaving a gap in the leftover strand.</p>
<p>Here is a quick 5 minute touchup of some of the stray hairs (click to compare to the original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Left-Clean.jpg" alt="GIMP Hair Clean Clone" data-swap-src="GIMP-Hair-Left-Original.jpg" width="600" height="1256">
<figcaption>
Click to compare.
</figcaption>
</figure>

<p>Occasionally, you’ll need to fix hairs that are crossing over other hair (sort of like a virtual “brushing” of the hair).
In these cases, you really have to pay careful attention to <em>how the hair flows</em> and to use that as a guide when choosing a sample point with either the clone or heal brush.</p>
<p>If this sounds like a lot of work - it is.
Thankfully, once you’ve become accustomed to doing it, and doing it well, you’ll find yourself picking up a lot of speed.
It’s one of those things that’s worth learning to do right, and to let practice speed it up for you.</p>
<p>I actually like the cascading hair around her face opening up to a pretty color, so that’s about as far as I’m going to go with stray hairs on this image.</p>
<h3 id="fixing-the-background-cropping">Fixing the Background &amp; Cropping<a href="#fixing-the-background-cropping" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>With the limited space I had to shoot this portrait, it’s no surprise that I had gotten some undesirable background elements, like the window edges.</p>
<p>There’s a couple of ways I could go about fixing these - I could fix the background in place, or I can crop out the elements I don’t want.</p>
<p>In my final version shown in the previous post, I wanted to crop tighter, so it worked out well to remove the window on the left.
To illustrate how we can remove the window, I’m going to leave the aspect ratio as it is, and walk through removing the distracting background elements.</p>
<h4 id="removing-background-elements">Removing Background Elements<a href="#removing-background-elements" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Because most of the background is already a (relatively) solid color, this isn’t too hard.
There’s just a couple of simple things to keep in mind.</p>
<p>The way I’m going to approach this is to make a duplicate of my current layer, and to move the duplicate into place such that the background will cover up parts of the window I want to remove.
Then I’ll mask the duplicate layer to hide the window.</p>
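<p>Under the hood this is just masked compositing: where the mask is white the duplicate shows, and where it’s black the base layer shows through. A tiny sketch of the math GIMP is doing (the arrays are stand-ins, not real layer data):</p>

```python
import numpy as np

base = np.zeros((4, 4))                 # stand-in for the base layer
duplicate = np.ones((4, 4))             # stand-in for the shifted duplicate

mask = np.zeros((4, 4))                 # black layer mask = fully transparent
mask[:, :2] = 1.0                       # "paint white" over the window edge

# the duplicate shows only where the mask is white
result = mask * duplicate + (1.0 - mask) * base
```

<p>Painting with a soft-edged brush just means the mask ramps smoothly between 0 and 1, which is what hides the seam.</p>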
<p>I start by choosing an area of the background that’s similar in color/tone:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Start.jpg" alt="GIMP Mairi Background Fix Start" width="598" height="800">
<figcaption>
Thankfully the background is relatively consistent.
</figcaption>
</figure>

<p>I’ll then move the duplicate layer so that the green area covers up the window to the left:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-End.jpg" alt="GIMP Mairi Background Fix End" width="598" height="800">
<figcaption>
Position the duplicate layer so the green area now covers up the window.
</figcaption>
</figure>

<p>Here is what this looks like in GIMP, with the duplicate layer set to 90% opacity over the base layer (so you can see where the window edge is):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted.jpg" alt="GIMP Mairi Background Shift" width="600" height="720">
<figcaption>
Moving the duplicate layer over to cover the window.
</figcaption>
</figure>

<p>Now I’ll add a black (fully transparent) layer mask over the duplicate layer, and I’ll paint white on the mask to cover up the window edge (with a soft-edged brush).
This gives me results that look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted-Masked.jpg" alt="Mairi GIMP background shift masked" width="600" height="720">
<figcaption>
After applying a transparent mask, and painting white over the window edge.
</figcaption>
</figure>

<p>The problem is that the background area from the duplicate is a bit darker than the base layer background, and the seam is visible where they are masked.
To fix this, I can just adjust the lightness of the duplicate layer until I get a good match.</p>
<p>I used Hue-Saturation to adjust the lightness (because I wasn’t sure if I would need to adjust the hue slightly as well - turns out I didn’t).
I found that increasing the <em>Lightness</em> value to 3 got me reasonably close:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted-Masked-Lightened.jpg" alt="GIMP Mairi Background lightened" width="600" height="720">
<figcaption>
After increasing duplicate layer <em>Lightness</em> to 3.
</figcaption>
</figure>

<p>To further fix the lower part of the window, I just repeated all the steps above with another duplicate of the base layer, just shifted to cover the lower part of the window.
I had to mask along her sweater.
Here is the result after repeating the above steps:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Masked.jpg" alt="GIMP Mairi background masked finished" width="598" height="800">
<figcaption>
After repeating above steps for the lower left corner.
</figcaption>
</figure>

<p>The results are ok, but could be just a little bit better.
Visually, the falloff of light on the background doesn’t match what’s happening on her body, so I added a small gradient to the lower left corner to give it a more natural looking light falloff:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Masked-Gradient.jpg" alt="GIMP Mairi background masked gradient" width="598" height="800">
<figcaption>
Adding a gradient to the lower left background helps it look more natural.
</figcaption>
</figure>

<p>Fixing the slight window/shadow on the right is easily done with a clone/heal tool combination.
The final result of quickly cleaning up the background is this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Final.jpg" alt="GIMP Mairi background final fix" width="598" height="800">
<figcaption>
Finished cleaning up the background.
</figcaption>
</figure>

<p>I could have spent a little more time on this, but I’m happy with the results for the purpose of this post.
If your cloning efforts leave obvious transitions between tones, the Heal tool can be helpful for alleviating this (especially when used with large brush radii, just be prepared to wait a bit).</p>
<p>With the background squared away, we can move on to one of my favorite things to play with, skin retouching!</p>
<h3 id="skin-retouching-with-wavelet-decompose">Skin Retouching with Wavelet Decompose<a href="#skin-retouching-with-wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I had <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">previously written about using Wavelet Decompose</a> as a means for touching up skin.
As I said in that post, and will repeat here:</p>
<blockquote>
<p>The best way to utilize this tool is <strong>with a light touch</strong>.</p>
</blockquote>
<p>Re-read that sentence and keep it in mind as we move forward.</p>
<p>Don’t make mannequins.</p>
<p>Ok, with a layer that contains all of the changes we’ve made so far rolled up, we can now decompose the image to wavelet scales.
In my case I almost always use the default of 5 scales unless there’s a good reason to increase/decrease that number.</p>
<p>For anyone new to this method, the basic idea of Wavelet Decompose is that it will break down your images to multiple layers, each containing a specific set of details based on their relative size, and a residual layer with color/tonal information.
For instance, Wavelet scale 1 will contain only the finest details in your image, while each successive scale will contain larger and larger details.</p>
<p>The benefit to us is that these details are isolated on each layer, meaning we can modify details on one layer without affecting other details from other layers (or adjust the colors/tones on the residual layer without modifying the details).</p>
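<p>For the curious, the scheme can be sketched as repeated blurring, where each scale stores the detail lost at that blur step. This is the difference-of-Gaussians idea the technique is built on, not GIMP’s exact plugin implementation; <strong>Grain Merge</strong> is what effectively sums the layers back together:</p>

```python
import numpy as np

def blur(img, sigma):
    """Minimal separable Gaussian blur with edge padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda row: np.convolve(np.pad(row, radius, mode='edge'),
                                    kernel, mode='valid'), axis, img)
    return img

def wavelet_decompose(img, n_scales=5):
    """Split an image into detail scales plus a coarse residual."""
    scales, residual = [], img.astype(float)
    for i in range(n_scales):
        blurred = blur(residual, sigma=2.0 ** i)
        scales.append(residual - blurred)    # details of this size only
        residual = blurred
    return scales, residual                  # sum(scales) + residual == img

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
scales, residual = wavelet_decompose(img, n_scales=3)
```

<p>Because the sum reconstructs the original exactly, anything you do to a single scale (blur it, paint on it) changes only details of that size in the recombined image.</p>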
<p>Here is an example of the resulting layers we get when running Wavelet Decompose:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Example.jpg" alt="GIMP Wavelet Separation Example" width="600" height="400">
<figcaption>
Wavelet scales from 1 (finest) to the Residual
</figcaption>
</figure>

<p>After running Wavelet Decompose, we’ll find ourselves with 6 new layers: Residual + 5 Wavelet scales.
I am going to start on Wavelet scale 5.</p>
<p>If you hold down <strong>Shift</strong> and click on a layer visibility icon, you’ll isolate just that single layer as visible.
Do this now to <em>Wavelet scale 5</em>, and let’s have a look at what we’re dealing with.</p>
<p>I usually work on skin retouching in sections.
Usually I’ll consider the forehead, nose, cheeks to smile lines, chin, and upper lip all as separate sections (trying to follow normal facial contours).
Something like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Breakdown.jpg" alt="GIMP Wavelet Decompose Region Breakdown" width="587" height="800">
<figcaption>
Rough breakdown of each area I’ll work on separately
</figcaption>
</figure>

<p>I’m going to start with the forehead.
I’ll work with detail scales first, and follow up with touchups on the residual scale if needed to even out color tones.
Here is what Wavelet scale 5 looks like isolated:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5.jpg" alt="GIMP Wavelet Scale 5 forehead" width="600" height="303">
<figcaption>
Forehead, Wavelet scale 5
</figcaption>
</figure>

<p>It may not seem obvious, especially if you don’t use wavelet scales much, but there are a lot of large-scale tonal imperfections here.
Look at the same image, but with the levels normalized:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-normalized.jpg" alt="GIMP Wavelet Scale 5 forehead" width="600" height="303">
<figcaption>
These are the tones we want to smooth out
</figcaption>
</figure>

<p>Normalizing the wavelet scale lets you see the tones that we want to smooth out.</p>
<p>My normal workflow is to have all of the wavelet scales and residual visible (each of the wavelet scales has a layer blending mode of <strong>Grain Merge</strong>).
This way I’m visually seeing the overall image results.
Then I will select each wavelet scale as I work on it.</p>
<p>I’ll normally use the <strong>Free Select Tool</strong> to select the forehead.
I’ll usually have the <strong>Feather edges</strong> option turned on, with a large radius (roughly 1% of the smallest image dimension, so about 35 pixels here).
Remember to have your layer selected that you want to work on.</p>
<p>With my area selected, I’ll often run a <strong>Gaussian Blur</strong> (IIR) over the skin to smooth out those imperfections.
The radius you use is dependent on how strong you want to smooth the tones out.
Too much, and you’ll obliterate the details on that scale, so start small.</p>
<p>Here is my selection I’ll work with (remember - my active layer is Wavelet scale 5):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-orig-selection.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Forehead with selection (feather turned on to 35px)
</figcaption>
</figure>

<p>Now I’ll experiment with different <strong>Gaussian Blur</strong> radii to get a feel for how it will affect my entire image.
I settled on a high-ish value of 35px radius, which gave me this as a result (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px.jpg" data-swap-src="Wavelet-Forehead-orig.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Forehead, Wavelet scale 5 after <strong>Gaussian Blur (IIR)</strong> 35px radius.<br>Click to compare.
</figcaption>
</figure>

<p>Just with this small change to a single wavelet scale, we can already see a remarkable improvement to the underlying skin tones, and we haven’t hurt any of the fine details in the skin!</p>
<p>In some cases, this may be all that is required for a particular area of skin.
I could push things just a tiny bit further if I wanted by working globally again on a finer wavelet scale, but I’ve learned the hard way to back off early if possible.</p>
<p>Instead, I’ll look at specific areas of the skin that I may want to touch up.
For instance, the two frown lines in the center of the forehead.
I may not want to remove them completely, but I may want to downplay how visible they are.
Wavelet scales are perfect for this.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px-frown.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Small frown lines I want to reduce
</figcaption>
</figure>

<p>Because each of the Wavelet scales is set to a layer blend mode of <strong>Grain Merge</strong>, any area that is a completely neutral grey will not affect the final image.
This means that you can paint with medium grey RGB(128,128,128) to completely remove a detail from a layer.</p>
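The mid-grey trick falls straight out of the Grain Merge formula, result = base + layer - 128 (clamped to 0..255): a layer value of exactly 128 contributes nothing. A quick sketch of the per-pixel arithmetic:

```python
def grain_merge(base, layer):
    """GIMP's Grain Merge blend: base + layer - 128, clamped to 0..255."""
    return max(0, min(255, base + layer - 128))


# A mid-grey pixel (128) on a detail layer is a no-op:
assert grain_merge(200, 128) == 200

# Layer values above 128 brighten the base, values below darken it:
assert grain_merge(200, 150) == 222
assert grain_merge(200, 100) == 172
```

So painting RGB(128,128,128) over a detail on one scale erases exactly that band of detail, nothing else.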
<p>You can also use the Blur/Sharpen brush to selectively blur an area of the image.
(I’ve found that the Blur tool works best at smaller wavelet scales - it doesn’t appear to make a big difference on larger scales.)</p>
<p>So, if we look at Wavelet scale 5 where the frown lines are, we’ll see there’s not much there - it was already smoothed earlier.
If we look at Wavelet scale 4 though, we’ll see them prominently.</p>
<p>I’ll use the <strong>Heal Tool</strong> to sample from the same wavelet scale in a different location, and paint over just the frown lines.
I’ll work on Wavelet scale 4 first.
If needed, I can also move down to Wavelet scale 3 and repeat the same procedure there.</p>
<p>A couple of quick passes just over the frown lines, and the results look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px-frown-fixed.jpg" data-swap-src="Wavelet-Forehead-5-35px.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Cloning over the frown lines on scales 4 &amp; 3.<br>Click to compare.
</figcaption>
</figure>

<p>I could continue over any other blemishes I may want to correct, but small individual blemishes can usually be fixed quickly with a little spot healing.</p>
<p>Moving on to the nose: overall, the tones on Wavelet scale 5 are similar to the forehead, so a similar amount of blurring will nicely smooth them out.
Here is the nose after a slight blurring (click to see original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Nose-5-35px.jpg" data-swap-src="Wavelet-Nose-Orig.jpg" alt="GIMP mairi wavelet decompose nose" width="275" height="510">
<figcaption>
Nose with 35px Gaussian blur on Wavelet scale 5.<br>Click to compare.
</figcaption>
</figure>

<p>There is a bit of color in the nose that is slightly uneven that I’d like to fix.
This is relatively easy to do with wavelet scales, because I can modify the underlying color tones of the nose without destroying the details on the other scale layers.</p>
<p>In this case, I’ll work on the Wavelet residual layer.</p>
<p>I’ll use a <strong>Heal Tool</strong> with a large, soft brush.
I’ll sample from about the middle of the nose, and clean up the slightly redder skin by healing new tones into that area.
I’ll follow the contours of the nose and the way that the light is hitting it in order to match the underlying tones to what is already there.</p>
<p>After a little work these are the results (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Nose-5-35px-heal.jpg" data-swap-src="Wavelet-Nose-Orig.jpg" alt="GIMP Wavelet Scale selection nose" width="275" height="510">
<figcaption>
Healing on the Wavelet residual scale to even tones.<br>Click to compare.
</figcaption>
</figure>

<p>Next I’ll take a look at the eyes and cheek on the brighter side of her face.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet decompose cheek original" width="473" height="716">
<figcaption>
Overall tones are good here, just some slight retouching required
</figcaption>
</figure>

<p>The tones here are not bad, particularly on scale 5.
After making my selection, I’ve applied a blur at 25px just to smooth things a bit.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-5-25px.jpg" data-swap-src="Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet decompose cheek " width="473" height="716">
<figcaption>
A slight 25px blur to smooth overall tones.<br>Click to compare. 
</figcaption>
</figure>

<p>The dark tones under/around the eyes are a bit different to deal with.
As before, I’ll turn to working on the Wavelet residual layer to brighten up the color tones under the eyes.</p>
<p>I use the <strong>Heal Tool</strong> to sample from a brighter area of skin near the eye.
Then I’ll carefully paint into the dark tones to brighten them up, and to even the colors out with the surrounding skin.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-residual-eyes.jpg" data-swap-src="Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet residual eyes" width="473" height="716">
<figcaption>
Carefully cloning/healing brighter skin tones under the eyes.<br>Click to compare to original.
</figcaption>
</figure>

<p>Wavelets are amazing for this type of adjustment, because I can brighten up or change the skin tones under the eyes without affecting the fine skin details like small wrinkles and pores.
The textural character remains unchanged, but the underlying skin tones can be modified easily.</p>
<p>The same can be done for the slightly red tones on the cheek and at the edge of her jaw (which I did).</p>
<p>I’m purposefully not going to modify the fine wrinkles under the eyes, either.
These small imperfections will often bring great character to a face, and unless they are very distracting or bad, I find it best to leave them be.</p>
<p>A good tip is that even though these small imperfections may seem large when you’re pixel peeping, get into the habit of zooming out to a sane zoom level and evaluate the image then.
Sometimes you’ll find you’ve gone too far, and things begin to creep into mannequin territory.</p>
<p>Don’t make mannequins!</p>
<h4 id="in-summary-again">In Summary Again<a href="#in-summary-again" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This entire post is getting a little long, so I’m going to stop here with the skin retouching breakdown.</p>
<p>Also, that’s honestly about it as far as the process goes: just apply the same steps described above to the remaining areas (right cheek, chin, and upper lip).</p>
<p>To summarize, here are the tools/steps I’ll use with Wavelet Decompose to retouch skin:</p>
<ul>
<li>Area selection with Gaussian blur to even out overall tones at a particular scale</li>
<li>Paint with grey, Clone, Heal on wavelet scales to modify specific details</li>
<li>Clone/Heal on wavelet residual scale to modify underlying skin tones/colors (but leave details intact)</li>
</ul>
<p>Here are the final results after using only Wavelet Decompose (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Face-Final.jpg" data-swap-src="Wavelet-Face-Original.jpg" alt="Mairi GIMP Wavelet face final retouching" width="587" height="800">
<figcaption>
After retouching in Wavelet Scales only.<br>Click to compare to original.
</figcaption>
</figure>




<h3 id="spot-touchups">Spot Touchups<a href="#spot-touchups" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There may be a few things that still need a little spot touchup that I didn’t bother to mess with in Wavelet scales.</p>
<p>In my case, I’ll clone/heal out some small hairs along the jaw line, and touch up some small spots of skin individually.
This is really just a light cleaning, and I usually do this at the pixel level (obnoxiously zoomed in, and small brush sizes).</p>
<p>I also use a method for checking the skin for areas that I may want to touchup, but might not be immediately visible or noticeable.
It uses the fact that the Blue channel of an image can show you just how scary skin can look (seriously, color decompose any image of skin, and look at the blue channel).</p>
<h3 id="contour-painting-highlights">Contour Painting Highlights<a href="#contour-painting-highlights" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One of the downsides of using Wavelet scales for modifying skin is that if you’re blurring on some of the scales, you’ll sometimes decrease the local contrast in your image.
This isn’t so bad, but you may want to bring back some of the contrast in areas you’ve touched up.</p>
<p>What I’m going to do is basically add some transparent layers over my image, and set their layer blend modes to <strong>“Overlay”</strong>.</p>
<p>Then I’ll paint white over contours I want to enhance, and adjust the opacity of the layer to taste.
(This is highly subjective, so I’m going to just show a quick idea of how I might approach it - you can get as nuts with this as you like…).</p>
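For reference, the textbook Overlay formula multiplies dark base tones and screens bright ones, so painting white pushes everything under it brighter, and the layer opacity then scales the effect back toward the base. (GIMP 2.8's Overlay mode was actually implemented like Soft Light, but the idea is the same.) A small per-pixel sketch:

```python
def overlay(base, top):
    """Textbook Overlay blend: dark bases get multiplied, bright bases get screened."""
    if base < 128:
        return round(2 * base * top / 255)
    return round(255 - 2 * (255 - base) * (255 - top) / 255)


def with_opacity(base, blended, opacity):
    """Mix the blended result back toward the base by layer opacity (0.0-1.0)."""
    return round(base * (1 - opacity) + blended * opacity)


# White (255) painted on an Overlay layer screens a bright base all the way up...
full = overlay(180, 255)              # -> 255
# ...which is why the opacity gets dialed way down to keep the highlight subtle:
subtle = with_opacity(180, full, 0.2)  # -> 195
```

The large Gaussian blur applied next simply spreads the painted strokes out, so the brightening follows the contours smoothly instead of showing brush edges.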
<p>Here I’ve added a new transparent layer on top of my image, and set the Layer Blend Mode to <em>Overlay</em>.
Then I painted white onto contours that I want to highlight:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face.jpg" alt="Mairi GIMP Contour dodge burn highlight" width="587" height="800">
<figcaption>
Painting on the <em>Overlay</em> layer along contours to highlight
</figcaption>
</figure>

<p>It looks strange right now, but I’ll add a large radius Gaussian Blur to smooth these tones out.
I used a blur radius of <strong>111 pixels</strong>.
Here is what it looks like after the blur:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face-Blur.jpg" alt="Mairi GIMP Contour dodge burn highlight gaussian blur" width="587" height="800">
<figcaption>
Blurring the <em>Overlay</em> layer with Gaussian Blur (111 pixel radius)
</figcaption>
</figure>

<p>Finally, I’ll adjust the opacity of the <em>Overlay</em> layer to taste.
I’ll usually dial this way, way down so that it’s not so obvious.
Here, I’ve dialed the opacity back to about 20%, which leaves us with this (click to compare):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face-Blur-Opacity-20.jpg" data-swap-src="Wavelet-Face-Final.jpg" alt="Mairi GIMP Contour dodge burn highlight final One" width="587" height="800">
<figcaption>
After setting the <em>Overlay</em> layer to 20% opacity (still a little high for me, but it’s good for illustration).<br>Click to compare.
</figcaption>
</figure>

<p>I will sometimes add a few more of these layers to enhance other parts of the image as well.
I’ll use it (very lightly!!!) to enhance the eyes a bit, and in this case, I used an even larger layer to add some volume and highlights to her hair as well.</p>
<p>Here are the results after adding some eye and hair highlight layers as well (click to compare with no highlights):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Final.jpg" data-swap-src="Contour-Original.jpg" alt="mairi gimp contour dodge burn final" width="598" height="800">
<figcaption>
Face, eyes, and hair contour painting result.<br>Click to compare. 
</figcaption>
</figure>




<h3 id="color-curves">Color Curves<a href="#color-curves" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Finally, I like to apply some color curves that I have around and use often.
I’ve been heavily favoring a Portra emulation curve from <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html">Petteri Sulonen</a> that he calls <em>Portra-esque</em>, especially for skin.
It has a very gentle rolloff in the highlights that renders really pretty colors.</p>
<p>If I feel it’s too much, I can always apply it on a duplicate of my image so far, and adjust opacity to suit.
Here is the same image with only the <em>Portra-esque</em> curve applied:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Curves-Portra.jpg" data-swap-src="Contour-Final.jpg" alt="mairi gimp color tone curve portra" width="598" height="800">
<figcaption>
Image so far, with a <em>Portra-esque</em> color curve applied.<br>Click to compare.
</figcaption>
</figure>

<p>If you’re curious, I had written up a much more in-depth look at color curves for skin here: <a href="http://blog.patdavid.net/2012/07/getting-around-in-gimp-more-color.html">Getting Around in GIMP - More Color Curves (Skin)</a>.
You can actually download the curves for Portra, Velvia, Provia emulation on that page.</p>
<h3 id="final-sharpening">Final Sharpening<a href="#final-sharpening" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Finally.
The last step before saving out our image!</p>
<p>For sharpening, I actually like to use one of the Wavelet scales that I generated earlier.
I’ll just duplicate a low scale, like 2 or 3, and drag it on top of my layer stack to sharpen the details from that scale.</p>
<p>In this case, I liked the details from Wavelet scale 2, so I duplicated that layer, and dragged it on top of my layer stack.
The blend mode is already set to <em>Grain Merge</em>, so I don’t have to do anything else:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Curves-Portra.jpg" alt="mairi gimp sharpen wavelet scale" width="598" height="800">
<figcaption>
Wavelet scale 2 copied to the top of the layer stack for sharpening.<br>Click to compare.
</figcaption>
</figure>
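This works as sharpening because each wavelet scale layer stores its detail offset by mid-grey, so Grain Merge adds that band of detail a second time on top of the already-recombined image. A sketch of the arithmetic for a single pixel (the values are made up for illustration):

```python
def grain_merge(base, layer):
    """Grain Merge blend: base + layer - 128, clamped to 0..255."""
    return max(0, min(255, base + layer - 128))


detail = 10                  # the scale-2 detail at some pixel
scale2_layer = 128 + detail  # wavelet scale layers store detail offset by mid-grey
pixel = 140                  # the fully recombined image at that pixel

# Duplicating scale 2 on top of the stack doubles that one frequency band:
sharpened = grain_merge(pixel, scale2_layer)
assert sharpened == pixel + detail
```

Picking scale 2 or 3 for this is effectively choosing which frequency band to boost, much like choosing the radius of an unsharp mask.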




<h2 id="finally-at-the-end">Finally at the End<a href="#finally-at-the-end" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you’re still with me - you really deserve a medal.
I’m sorry this has run as long as it has, but I wanted to try to be as complete as I could.</p>
<p>So, for a final comparison, here is the image we finished with (click to compare to what we started with before retouching in GIMP):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Mairi-RAW-Final.jpg" alt="mairi gimp final sharpen wavelet" width="598" height="800">
<figcaption>
Our final result.<br>Click to compare.
</figcaption>
</figure>

<p>Not too bad for a little bit of fiddling, I think!  I know that this tutorial reads really, really long, but I promise that once you’ve understood the processes being used, it’s actually very quick in practice.</p>
<p>I hope that this has been helpful to you in some way!  If you happen to use anything from this tutorial please share it.
I’d love to see what others do with these techniques.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Software and Noise]]></title>
            <link>https://pixls.us/blog/2015/05/software-and-noise/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/software-and-noise/</guid>
            <pubDate>Mon, 18 May 2015 16:38:01 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/05/software-and-noise/Unnecessary_Noise.jpg" /><br/>
                <h1>Software and Noise</h1> 
                <h2>Wonderful response from everyone</h2>  
                <p>I want to take a moment to thank everyone for all of the kind words and support over the past week.
A positive response can be a great motivator to help keep the momentum rolling (and everyone really has been super positive)!</p>
<h2 id="software"><a href="#software" class="header-link-alt">Software</a></h2>
<p>The <strong><a href="https://pixls.us/software/">Software page</a></strong> is live with a decent start at a list.</p>
<p>I posted an announcement of the site launch over on <a href="http://www.reddit.com">reddit</a> and one of the comments (from <a href="http://www.reddit.com/r/photography/comments/35b7y4/new_community_for_freeopen_source_photography/cr30jeo">/u/cb900crdr</a>) was that it might be helpful to have a list of links to programs.
I had originally planned on having a page to list the various projects but removed it just before launch (until I could find some time to gather all the links).</p>
<p>This was as good a reason as any to take a shot at putting a page together.
I brought the topic up <a href="https://discuss.pixls.us/t/free-software-list-and-links/193/8">on the forums</a> to get input from everyone as well.
If you see that I’ve missed anything, please consider adding it to the list on the forum.
<!-- more --></p>
<p>I think it may be helpful to add at least a sentence or two description to identify what each project does for those not familiar with them.
For instance, if you didn’t know what Hugin was before, the name by itself is not very helpful (or GIMP, or G’MIC, etc…).
The problem is how to do it without cluttering up the page too much.</p>
<h2 id="noise"><a href="#noise" class="header-link-alt">Noise</a></h2>
<p>I had also mentioned <a href="https://discuss.pixls.us/t/noise-free-shadows-dual-exposure/204">in this post</a> on the forums about a neat method for basically replacing shadow tones in one image with those from second, overexposed image.
The approach is similar in theory to tonemapping an HDR and is originally described by <a href="http://www.guillermoluijk.com/article/nonoise/index_en.htm">Guillermo Luijk</a> (back in 2007).</p>
<p>The process basically exploits the fact that digital sensors have a linear response (a basis for the advice ETTR - <em>“Expose to the Right”</em>).
His suggested workflow is to use a second exposure of the scene but exposed +4EV.
Then to adjust the exposure of the second image down -4EV and then replace the shadow tones in the base image with the adjusted (noise-reduced) one.</p>
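Since the sensor is linear, +4EV is exactly a factor of 16, so dividing the bright frame's linear values by 16 puts its much cleaner shadows on the same scale as the base exposure. A minimal sketch of the idea in Python; the hard `threshold` cutover here is my own simplification of Luijk's approach, which blends the two exposures smoothly:

```python
def blend_shadows(base, bright, threshold=0.25, ev=4):
    """Swap shadow tones in `base` for the matching tones from a +ev frame.

    Both lists hold linear sensor values (0.0-1.0) of the same scene.
    The hard threshold cutover is a simplification; the original
    article blends smoothly between the two exposures.
    """
    scale = 2 ** ev                  # +4EV on a linear sensor is exactly x16
    out = []
    for b, hi in zip(base, bright):
        matched = hi / scale         # bring the bright frame back down -4EV
        out.append(matched if b < threshold else b)
    return out


base = [0.01, 0.04, 0.50, 0.90]    # noisy shadows, clean highlights
bright = [0.16, 0.64, 1.00, 1.00]  # +4EV frame: clean shadows, clipped highlights
merged = blend_shadows(base, bright)
```

The merged shadow values line up numerically with the base exposure (0.16 / 16 == 0.01), but they came from a frame exposed 16x higher, so they carry far less relative noise; the clipped highlights of the bright frame are never used.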
<p>I will write an article soon describing the workflow in a bit more detail.  Stay tuned!</p>
<p><small class="lede-attr">Lede image: 
<a href='https://www.flickr.com/photos/pamhule/4461831240'><em>Unnecessary Noise Prohibited</em> </a> by <a href='https://www.flickr.com/photos/pamhule/'>Jens Schott Knudsen</a> <a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/' target='_blank'>cbn</a>
</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[It's Alive!]]></title>
            <link>https://pixls.us/blog/2015/05/it-s-alive/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/it-s-alive/</guid>
            <pubDate>Thu, 07 May 2015 21:25:16 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/05/it-s-alive/nautilus.jpg" /><br/>
                <h1>It's Alive!</h1> 
                <h2>Time to finally launch...</h2>  
                <p>Well, here we are.
I just checked the first blog post and it was dated August 24<sup>th</sup>, 2014.
I had probably been working on the back end of the site getting things running for the basic blog setup a few weeks prior to that.
It’s <strong>almost</strong> been a full year since I started working on this idea.</p>
<p>So it is with great pleasure that I can finally say…</p>
<h2 id="welcome-to-pixls-us-"><a href="#welcome-to-pixls-us-" class="header-link-alt">Welcome to <a href="https://pixls.us">PIXLS.US</a>!</a></h2>
<p>If you’re just now joining us, let me re-iterate the mission statement for this website.</p>
<blockquote>
<p><strong>PIXLS.US Mission Statement</strong></p>
</blockquote>
<blockquote>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>I started this site because the world of F/OSS photography is fractured across different places.
There’s no good single place for photographers to collaborate around free software workflows, as well as a lack of good tutorials aimed at high-quality processing with free software.</p>
<!-- more -->
<h3 id="tutorials"><a href="#tutorials" class="header-link-alt">Tutorials</a></h3>
<p>I have personally been writing tutorials on my blog for a few years now (holy crap).
I primarily started doing it because while there are many tutorials for photo editing, they almost always stopped short of working towards high-quality results.
The few tutorials that did try to address high quality results were all quite a few years old (and often in need of updating).</p>
<p>With your help, I’m hoping to change that here.</p>
<h3 id="workflows"><a href="#workflows" class="header-link-alt">Workflows</a></h3>
<p>A workflow is something that doesn’t often get described either:
specifically, what a workflow looks like with free software.
For instance, some thoughts off the top of my head:</p>
<ul>
<li>Creating a panorama image from start to finish.</li>
<li>Shooting and editing fashion images.</li>
<li>Taking great portrait images, and how to retouch them.</li>
<li>What to watch out for when shooting macro.</li>
<li>Planning and shooting great astrophotography.</li>
<li>How to approach landscape editing.</li>
<li>Creating a composite dream image.</li>
</ul>
<p>These are just some of the ideas around workflows.
It also doesn’t have to be only software-focused.
There is a wealth of knowledge about practical techniques that we can all share as well.</p>
<h3 id="showcase"><a href="#showcase" class="header-link-alt">Showcase</a></h3>
<p>Quick - name 5 photographers whose work you love, that use free software.
Did you have trouble reaching five?
That’s another of the things that I would like to focus on here: showcasing amazing work from talented photographers that happen to use free software (and in some cases may be willing to share with us).</p>
<p>I even <a href="https://discuss.pixls.us/t/notable-fl-oss-photographers/139">started a thread on the forum</a> to try and note some amazing photographers.  I will try to work through that list and get them to open up and speak with us a bit about their work and process.</p>
<h2 id="by-us-for-us"><a href="#by-us-for-us" class="header-link-alt">By Us, For Us</a></h2>
<p>I am floored by how awesome the community has been.
As I mentioned on my blog, the main reason for me to write was to give something back to the community.
I learned so much for so long from others before me and the least I could do is try to help others as well.</p>
<p>This community will be what <strong>we</strong> make it.
Come help make it something awesome that we can all be proud of.</p>
<p>Go <a href="https://discuss.pixls.us">sign up</a> on the forum and let your voice be heard.</p>
<p>Have an idea for an article?  Let me know (in the <a href="https://discuss.pixls.us">forums</a> or by <a href="mailto:pat@patdavid.net">email</a>)!</p>
<h2 id="make-some-noise-"><a href="#make-some-noise-" class="header-link-alt">Make Some Noise!</a></h2>
<p>Finally, we are just starting out and are a small community at the moment.
If you’re feeling up to it, please consider letting your social circles know that we’re here and what we’re trying to do.
The only way for the community to grow is for people to know it’s here in the first place!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[What's In Your Bag?]]></title>
            <link>https://pixls.us/blog/2015/05/what-s-in-your-bag/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/what-s-in-your-bag/</guid>
            <pubDate>Mon, 04 May 2015 14:47:58 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/MyBag.jpg" /><br/>
                <h1>What's In Your Bag?</h1> 
                <h2>Thoughts on a next article as well</h2>  
                <p>That lede image above is a quick (and dirty) snapshot of my go-to bag for running out the door.
I thought it might be fun to take a diversion and talk about gear a little bit.
Here’s the full image again:</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/MyBag.jpg" alt="Pat David Camera Bag Gear"/>
<figcaption>
My gear + bag.  Not shown, spare battery and memory cards.
</figcaption>
</figure>

<p>I had decided years ago on going with Micro Four Thirds (MFT) as a camera system because I like to travel light, and wanted options to adapt old lenses.
(On a side note, I’m still angry that there is not focus-peaking on the E-M5…)</p>
<p>My camera is the Olympus OM-D E-M5 (usually paired with the 12-50mm weatherproof lens when I’m out and about). 
This is a perfect combination for me, particularly when I’m chasing around a 4-year-old in who-knows-where situations.
A water and dust resistant lens/body is nice to have.</p>
<p>On the far left is a Promaster 5-in-1 reflector (41 inch).
These are usually relatively inexpensive and absolutely indispensable pieces of gear that can be adapted to many different situations.</p>
<p>I was recently reminded of this yet again while on a walk through some gardens…</p>
<figure>
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/with-without-reflector2.jpg" alt="Dot with/without reflector" />
<figcaption>
Both images straight out of the camera, with/without reflector, same settings.
</figcaption>
</figure>

<p>The base of the reflector (without its covering) is a great translucent scrim that is handy to use with flashes if you need to soften things up a bit (and not lug around a softbox).</p>
<figure>
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/dot-eyes-open.jpg" alt="Dot Eyes Open by Pat David" />
<figcaption>
Speedlight shooting into the reflector scrim, ~2 feet away from model, camera left.
</figcaption>
</figure>

<p>Speaking of flashes, you’ll also find my pair of Yongnuo YN-560 manual speedlights.
I’ve been slowly teaching myself <a href="https://www.flickr.com/photos/patdavid/sets/72157626359784129/">lighting with speedlights</a>, so rarely will I <em>not</em> have them with me.
To use them off-camera I also have a pair of Cactus V5 transceivers (one to transmit, one to receive).</p>
<p>Everything (except the reflector) packs nice and neatly into my wife’s old camera bag (a  precursor to the Domke bags) that I ran off with.
(That is, the old camera bag of my wife, <strong>not</strong> the old bag, my wife).</p>
<p>The bag is canvas and I waxed it myself to give it some water resistance.
This basically consisted of me melting some wax and brushing it all over the bag, then using a hairdryer to further melt it into the fibers.
This was a great DIY project that was relatively inexpensive (about $8USD for more wax than you’ll need) and relatively quick to do (just a few hours total).</p>
<h3 id="share-your-gear"><a href="#share-your-gear" class="header-link-alt">Share Your Gear</a></h3>
<p>I’d love to see what others are using out there!  Take a minute, snap a photo of your gear/bag, and share it with us.
Bonus points if you arrange it by <a href="http://en.wikipedia.org/wiki/Knoll_%28verb%29">knolling</a>.</p>
<h2 id="sharpening"><a href="#sharpening" class="header-link-alt">Sharpening</a></h2>
<p>I was recently poked by someone on the <a href="https://mail.gnome.org/archives/gimp-web-list/">GIMP-Web mailing list</a> to update one of the tutorials on <a href="http://www.gimp.org/tutorials">www.gimp.org</a> about sharpening.
I thought about it, then decided it may be better just to write some new material from scratch.</p>
<p>I figured why stop there?  I might as well make it a fun post here taking a look at what methods we have for sharpening, why you may (or may not) want to use them, and where in the processing pipeline it makes sense.
(While still pushing the GIMP specific sharpening thoughts to a separate tutorial there).</p>
<p>If anyone has thoughts around this or just wants to share what they’re doing, please let us know in the comments below.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Back to Writing]]></title>
            <link>https://pixls.us/blog/2015/04/back-to-writing/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/back-to-writing/</guid>
            <pubDate>Wed, 22 Apr 2015 17:00:15 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/04/back-to-writing/Tacky.jpg" /><br/>
                <h1>Back to Writing</h1> 
                <h2>Hiccups and Other Things</h2>  
<p>I took a bit of a break from writing articles to <a href="https://pixls.us/blog/2015/04/a-forum">work on</a> getting <a href="https://discuss.pixls.us">the forums</a> up and running.
We are almost back to a stable enough point that I want to turn my attention back to writing.</p>
<p>I say almost because there are still a few wonky things that I’d like to work out.
There is still a little bit of an issue with the comment embeds from the forum for full-blown <a href="https://pixls.us/articles/">articles</a>.</p>
<h2 id="ssl-and-https"><a href="#ssl-and-https" class="header-link-alt">SSL and https</a></h2>
<p>One of the reasons for the possibly strange behavior for articles in the forums is that darix convinced me to go ahead and get SSL setup for the domains.  So working on it yesterday we got it running for both the <a href="https://pixls.us">main site here</a>, as well as at <a href="https://discuss.pixls.us">the forums</a>.</p>
<p>You should notice an indicator somewhere in your browser (a little green lock?) that your connection to this page is over https right now.
I’ve set all connections to <a href="https://pixls.us">PIXLS.US</a> to use SSL now (same thing with the forums).</p>
<!-- more -->
<p>The only drawback was that we uncovered some strange behavior when importing posts into the forum for embedding.
If you care, the way things work is that:</p>
<ol>
<li>I publish an RSS feed of all of the content on the site (<a href="https://pixls.us/feed.xml">https://pixls.us/feed.xml</a> if you’re curious).</li>
<li>Every hour the forum polls this feed.</li>
<li>If there are new posts, the forum imports them and creates a new topic.
This is what you see under the “PIXLS.US” category on the forum.</li>
<li>Some small code on each post (on the website) references the forum topic entry to embed as comments.</li>
</ol>
<p>There have been a couple of strange things going on with importing those posts, but darix resolved most of them.
The only thing that is still strange is the article objects themselves, which at the moment show up twice in the forum.</p>
<p>I should note that all of this could very well be caused by my writing of the RSS feeds.
I know just enough to be dangerous and annoying to those who know better (this should probably be my epitaph).</p>
<blockquote>
<p><strong>Here Lies Pat David</strong></p>
</blockquote>
<blockquote>
<p>He knew just enough to be dangerous and annoy those who knew better…</p>
</blockquote>
<p>Fitting!</p>
<p>On the good side, thanks to the efforts of those smarter than I, even though we had some import hiccups, things have continued to run smoothly for the most part.
The correct comments were maintained in the correct topic threads, and those were in turn correctly associated with the posts they belonged to (well, <em>blog</em> posts at any rate).</p>
<p>Coming soon(<em>ish</em>) - creating showcase posts!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Skin Retouching with Wavelet Decompose]]></title>
            <link>https://pixls.us/articles/skin-retouching-with-wavelet-decompose/</link>
            <guid isPermaLink="true">https://pixls.us/articles/skin-retouching-with-wavelet-decompose/</guid>
            <pubDate>Mon, 20 Apr 2015 16:47:07 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-after-opt.jpg" /><br/>
                <h1>Skin Retouching with Wavelet Decompose</h1> 
                <h2>A better alternative to smearing textures</h2>  
                <p>Skin retouching is a delicate art.</p>
<p><em>Effective</em> skin retouching can feel like a black art.</p>
<p>There have been various methods detailed in the past for ways to “smooth” skin in <a href="http://www.gimp.org">GIMP</a>.
Those methods ranged from disappointing at best to downright ridiculous at worst.
The disappointing methods were simply a product of the best methods available at the time.
The ridiculous ones seemed to be due to a lack of subtlety.</p>
<h2 id="subtlety">Subtlety<a href="#subtlety" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Subtlety is a key requirement when approaching skin retouching.
There are certainly exceptions when required (high-fashion for instance) but it should always be approached from a minimalist perspective first.</p>
<p>Too often retouching skin is approached with a very heavy hand. 
In an attempt to <em>“clean”</em> the skin many will chase every last drop of detail out of an image, resulting in a fake and overly smoothed result (making mannequins).
<strong>This is bad</strong>.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-Oversmooth.jpg" width="640" height="640" alt="Oversmooth Mairi" />
<figcaption>
To reiterate: <strong>This is bad</strong>.
</figcaption>
</figure>

<p>Real skin has pores, bumps, spots, color, and other interesting things going on. 
The goal shouldn’t be to remove those characteristics, but rather to make some of them less pronounced <em>as needed</em>. 
A good rule of thumb is: </p>
<blockquote>
<p>“Never do more than good makeup can achieve.”</p>
</blockquote>
<p>Of course, some makeup artists are magicians. 
In fact, it can be very helpful to go out and research how they work and what their process and reasons are.
This can help you understand better how to approach all manner of retouching, particularly when using techniques like dodging/burning and color theory (as it relates to makeup and skin).</p>
<p>Keep in mind the context as well.
Candid images may require only a bare minimum of retouching (<em>if any at all</em>), while a fashion shoot may call for a stronger application.
For the best results, it helps to have a clear vision of what you want to achieve.</p>
<h2 id="tools">Tools<a href="#tools" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="blurring">Blurring<a href="#blurring" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One method of smoothing skin that can be found in many old tutorials on the subject involves using some variation of blurring the base image and masking the blurred regions into the image.
In theory the idea may seem sound but fails quickly on closer inspection.</p>
<p>A combination of the broad effects of blurring coupled with the indiscriminate application across all the textures in the skin make this a less than ideal approach.
All of those pores, spots, bumps, and colors get lost when using an indiscriminate function such as blurring the image.
While there may be a desire to remove some of those details, our eyes expect some sort of texture and detail to be there.
Loss of those details is what pushes the results into “mannequin” territory.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mannequin.jpg" width="960" height="640" alt='Mannequin by Horia Varlan'/>
<figcaption>
Mannequin territory<br/>
<em>“White male mannequin head in storefront window”</em> by <a href='https://www.flickr.com/photos/horiavarlan/4269156697'>Horia Varlan</a> (<a href='https://creativecommons.org/licenses/by/2.0/' class='cc'>cb</a>)
</figcaption>
</figure>

<p>Overall, this method should not even be considered as an option for skin retouching.
The results are never good, and the blur is indiscriminately destructive to the image.</p>
<h3 id="high-pass-low-pass-frequency-separation">High Pass/Low Pass Frequency Separation<a href="#high-pass-low-pass-frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A slightly more advanced way to approach skin retouching is to use a “high pass/low pass” (or “high frequency/low frequency”, or just “frequency separation”) technique to separate the image into two layers.
One layer would contain all of the high-frequency (fine) details while the other layer would contain the low-frequency (coarse) information.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-Base.jpg" height="640" width="640" alt="Mairi Base by Pat David"/>
<figcaption>
Mairi 
</figcaption>
</figure>

<p>The resulting layers can look strange to those not accustomed to seeing them.
The important thing to notice is the ability to isolate all high frequency details on a separate layer.
This allows us to independently modify the colors/tones from the details.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-HFLF.jpg" width="960" height="480" alt='Mairi Frequency Separation'/>
<figcaption>
Low Frequency (left) and High Frequency (right)<br/>
Created with a blur radius of 15px
</figcaption>
</figure>



<h4 id="create-frequency-separated-layers">Create Frequency Separated Layers<a href="#create-frequency-separated-layers" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Creating the frequency separated layers is relatively easy in GIMP.
Starting with the base image layer:</p>
<ol>
<li>Duplicate base layer<br/>
[<em>Layer &rarr; Duplicate Layer</em>]<ul>
<li>Name it “LF”</li>
</ul>
</li>
<li>Apply a Gaussian Blur to the “LF” layer<br/>
[<em>Filters &rarr; Blur &rarr; Gaussian Blur</em>]<ul>
<li>Choose an appropriate radius to isolate your desired high-frequency details (15px in the example)</li>
<li>The blur radius is ~1.5% of the width of the face</li>
</ul>
</li>
<li>Change “LF” layer blend mode to <em>Grain Extract</em></li>
<li>Create a new layer from visible<br/>
[<em>Layer &rarr; New from Visible</em>]<ul>
<li>Name it “HF”</li>
<li>Change “HF” layer blend mode to <em>Grain Merge</em></li>
</ul>
</li>
<li>Change “LF” layer blend mode back to <em>Normal</em></li>
</ol>
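<p>The arithmetic behind these steps is simple per-pixel math: <em>Grain Extract</em> computes <em>base &minus; blur + 128</em>, and <em>Grain Merge</em> computes <em>LF + HF &minus; 128</em>, so the recombined result is exactly the original (ignoring 8-bit clamping). A minimal NumPy sketch, with a crude box blur standing in for GIMP&rsquo;s Gaussian Blur (the identity holds for any blur):</p>

```python
import numpy as np

def box_blur(img, radius):
    """Crude separable box blur standing in for GIMP's Gaussian Blur."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

def frequency_separate(base, radius):
    lf = box_blur(base, radius)   # the blurred "LF" layer (step 2)
    hf = base - lf + 128.0        # Grain Extract: base minus LF, plus mid-grey
    return lf, hf

def recombine(lf, hf):
    return lf + hf - 128.0        # Grain Merge puts the details back

rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(32, 32))
lf, hf = frequency_separate(base, radius=3)
assert np.allclose(recombine(lf, hf), base)  # reconstruction is exact
```

<p>Because the reconstruction is exact for any blur, the radius only controls which details end up on the HF layer versus the LF layer.</p>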
<p>Visually, the result should look identical to the original base layer.
Technically the separated frequency layers now allow for much finer targeted editing.
The layers for the image will now have an HF layer (in <em>Grain Merge</em> blend mode) over a LF layer:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/HFLF Layers.png" alt="GIMP Layers Dialog Frequency Separation" />
<figcaption>
Layers after going through a frequency separation.
</figcaption>
</figure>

<p>The choice of radius for the <em>Gaussian Blur</em> operation will determine the level of details that get separated from the low-frequency layer.  Smaller blur radii will isolate finer details (conversely larger radii include larger details).</p>
<h4 id="skin-retouching-with-frequency-separation">Skin Retouching with Frequency Separation<a href="#skin-retouching-with-frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Consider now the results from the separation.  In particular notice which types of skin features occur in each layer.</p>
<p>Pores, light wrinkles, crows-feet, and small details are separated into the HF layer, while larger skin tones remain on the LF layer.
Overall skin tones can be evened out by smoothing the tones in the low frequency layer.</p>
<p class='aside'>
<span>A note on smoothing</span>
There are various ways of softening details on the different layers.
<br/>
The standard <em>Gaussian Blur</em> is one method that works well and quickly.
<span class='Cmd'>Filters &rarr; Blur &rarr; Gaussian Blur…</span>
<br/>
A better method might be using a <em>Selective Gaussian Blur</em> to only blur certain areas (based on the value difference between the pixel in consideration and its neighbors).
<span class='Cmd'>Filters &rarr; Blur &rarr; Selective Gaussian Blur…</span>
<br/>
If <a href="http://gmic.sourceforge.net/">G’MIC</a> is installed, there is also access to a <em>bilateral blur</em> filter (similar to <em>Surface Blur</em> in Adobe Photoshop) that is also an edge-preserving blur function.
<span class='Cmd'>Filters &rarr; G’MIC…<br/>
Repair &rarr; Smooth [bilateral]</span>
</p>
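<p>For intuition about why an edge-preserving blur behaves differently from a Gaussian: each output pixel is a weighted average of its neighbourhood, with weights falling off both with spatial distance <em>and</em> with tonal difference, so pixels across a strong edge contribute almost nothing. A naive NumPy sketch (this is not G&rsquo;MIC&rsquo;s actual implementation, and treating its variance parameters as Gaussian variances here is an assumption):</p>

```python
import numpy as np

def bilateral(img, spatial_var=10.0, value_var=7.0, radius=4):
    """Naive bilateral blur: a weighted mean of each neighbourhood,
    weighted by spatial closeness AND value similarity, so strong
    edges survive while similar tones get smoothed."""
    h, w = img.shape
    out = np.empty((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * spatial_var))
    padded = np.pad(img.astype(float), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_value = np.exp(-(patch - img[y, x]) ** 2 / (2.0 * value_var ** 2))
            weights = g_spatial * g_value
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A hard step edge survives, while each flat side stays flat:
step = np.zeros((16, 16))
step[:, 8:] = 255.0
assert np.allclose(bilateral(step), step, atol=1e-6)
```

<p>A plain Gaussian blur on the same step image would smear the edge; the value-similarity weight is what keeps pores and edges intact while blotches get averaged away.</p>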

<p>When considering a face for skin retouching it’s often best to consider each general contour area of the face separately.
This is mostly due to different areas of the skin having different characteristics (<em>i.e.</em> forehead wrinkles are often at a different scale than crow's feet or smile lines).  </p>
<p>Below is one example of a good starting point for contour consideration when smoothing.
The key is to vary the smoothing intensity for each region to obtain a good result.
There may not be a change required all the time, but it’s a good habit to get into for when it is needed.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Smooth Contour.jpg" width="640" height="640" alt="Mairi Contour Smoothing Areas"/>
<figcaption>
Areas of smoothing consideration
</figcaption>
</figure>

<p>A good place to start is often to address any “blotchiness” or uneven tones in the skin.
(Ideally this would be addressed through the use of foundation makeup.)
As seen above, those types of tones can be found on the Low Frequency layer.</p>
<p>Following the contour areas from above a <em>Bilateral Blur</em> (from G’MIC) is used to smooth the regions.
When using the <em>Free Select Tool</em> to select a region, remember to enable <em>Feather edges</em> from the tool options to make a smooth transition from the working area to the surrounding image.</p>
<p><span class='Cmd'>Filters &rarr; G’MIC…<br/>
Repair &rarr; Smooth [bilateral]</span></p>
<p>The defaults of <em>spatial variance</em>: 10, <em>value variance</em>: 7, and <em>iterations</em>: 2 are used.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi LF Smooth.jpg" alt="Mairi Low Frequency Smoothed" data-swap-src="Mairi-Base.jpg" width="640" height="640" />
<figcaption>
After smoothing the LF layer with a bilateral blur<br/>
Click to compare to original
</figcaption>
</figure>

<p>Visually, smoothing the Low Frequency skin tones provides a marked improvement to the perceived quality.
Importantly, notice that none of the finer details have been modified (wrinkles, pores, etc…).</p>
<p>At this point, regular workflows could still be used such as spot healing or dodging &amp; burning (on either LF or HF layers as needed).</p>
<h4 id="hf-lf-summary">HF/LF Summary<a href="#hf-lf-summary" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>High/Low frequency separation is a great tool to approach skin retouching due to its ability to let a retoucher approach the work in discrete layers.</p>
<p>If one wanted to isolate a series of frequencies, then things get a little trickier.
It would require generating an HF/LF pair separately for each detail size to be isolated.
The workflow would be: separate, retouch, separate again at a different size, retouch.  Rinse and repeat.</p>
<p>It turns out that there is already a very handy way to isolate multiple frequencies at once and still have a visual means of combining them easily to see the edits as they are being made:
<strong>Wavelet Decompose</strong>.</p>
<h3 id="wavelet-decompose">Wavelet Decompose<a href="#wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Wavelet Decompose allows you to generate multiple High Frequency layers (and a Low Frequency “Residual” layer) all at once.
Each of the HF layers uses the <strong>Grain Merge</strong> layer blending mode so that the composite image is reconstituted correctly.
This allows the retoucher to make modifications to any of the scale (frequency) layers while seeing the results immediately on the canvas.</p>
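<p>Conceptually (the plugin&rsquo;s exact code aside), each detail scale is the difference between successive blurs, with the blur radius doubling per level; whatever blur remains at the end is the residual, and the residual plus all the detail scales reconstructs the image exactly. A rough NumPy sketch of that scheme, using a box blur as a stand-in kernel:</p>

```python
import numpy as np

def blur(img, radius):
    """Crude separable box blur as a stand-in wavelet smoothing kernel."""
    k = 2 * radius + 1
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, out)

def wavelet_decompose(img, levels=5):
    current, scales = img.astype(float), []
    for i in range(levels):
        low = blur(current, 2 ** i)    # radius doubles each level
        scales.append(current - low)   # detail isolated at this scale
        current = low
    return scales, current             # detail scales + residual

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (64, 64))
scales, residual = wavelet_decompose(img)
assert np.allclose(residual + sum(scales), img)  # sum telescopes back exactly
```

<p>The telescoping sum is why stacking the scale layers in <em>Grain Merge</em> mode over the residual reproduces the original image on the canvas.</p>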
<p class='aside'>
<span>Getting Wavelet Decompose [Plugin]</span>
The original plugin for Wavelet Decompose by the user <em>marcor</em> on the <a href="http://registry.gimp.org">GIMP registry</a> can be found here:
<span class='Cmd'><a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a> [registry.gimp.org]</span>
<br/>
Once installed the command is:
<span class='Cmd'>Filters &rarr; Generic &rarr; Wavelet Decompose …</span>
<br/>

<span>Getting Wavelet Decompose [Script-Fu]</span>
There is also a Script-Fu version by Christoph A. Traxler that can be downloaded from us here:
<span class='Cmd'><a href="wavelet-decompose.scm">Wavelet Decompose Script-Fu</a> [pixls.us]</span>
<br/>
Once installed the command is:
<span class='Cmd'>Image &rarr; Wavelet Decompose …</span>

</p>


<p>The advantage of a wavelet decomposition over a simple HF/LF separation shows in cases where there are details of a different size than your single HF layer that you still want to isolate.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Scales Horiz.jpg" alt='Mairi Wavelet Decomposed Scales' data-swap-src='Mairi%20Wavelet%20Scales%20Horiz%20Normal.jpg' width='960' height='640' />
<figcaption>
Wavelet Decomposed to 5 levels<br/>
Click to view equalized levels and enhance details
</figcaption>
</figure>

<p>Examining the equalized version of the previous image immediately shows the various scale features isolated through the decomposition.
In particular, the top row shows the finest details while the bottom row shows broad details with the color residual layer last.</p>
<p>With the various detail scales separated, the retoucher can easily make modifications on any given scale while seeing the results directly on the canvas.
This is due to the detail scale layers being set to “Grain Merge” blending mode in GIMP.</p>
<h2 id="application">Application<a href="#application" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Using wavelet scales for retouching works much like using a frequency separation.
The major difference is choosing which detail scale to apply the smoothing operations to, and at what intensity.</p>
<p>I have found that a good workflow is to generally start at the largest detail scale.
Experiment with smoothing methods and parameters until a good result is achieved without going too far.
If needed, repeat the operations with different parameters on the next smaller detail scale (with reduced parameters).</p>
<p>For this example, running the <em>Bilateral Blur</em> from G’MIC with the same values as in the <strong>Frequency Separation</strong> example above yields:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Decompose 5 Smooth.jpg" alt="Mairi Wavelet Decompose Smooth 5 by Pat David" width="640" height="640" />
<figcaption>
Click to compare:
<span class="toggle-swap" data-fig-swap="Mairi-Base.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="Mairi LF Smooth.jpg">Low Frequency Smooth</span>
<span class="toggle-swap" data-fig-swap="Mairi Wavelet Decompose 5 Smooth.jpg">Wavelet Smooth</span>
</figcaption>
</figure>

<p>The smoothing of the largest detail scale produces pleasing skin tones without removing too many details. </p>
<p>Having the detail scales separated out also allows for spot modifications without disrupting the textures of other scale layers.
For example, there is some slight skin discoloration on the model's lit cheek:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Residual Cheek Before Highlight.jpg" alt="Mairi Wavelet Residual Cheek Before Highlight.jpg" width="640" height="640" />
<figcaption>
A small color tone difference to repair.
</figcaption>
</figure>

<p>By working on the color (low-frequency) <strong>Residual</strong> layer, the color tones can be evened out using a <em>Heal Brush</em> and sampling from nearby skin.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Residual Cheek After.jpg" alt="Mairi Wavelet Residual Cheek After" data-swap-src="Mairi-Wavelet-Residual-Cheek-Before.jpg" width="640" height="640" />
<figcaption>
After healing the area on the <strong>Residual</strong> color layer<br/>
Click to compare to original
</figcaption>
</figure>

<p>Notice in particular that the fine details that make up the skin composition here are not modified.
Wrinkles, pores, and skin texture are kept intact while the underlying color tones for that region are blended smoothly into the surrounding area.</p>
<p>This same technique can come in very handy for lightening dark circles under the eyes, for instance.</p>
<h3 id="spot-healing">Spot Healing<a href="#spot-healing" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Once the skin tones have been smoothed as desired, work can continue with spot healing discrete problems as needed.
Simple, discrete skin blemishes are best approached with a spot healing tool after the global skin tones have been modified (this avoids having to apply the healing on each detail layer one at a time).</p>
<h2 id="example-nikki">Example: Nikki<a href="#example-nikki" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>A good image to see what this approach can accomplish is the lede image to this article, <a href="https://www.flickr.com/photos/patdavid/14490236250/">Nikki</a>.
This is a crop from the raw image untouched:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop.jpg" alt="Nikki Base" width="640" height="640" />
<figcaption>
Crop from <em>Nikki</em>, no retouching.
</figcaption>
</figure>

<p>To follow along you can <a href="Nikki-Base-crop-noresize.jpg">download the full-size base image</a> (360KB).</p>
<p>Running Wavelet decompose (plugin) against the image with the default of 5 scales,</p>
<p><span class="Cmd">Filters &rarr; Generic &rarr; Wavelet decompose …</span></p>
<p>will leave the image with layers that look like this:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/WD Layers.png" alt="GIMP Layers Wavelet Decompose" />
<figcaption>
Detail scales and residual layers from Wavelet decompose
</figcaption>
</figure>



<h3 id="what-we-re-targeting">What We’re Targeting<a href="#what-we-re-targeting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>After running a wavelet decompose on a layer there is a very simple method of exaggerating the details that will be targeted for smoothing the skin tones.
Simply toggle off the visibility of the <em>Wavelet residual</em> layer:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop-no-residual.jpg" alt="Nikki Base" width="640" height="640" />
<figcaption>
<em>Nikki</em> with only the detail scales visible over the base image (no residual layer).
</figcaption>
</figure>

<p>I <strong><em>highly</em></strong> recommend that you do <em>not</em> do this with the subject in the room!
Nobody looks good when the residual scale is removed from the image stack…</p>
<p>But it does nicely exaggerate the types of tonal variations that are prime candidates for smoothing and suppression.</p>
<h3 id="regions">Regions<a href="#regions" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Similar to the regions previously shown, we will walk through the retouching process based on that type of facial contour: forehead, nose, cheeks, chin, and lip.</p>
<p>I’ll normally use the <em>Free Select Tool</em> with a feathered radius around one-half an iris length (~30px in this case).
The radius value is mostly arbitrary and serves only to smooth the transition from areas being worked on (so adjust to taste).
I will also usually select regions as I go and remember to save the selections to a channel to make it easier to come back to them later if desired: </p>
<p><span class="Cmd">Select &rarr; Save to Channel</span></p>
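<p>The feathering itself is just a soft mask blend: where the mask ramps from 1 (inside the selection) to 0 (outside), the edit fades out accordingly, avoiding a hard seam. A minimal per-pixel sketch of that blend:</p>

```python
import numpy as np

def apply_with_mask(original, edited, mask):
    """Blend an edited region back using a (feathered) selection mask.

    Mask values run 0..1: 1 inside the selection, 0 outside, with a
    smooth ramp at the feathered edge blending the two images."""
    return mask * edited + (1.0 - mask) * original

rng = np.random.default_rng(2)
orig = rng.uniform(0, 255, (16, 16))
edit = np.zeros_like(orig)          # some hypothetical smoothed result
mask = np.zeros_like(orig)
mask[4:12, 4:12] = 1.0              # a hard selection; feathering would soften it
out = apply_with_mask(orig, edit, mask)
assert np.allclose(out[0, 0], orig[0, 0])   # outside the selection: untouched
assert np.allclose(out[8, 8], 0.0)          # inside the selection: edited value
```

<p>GIMP performs this blend for you whenever a filter runs inside a feathered selection; the sketch just shows why the transition region matters.</p>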
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop-regions.jpg" alt="Nikki Base Regions" width="640" height="640" />
<figcaption>
Regions for contour consideration.
</figcaption>
</figure>

<p><em>Wavelet Decompose</em> is run on the layer using the default number of wavelet detail scales: <strong>5</strong>.</p>
<h3 id="forehead">Forehead<a href="#forehead" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>With the forehead region selected a first pass can be made to smooth out the tones.
As mentioned previously, we’ll start on the largest detail scale <em>Wavelet scale 5</em>.</p>
<h4 id="wavelet-scale-5">Wavelet Scale 5<a href="#wavelet-scale-5" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Due to the size of the blemishes in this area, a slightly more aggressive smoothing amount can be used and adjusted to taste.  A <em>Bilateral Blur</em> can be used again, with slightly higher values than default:</p>
<ul>
<li>Spatial variance: 15</li>
<li>Value variance: 12</li>
<li>Iterations: 2</li>
</ul>
<p>Those parameters do a good job of initially dampening the skin tones here:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-forehead-w5.jpg" alt="Nikki Forehead Wavelet 5" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
<em>Bilateral Blur</em> on Wavelet scale 5 results<br/>
Click to compare to original
</figcaption>
</figure>


<h4 id="wavelet-scale-4">Wavelet Scale 4<a href="#wavelet-scale-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are still some uneven tones that were not affected by the smoothing on scale 5.
These are mostly smaller tones around blemishes.
So continuing with the same region, but now working on <em>Wavelet scale 4</em> should help dampen those even further.</p>
<p>Using the <em>bilateral blur</em> again with smaller parameter values than previously:</p>
<ul>
<li>Spatial variance: 7</li>
<li>Value variance: 4</li>
<li>Iterations: 1</li>
</ul>
<p>These values are determined through experimentation on the image. They are tuned in iterations until the result is visually pleasing, then dialed back a little bit more.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-forehead-w5-w4.jpg" alt="Nikki Forehead Wavelet 5 & 4" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
<em>Bilateral Blur</em> on Wavelet scale 4 results<br/>
Click to compare to original
</figcaption>
</figure>

<p>At this point, most of the skin tones have been evened out and what is left is mostly discrete skin blemishes that can be cleaned up with a heal tool later.
Working on just two wavelet scales significantly decreased the prominence of the blemishes and improved the overall smoothness of the tones.</p>
<h3 id="nose">Nose<a href="#nose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There is not as much smoothing required on the nose (vs. the forehead).
An initial pass on <em>Wavelet scale 5</em> with the default <em>bilateral blur</em> values:</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<p>helps to even the underlying tones nicely.
A second pass on <em>Wavelet scale 4</em> with much lower values on the blur helps to smooth the slightly finer details as well:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>These two passes result in this for the nose:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-nose-w5-w4.jpg" alt="Nikki Nose Wavelet 5 & 4" data-swap-src="Nikki-crop-forehead-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing on scales 5 &amp; 4 results<br/>
Click to compare to original
</figcaption>
</figure>



<h3 id="cheeks">Cheeks<a href="#cheeks" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Similar to the first pass on the nose, the cheeks can use an initial smoothing on <em>Wavelet scale 5</em> with the default values for the <em>bilateral blur</em>.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-cheeks-w5.jpg" alt="Nikki Cheeks Wavelet 5" data-swap-src="Nikki-crop-nose-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing the cheeks on wavelet scale 5<br/>
Click to compare to original
</figcaption>
</figure>

<p>To finish the cheeks, apply a slight smoothing on <em>scale 4</em> with small values:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>This smooths just a bit more than the previous step, usually without going too far (if it is too much, dial it back, of course).</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-cheeks-w5-w4.jpg" alt="Nikki Cheeks Wavelet 5 & 4" data-swap-src="Nikki-crop-cheeks-w5.jpg" width="640" height="640" />
<figcaption>
Smoothing the cheeks on wavelet scale 4<br/>
Click to compare to previous step 
</figcaption>
</figure>

<p>When clicking to compare in the above image, notice that the result of smoothing with low values on <em>scale 4</em> is subtle, but it is there.
Combined with the previous step, the overall result is a visually much smoother-looking complexion without smearing details.</p>
<h3 id="chin-lip">Chin &amp; Lip<a href="#chin-lip" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For both the upper lip and chin, as before, a good starting point is to try the default <em>bilateral blur</em> values on the largest scale (<em>scale 5</em>).</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5.jpg" alt="Nikki Chin & Lip Wavelet 5" data-swap-src="Nikki-crop-cheeks-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing the chin with default <em>bilateral blur</em> values
<br/>
Click to compare to original 
</figcaption>
</figure>

<p>Similar to the previous step, a further refinement of the skin tones can be achieved by smoothing on the next detail scale down, <em>wavelet scale 4</em>.
As before, using slight values:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 2</li>
</ul>
<p>will produce a nice finish for the detail tones in this area:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5-w4.jpg" alt="Nikki Chin Wavelet 5 & 4" data-swap-src="Nikki-crop-chin-lip-w5.jpg" width="640" height="640" />
<figcaption>
Further refining the chin and lip with smaller blur values on wavelet scale 4
<br/>
Click to compare to previous step 
</figcaption>
</figure>



<h3 id="results-wavelet-smoothing-only-">Results (Wavelet Smoothing Only)<a href="#results-wavelet-smoothing-only-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This process relied only on smoothing the tones on the largest detail scales, 4 &amp; 5.
Without doing any targeted modifications (beyond regions) here are the final results:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5-w4.jpg" alt="Nikki Wavelet Final" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
End result working only on wavelet scales 4 &amp; 5
<br/>
Click to compare to original 
</figcaption>
</figure>

<p>This is a fantastic base to continue working from (particularly when compared to the starting original image).
A few areas of spot healing as needed would be enough to make a great final image from here.</p>
<blockquote>
<p>The concept to keep in mind when working with Wavelet scales is to build up a series of small changes that together will produce a pleasing visual result.</p>
</blockquote>
<p>At this point only a few minor spot corrections and some color toning are required to reach a pleasing final result:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Final.jpg" alt="Nikki Final" data-swap-src="Nikki-crop-chin-lip-w5-w4.jpg" width="640" height="640" />
<figcaption>
Final result after spot corrections and color toning
<br/>
Click to compare to Wavelet smoothing only 
</figcaption>
</figure>

<hr>
<h2 id="moderation">Moderation<a href="#moderation" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As with many things in life, moderation is the key here.
Visually it can be helpful to occasionally check your image results zoomed far out.
If an image looks too smooth when zoomed out then dial it back.</p>
<p>Remember that this is an inherently <em>destructive</em> process and should be used as little as needed to get a desired result.</p>
<h2 id="resources">Resources<a href="#resources" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You can download the sample <em>Mairi</em> and <em>Nikki</em> GIMP .XCF files used to create the examples above here:</p>
<ul>
<li><a href="https://s3.amazonaws.com/pixls-files/Mairi-Example.xcf.bz2">Mairi</a> <sup>[<strong>34.4MB</strong>]</sup></li>
<li><a href="https://s3.amazonaws.com/pixls-files/Nikki-Example.xcf.bz2">Nikki</a> <sup>[<strong>7.7MB</strong>]</sup></li>
</ul>
<p>These are compressed GIMP .xcf files (hence the .xcf.bz2 file extensions).
They should open directly in GIMP (they were created in 2.8.14) without a problem.</p>
<h2 id="further-reading">Further Reading<a href="#further-reading" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This tutorial is a combination of material originally posted here: </p>
<ul>
<li><a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">Getting Around in GIMP - Skin Retouching (Wavelet Decompose)</a></li>
<li><a href="http://blog.patdavid.net/2014/07/wavelet-decompose-again.html">Getting Around in GIMP - Wavelet Decompose (Again)</a></li>
<li><a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html#GIMP-Skin">The Open Source Portrait (Postprocessing)</a></li>
</ul>
<p>The original wavelet decompose plugin from user <em>marcor</em> on <a href="http://registry.gimp.org/">registry.gimp.org</a> (the one I usually use):</p>
<ul>
<li><a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a></li>
</ul>
<p>A Script-Fu version of Wavelet Decompose by Christoph A. Traxler.
Place the .scm file into your scripts folder and the menu option “Wavelet Decompose …” will be under the <strong>Image</strong> menu:</p>
<ul>
<li><a href="wavelet-decompose.scm">Wavelet Decompose Script-Fu</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[An Opportunity]]></title>
            <link>https://pixls.us/blog/2015/04/an-opportunity/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/an-opportunity/</guid>
            <pubDate>Tue, 14 Apr 2015 02:59:55 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/04/an-opportunity/Mary Front.jpg" /><br/>
                <h1>An Opportunity</h1> 
                <h2>To help (and attract) new users!</h2>  
                <p>I think we are at an interesting time for digital imaging.
I came across this graph on <a href="http://petapixel.com/2015/04/09/this-is-what-the-history-of-camera-sales-looks-like-with-smartphones-included/">Petapixel</a> the other day that showed camera sales from 1947 - 2014:</p>
<p><img src="https://pixls.us/blog/2015/04/an-opportunity/graph.jpg" alt="CIPA Camera Production 1947-2014"></p>
<p>There was explosive growth, driven by the <span style="color: #4e92db;"><em>Compact Digital</em></span> market, right around 2000, likely due to the advent of inexpensive compact digital cameras and the ubiquity of home computers.
It was relatively cheap to get a decent digital camera, and the cost per photo suddenly dropped to previously unheard-of levels (compared to shooting film).</p>
<p>This meant that substantially more people were now able to take and share photographs.</p>
<p>That precipitous plummet after 2011 seems frightful for the photography industry as a whole, though.
The numbers from the graph would seem to indicate that production in 2014 dropped <em>below</em> the values from 2001.</p>
<!-- more -->
<p>Petapixel had a follow-up article where photographer Sven Skafisk added in smartphone sales using data from Gartner Inc.: </p>
<p><img src="https://pixls.us/blog/2015/04/an-opportunity/chartwithsmartphones.png" alt="Camera Sales with Smartphones"></p>
<p>If that graph doesn’t describe an industry in the throes of change, then I don’t know what does.
It looks like the camera industry is less in decline than in the midst of a big transition.</p>
<h3 id="so-what-"><a href="#so-what-" class="header-link-alt">So What?</a></h3>
<p>So why would this matter?
Because now, more than ever, there is a large number of people who may be interested in learning to process their photographs in some way.
As the cost and barriers to entry for photography as a hobby get lower, we see more and more people finding the fun and joy of photography.</p>
<p>Couple that with the fact that the modern language of media consumption is primarily <em>visual</em> and I see a great opportunity brewing.</p>
<p>I feel this is important to <em>us</em> as free software users as it gives us an opportunity to help make people aware of free software (and its ideas).
New hobbyists will invariably look for an inexpensive way to get started processing photos and will almost always run into various free software projects at some point in the search.</p>
<p>It’s entirely on us as a community to make sure that there will be good resources to learn from.
If we do a good enough job, some of those folks will realize that free software more than meets their needs.
If we do a <em>really</em> good job, some of those people will become valuable parts of our communities.</p>
<h2 id="articles-have-comments-now-also"><a href="#articles-have-comments-now-also" class="header-link-alt">Articles Have Comments Now Also</a></h2>
<p>So I have now also enabled the comments for more than just blog posts.
They should now be working just fine on full articles as well.
So feel free to head over to <a href="http://lightsweep.co.uk">Ian Hex’s</a> neat <a href="http://pixls.us/articles/luminosity-masking-in-darktable">Luminosity Masking in darktable</a> tutorial and leave a comment to let him know what you thought of it!
(Or any of the other articles, too.)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Forum]]></title>
            <link>https://pixls.us/blog/2015/04/a-forum/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/a-forum/</guid>
            <pubDate>Fri, 10 Apr 2015 14:40:44 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/04/a-forum/Glades.jpg" /><br/>
                <h1>A Forum</h1> 
                <h2>For Discourse, if you will...</h2>  
                <p>After much hard work, that basically consisted of me annoying darix as often as possible, I am glad to say that we finally have a <a href="http://discourse.org">Discourse</a> instance set up!
<strong>Super Big</strong> thank you to darix for all the help!</p>
<h2 id="so-what-"><a href="#so-what-" class="header-link-alt">So What?</a></h2>
<p>What does this mean?
For starters, we now have a forum/community in place that we can start building around photography and free software.</p>
<p>A neat side-effect of this forum is that we now also have a way to embed forum threads as comments on posts (only blogposts at the moment - I’ll add them to articles shortly).</p>
<p>At the bottom of any blog post you should now see either a series of conversations happening with a <code>Continue Discussion</code> button or a <code>Start Discussion</code> link.
Either of those buttons will take you to the actual forum to continue the conversation.
Replies to topics that are tied to posts will show up as a conversation at the bottom of the post (check the bottom of this post).</p>
<p>The site is <em>open</em> and <em>live</em> at the moment (if a bit bare-bones).
Feel free to drop by and create an account, comment on things, start new topics, etc.
I’m testing things out at the moment to see if I need to possibly bump the server specs in order to handle the loads (most likely).
(In the course of writing this, I went ahead and bumped the server RAM to 2GB - so it should run smoothly).</p>
<!-- more -->
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[All the Articles]]></title>
            <link>https://pixls.us/blog/2015/03/all-the-articles/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/03/all-the-articles/</guid>
            <pubDate>Mon, 30 Mar 2015 22:31:36 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/03/all-the-articles/M31 - Adam Evans.jpg" /><br/>
                <h1>All the Articles</h1> 
                <h2>My God, It's Full of Articles</h2>  
                <p>I spent a little time struggling conceptually with how I wanted to categorize the different types of content I am planning for this site.
As I had <a href="https://pixls.us/blog/2015/02/some-updates/">previously noted</a>, I was already done with creating a <em>blog post</em> type of content, and had noted that I was working on how to show tutorials and ‘showcase’ types of posts.</p>
<p>Apparently, I had the answer in mind when I created that graphic last month.
If you notice, the two other types of content I am working on, <em>Tutorials</em> and <em>Showcase</em>, are both listed as type <strong>Articles</strong> on the graphic.</p>
<!-- more -->
<figure class='big-vid'>
<img src='http://pixls.us/blog/2015/02/some-updates/Some Updates 4.png' alt='site content types - Blog, Tutorials, Showcase' />
</figure>


<p>Of course.
There will only be two distinct types of content from the viewpoint of the site, <em>blogposts</em> and <em>articles</em>.
I will then use the features of the static-site generator I use for this site, <a href="http://metalsmith.io">metalsmith</a>, to manage the content presentation (tutorials, showcase, etc).
This will be handled through collections in metalsmith.</p>
<p>So at the end of the day, even though there will be a section of <em>Tutorials</em> and <em>Showcase</em> or whatever else I come up with (or someone else), the bottom line is that the base content object will be an <strong>Article</strong>.</p>
<p>I like this approach, as it leaves a large amount of flexibility while maintaining a nice sense of simplicity.
(Anything that lowers the barrier to writing and publishing material is good in my book).</p>
<h2 id="an-aside-on-collections-in-metalsmith"><a href="#an-aside-on-collections-in-metalsmith" class="header-link-alt">An Aside on Collections in Metalsmith</a></h2>
<p>This is just a note to myself in case I forget what I was on about with collections.</p>
<p>There are basically two ways of associating an <em>article</em> with a collection, through metadata on the file and through a matching pattern during compile time.
Unfortunately, as near as I can tell, you can’t do them both at the same time for the same collection type.</p>
<h3 id="metadata"><a href="#metadata" class="header-link-alt">Metadata</a></h3>
<p>Doing it through metadata association only requires that the collection type is called out in the front-matter of the file, like <code>collection: tutorial</code>.
For example, here’s a sample of the front-matter for this blog post:</p>
<pre><code class="lang-yaml">---
date: 2015-03-30T17:31:36-05:00
title: &quot;All the Articles&quot;
sub-title: &quot;My God, It&#39;s Full of Articles&quot;
lede-img: &quot;M31 - Adam Evans.jpg&quot;
author: &quot;Pat David&quot;
collection: blogposts
layout: blog-posts.hbt
---
</code></pre>
<p>In this case, the post will be added to the collection, <em>blogposts</em>.</p>
<h3 id="pattern-matching"><a href="#pattern-matching" class="header-link-alt">Pattern Matching</a></h3>
<p>In the <code>index.js</code> for the site, there’s a section for using collections where a pattern can be specified to add files:</p>
<pre><code class="lang-javascript">.use( collections({
    articles: {
        pattern: &#39;articles/*/index.html&#39;,
        sortBy: &#39;date&#39;,
        reverse: true
        }
}))
</code></pre>
<p>This glob pattern will simply add all the posts in a folder in the <code>articles/</code> directory to the collection, <em>articles</em>.</p>
<p>This is actually how I want to collect all <em>articles</em> on the site for archive purposes.
I’ll want a page on the site that will list all of the articles that will be published, regardless of further classifications.
I feel that it is helpful for people searching for information to have a single page listing of all the material on the site (I did something similar with my blog by adding <a href="http://blog.patdavid.net/p/archive.html">an archive page</a>).</p>
<h2 id="happy-"><a href="#happy-" class="header-link-alt">Happy!</a></h2>
<p>So these pieces falling into place make me happy, because it means I am much closer to having things set up the way I would like them.
I can get started writing these other article types now without worrying as much about the back end.</p>
<p>Rather, I only need to focus on creating the landing pages for the content type (tutorials/, showcase/, etc…).
Yay!
More time to spend on writing new stuff!</p>
<h2 id="discourse"><a href="#discourse" class="header-link-alt">Discourse</a></h2>
<p><img src="https://pixls.us/blog/2015/03/all-the-articles/discourse.png" alt="Discourse Logo"></p>
<p>I had mentioned it previously, but darix on <code>#darktable</code> has been an immense help in testing out <a href="http://discourse.org">Discourse</a> for me.
He has gotten it to a point where it mostly works so the only thing holding me back from getting it rolled out is deciding how/where to host the instance.</p>
<p>If anyone has any thoughts or suggestions, I’m all ears!
To use darix’s Discourse setup, I’ll need at least openSUSE 13.
Otherwise, I could probably buy a droplet on Digital Ocean and host it there for now.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Deep Links]]></title>
            <link>https://pixls.us/blog/2015/03/deep-links/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/03/deep-links/</guid>
            <pubDate>Tue, 24 Mar 2015 22:17:53 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/03/deep-links/More Mairi Experiments.jpg" /><br/>
                <h1>Deep Links</h1> 
                <h2>As well as a sort-of look for article/tutorial indexes</h2>  
                <figure>
<img src="https://pixls.us/blog/2015/03/deep-links/Deep-Thoughts.jpg" alt="Deep Thoughts by Jack Handy" title="I'm showing my age with this reference, aren't I?" />
</figure>

<p>I tried to find a good funny reference to <a href="http://en.wikipedia.org/wiki/Jack_Handey">Jack Handey</a> here but failed.
Which might be a good thing given how the reference likely shows my age…</p>
<p>I have been working on various bits of the site as well as finishing up a long-overdue article.
I’ve also been giving some thoughts in general about interesting ways to move forward with some ideas which I will bore you all with shortly.</p>
<!-- more -->
<h2 id="deep-linking"><a href="#deep-linking" class="header-link-alt">Deep Linking</a></h2>
<p>A while back I had <a href="https://pixls.us/blog/2014/09/an-about-page-and-help/#breaking-up-long-pages">some thoughts</a> around how best to format long form articles.
I finally decided to keep articles entirely on a single page as opposed to breaking them up across multiple pages.
Mostly this was because I know I personally hate having to click through too many times just to read an article, and the technique is often used as a cheap means to show more ads to readers.</p>
<p>The problem with single page articles is linking/referencing content at an arbitrary location in the page.
The markdown processor I’m using in <a href="http://metalsmith.io">metalsmith</a> <em>does</em> add a unique heading id to each html heading element, but doesn’t expose the link easily.</p>
<p>So I spent some time recently writing a small metalsmith plugin to do that for me.
In the <a href="https://pixls.us/articles/">articles</a> you can now get a direct link to a heading section by hovering the mouse pointer over a heading.
The link will become visible at the end of the heading (as a link icon):</p>
<figure style="border: solid 2px #999; padding: 1rem;">
<img src="https://pixls.us/blog/2015/03/deep-links/deep-link.png" alt="PIXLS.US deep link example" />
<figcaption>
The link becomes visible when hovering over a heading.
</figcaption>
</figure>
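For the curious, the heart of such a plugin can be sketched as below. This is a hypothetical reconstruction, not the actual plugin code: the function name, regex, and injected markup are all illustrative assumptions, based on the heading/anchor structure visible on the site.

```javascript
// Hypothetical sketch of a metalsmith plugin that appends a visible
// anchor link to each HTML heading that already carries an id.
// Assumes headings look like <h2 id="foo">Title</h2>.
function headingLinks() {
  return function (files, metalsmith, done) {
    Object.keys(files).forEach(function (path) {
      if (!/\.html$/.test(path)) return;
      var contents = files[path].contents.toString();
      // wrap: append a link icon pointing at the heading's own id
      contents = contents.replace(
        /<(h[1-6]) id="([^"]+)">(.*?)<\/\1>/g,
        '<$1 id="$2">$3<a href="#$2" class="header-link"><i class="fa fa-link"></i></a></$1>'
      );
      files[path].contents = Buffer.from(contents);
    });
    done();
  };
}
```

Hooked into the build with `.use(headingLinks())`, every heading then exposes its own deep link.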

<p>This lets you now link directly to that section.
So I can now link directly to content deep into the page itself, <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/#example-nikki">like this link</a> to the Nikki example for skin retouching.</p>
<p>These are the same heading links that are used for the <em>Contents</em> navigation pane on the menu:</p>
<figure>
<img src="https://pixls.us/blog/2015/03/deep-links/pixlsus-menu.png" alt="PIXLS.US Navigation Menu" width="640" height="640" />
</figure>

<p>This method of exposing a heading link is similar to what you may find on <a href="http://github.com">GitHub</a> for instance.
So, at least there’s now the ability to deep-link into articles as needed! :)</p>
<h2 id="skin-retouching-with-wavelets"><a href="#skin-retouching-with-wavelets" class="header-link-alt">Skin Retouching with Wavelets</a></h2>
<p>Also, I took a break from this other thing I’m working on to finish writing the <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/">Skin Retouching with Wavelet Decompose</a> article.</p>
<figure class='big-vid'>
<img src="https://lh3.googleusercontent.com/-NEKW7KPTLh0/U_lW3AoF3yI/AAAAAAAARN8/b2DSir8MK0s/s0/Nikki-after-opt.jpg" alt="Nikki by Pat David" />
<figcaption>
<em>Nikki</em> is a sample image from the <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/">Skin Retouching with Wavelets</a> article.
</figcaption>
</figure>

<p>This poor article has been in the queue for what feels like forever, so it’s nice to finally be able to publish it.
This particular article is a combination of many of the previous things I had written around using wavelet scales for retouching work.
If you get a chance to read it, I’d love to hear what anyone has to say about it!</p>
<h2 id="articles-index-page"><a href="#articles-index-page" class="header-link-alt">Articles Index Page</a></h2>
<p>I’m still experimenting with the look and feel of the <a href="https://pixls.us/articles/">articles index page</a>.
If you follow that link you’ll see one of the ideas I currently have for laying it out.
I’m not 100% sold on this layout yet, as it may get cumbersome with many articles at once.</p>
<p>I may also provide links at the top of the page for particular content (tutorials, showcases, by tag/software, etc…).</p>
<p>Speaking of which, I’m wondering from a content management standpoint if it makes more sense to publish every item on the site as an “article” and then handle the categorization and display as a function of tags/categories on the posts.
Not quite sure just yet.
I’ll still need to fiddle with some other layout/organizational ideas.</p>
<h2 id="on-another-note"><a href="#on-another-note" class="header-link-alt">On Another Note</a></h2>
<p>I finally also fixed the path problem when generating the blog post listing page.
I had a problem where locally referenced images for a post (relative to the post directory) didn’t have their paths updated when showing them on the blog index page.
So I took some time and repaired it with a small <a href="http://handlebarsjs.com">handlebars</a> helper function.</p>
<p>For instance, the <em>Deep Thoughts</em> image at the beginning of this post wasn’t showing correctly from the blog index page before I fixed it.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Some Updates]]></title>
            <link>https://pixls.us/blog/2015/02/some-updates/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/02/some-updates/</guid>
            <pubDate>Thu, 26 Feb 2015 21:38:21 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/02/some-updates/Dorothy.jpg" /><br/>
                <h1>Some Updates</h1> 
                <h2>Yes still writing and working!</h2>  
                <p>I hate when things take me away for a little while, but won’t make any apologies just yet for having little activity here!
It’s mostly a one-man show here at the moment so I do beg for some patience as I build things out and get articles together.</p>
<p>Speaking of building things out…</p>
<h2 id="site-structure"><a href="#site-structure" class="header-link-alt">Site Structure</a></h2>
<p>I have been giving some thought to the general site structure lately.
I thought it might be fun to talk about it briefly.</p>
<p>My original (and still current) intention for the main piece of content for PIXLS.US is a tutorial.
It’s the main type of content I was writing on <a href="http://blog.patdavid.net">my blog</a> as well as what I’ve been trying to update on <a href="http://www.gimp.org/tutorials">http://www.gimp.org/tutorials</a>.
It’s a nice, known quantity…</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates.png"/>
</figure>

<p>So I spent my early time building the site focusing on the layout and design of tutorial pages.
Fonts, sizes, weights, layout, and more.
It’s just the way I think.
Plus, if I did a decent job on this layout, I wouldn’t have to worry about fiddling with it later and could instead focus on writing.</p>
<p>I finally ended up with a layout that I liked (basically what you’re reading on right now).
The problem was, I wanted a bunch of tutorials, not just one!</p>
<p>So with a little work and the help of some contributors (yay <a href="http://lightsweep.co.uk/">Ian Hex</a>!), I was looking at a few different tutorials now for the site.  Yay!</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 2.png"/>
</figure>

<p>The problem now was that I needed to create a nice page to help guide users to the various tutorials.
This is <em>still</em> not done…</p>
<p>So here I am at the moment still working on how best to showcase the neat tutorials on an index page of some sort:</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 3.png"/>
</figure>

<p>I need to find an attractive and usable means of listing the various tutorial articles.
So this is one of the things that has been taking up some of my time.</p>
<p>The main page has also been occupying some of my attention,
as I’m not 100% sure how to present all the site information (tutorials, blog posts, showcases, etc…).
There’s kind of a running theme here I guess.</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 4.png"/>
</figure>

<p>I’m also going to be trying to produce some “Showcase” type of article posts that will highlight a F/OSS photographer or images.</p>
<p>The blog pages I’ve already finished (it’s what you’re reading now).
I’ve also mostly gotten the index pages for the blog in a workable state.
I took some time recently to paginate the blog index pages as well so as to not try to load the entire post history on a single page.</p>
<p>To summarize, there are a few things yet to design and code.
I’m working on getting them so we can have an actual launch.</p>
<ul>
<li><p><strong>Main Page</strong></p>
<p>  I still need to design and layout how best to show off the site content.</p>
</li>
<li><p><strong>Tutorial/Articles Page</strong></p>
<p>  This is another page to design and layout.
I have some ideas and neat content already written, so this is just designing the page.</p>
</li>
<li><p><strong>Showcase Pages &amp; Index</strong></p>
<p>  These pages will be functionally the same as the article pages, but the content will focus more on showcasing FL/OSS artists and their works.
I’ll categorize these pages differently so I can collect them on their own index page separate from the tutorials.</p>
</li>
</ul>
<h2 id="in-closing-"><a href="#in-closing-" class="header-link-alt">In Closing…</a></h2>
<p>So, things are moving along (albeit slower than I would like).
I’m building the scaffolding for the future, so I don’t feel so rushed.
Better to do it well than quick in my opinion.</p>
<h3 id="contributing"><a href="#contributing" class="header-link-alt">Contributing</a></h3>
<p>Also, if anyone would like to immortalize themselves on the early pages of an experimental website to bring high quality tutorials and discussions to the Free/Open Source Imaging world – well then you know where to turn: <a href="mailto:pat@patdavid.net?Subject=PIXLS.US">pat@patdavid.net</a>.</p>
<p>I promise I don’t bite (hard).</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Another Article Done]]></title>
            <link>https://pixls.us/blog/2015/01/another-article-done/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/01/another-article-done/</guid>
            <pubDate>Wed, 07 Jan 2015 14:30:35 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2015/01/another-article-done/Ian_Hex.jpg" /><br/>
                <h1>Another Article Done</h1> 
                <h2>Ian Hex and Luminosity Masks in darktable</h2>  
                <p>2015 seems to be getting started nicely! </p>
<p>Just before the holidays <a href="http://lightsweep.co.uk">Ian Hex</a> sent me his finished tutorial to post, and I just finished editing it.
It’s a wonderful look at using Luminosity Masks in darktable for targeted adjustments (parametric masks, in darktable-speak).
You can find the new tutorial here:</p>
<p><a href="https://pixls.us/articles/luminosity-masking-in-darktable/"><strong>PIXLS.US: Luminosity Masks in darktable</strong></a></p>
<!-- more -->
<p class="aside">
On a side note, I had previously written about doing <a href="http://blog.patdavid.net/2013/11/getting-around-in-gimp-luminosity-masks.html">Luminosity Masks in GIMP</a> on my personal blog, and yes I will be porting that tutorial here a little later!
</p>



<h2 id="still-writing"><a href="#still-writing" class="header-link-alt">Still Writing</a></h2>
<p>I am still working on the Wavelet article (I took a break to copyedit Ian’s article).
I am continuing my work on that article as well as taking a rudimentary first stab at an article index page (or possibly a variation for a main landing page for the site).</p>
<p>Just need to decide on an attractive and functional layout for presenting the list of articles we have available.
I’m also open to suggestions if any of you readers out there have seen something that you think would be appropriate or neat to consider…</p>
<p>I am also open to taking submissions from folks who may have the mental fortitude to write something for the site.
Just shoot me any ideas/sketches/outlines you think may be appropriate!
(<a href="mailto:pat@patdavid.net">pat@patdavid.net</a> in case you didn’t already have it…)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Luminosity Masking in darktable]]></title>
            <link>https://pixls.us/articles/luminosity-masking-in-darktable/</link>
            <guid isPermaLink="true">https://pixls.us/articles/luminosity-masking-in-darktable/</guid>
            <pubDate>Tue, 06 Jan 2015 18:41:08 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/luminosity-masking-in-darktable/luminosity masks in darktable tutorial lede.jpeg" /><br/>
                <h1>Luminosity Masking in darktable</h1> 
                <h2>Making targeted adjustments to your RAWs</h2>  
                <p><strong>Luminosity Masking</strong>, the ability to create selections of your image based on its specific tones for ultra-targeted editing, is a relatively recent concept favoured by landscape photographers the world over.
In this article, we will explore how to create and use Luminosity Masks in the F/OSS RAW editor <a href="http://www.darktable.org">darktable</a>, so that you can make adjustments on your RAW files to isolated tones.</p>
<h2 id="what-is-luminosity-masking-">What is Luminosity Masking?<a href="#what-is-luminosity-masking-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Luminosity Masking is a technique developed in the last 10 years or so primarily by American Southwest landscape photographer Tony Kuyper over at <a href="http://goodlight.us/">goodlight.us</a>. 
Tony provides <em>extensive</em> writing and information on Luminosity Masking and how to create Luminosity Masks; in this article I’ll be primarily focusing on creating and using the masks in darktable, but if you want to really understand the basics I highly recommend giving <a href="http://goodlight.us/writing/luminositymasks/luminositymasks-1.html">Tony’s guide a good read over</a> first.</p>
<p>In essence, Luminosity Masking is about creating highly specific selections of your photo based on the tones of the image itself. 
This enables you to have extremely fine control over what parts of the photo are selected to make adjustments on (such as contrast, saturation etc.) whilst keeping other tones of the photo <em>completely unaffected</em>. 
Let’s quickly illustrate this with some screenshots. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-2.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here’s a shot I got of the Coral Beach on the Isle of Skye, Scotland, when my partner and I toured there recently in October 2014. 
It’s a pretty solid exposure. Let’s have a look at the histogram.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-3.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This article assumes you already have a basic understanding of histograms and how they work but I’ll give a quick summary here: the histogram represents the tonal information of your photo. 
It’s a graph of the light. 
On the left-hand side of the histogram is where all the Shadow information is, all the darker tones of the image. 
On the right-hand side you’ll find all the Highlight information, the brighter tones. 
And therefore, towards the middle of the histogram, is where all your midtones are located. 
The taller the graph is in a certain section of your histogram, the more information there is. 
So for this photo, you can see that we have a lot of shadow and highlight information, but hardly any midtones. 
We’re also not clipping (losing information in) the shadows or highlights, <em>i.e.</em> the graph isn’t flattened against either side of the histogram. 
So we’ve got plenty of room to work with here.</p>
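In code terms, a histogram is nothing more than a count of pixels per tonal bin, shadows on the left, highlights on the right. A minimal sketch (the bin count and the [0, 1] luminance range are assumptions for illustration):

```javascript
// Illustrative sketch: a histogram counts how many pixels fall into each
// tonal bin, from shadows (bin 0) up to highlights (last bin).
function histogram(luminances, bins) {
  var counts = new Array(bins).fill(0);
  luminances.forEach(function (l) {
    // clamp so a pure-white pixel (l === 1) lands in the last bin
    var b = Math.min(bins - 1, Math.floor(l * bins));
    counts[b]++;
  });
  return counts;
}
```

A taller bin simply means more pixels at that tone, which is all the graph above is showing.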
<p>So let’s say that I feel the sky is a little too bright and I want to darken it. 
The day was quite overcast at this point and the sky in this image feels too washed out. 
Let’s darken it by dropping the exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-4.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Now we’re starting to see some more definition in the sky but the image overall feels too dingy and dark. 
Let’s look at the histogram.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-5.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>As you can see, underneath the histogram I have the Exposure module open and I’ve pulled the exposure of the photo down to -1.02EV, darkening the image. 
This is reflected in the histogram. 
What was previously highlight information has been brought down so that it now resides in the midtones of the photo. 
This has brought back some definition and colour to the sky but now the rest of the photo is too dark; you can see on the histogram that the shadow information is bundling up on the left-hand side and we’re in danger of clipping the shadows, that is, losing information, which would result in blotches of pure black in the photo. 
Not good.</p>
<p>How do we get around this? Well, we create and use a Luminosity Mask that selects just the highlights in the photo, mostly the sky, but leaves the rest of the photo alone, keeping the shadows where they are. 
Here’s the result of using a Luminosity Mask to darken just the highlights in the photo. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-6.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>
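<p>Conceptually, a masked adjustment is nothing more than a per-pixel blend between the original and the adjusted image, weighted by the mask. A hedged sketch (the function and variable names here are illustrative, and darktable’s blending engine is more sophisticated than this):</p>

```python
import numpy as np

def blend_through_mask(original, adjusted, mask):
    """Where mask=1 the adjustment applies fully; where mask=0 the pixel is untouched."""
    return original * (1.0 - mask) + adjusted * mask

original = np.array([0.20, 0.90])   # a shadow pixel and a highlight (sky) pixel
mask     = np.array([0.00, 1.00])   # luminosity mask: selects only the highlight
adjusted = original * 0.5           # the darkening we want to apply
result = blend_through_mask(original, adjusted, mask)
# the shadow pixel stays at 0.20, the highlight drops to 0.45
```

<p>Intermediate mask values give a partial effect, which is what makes the feathering discussed later blend so smoothly.</p>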

<p><em>Much</em> better.
We’ve darkened the highlights, the sky, bringing back some colour and definition but have left the shadows, the beach and grass, well alone. 
Let’s see how our histogram is doing.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-7.jpeg" width='640' height='349' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Once again, I’ve opened up the Exposure module and dropped the exposure of the photo down to -1.02EV but you can see that the module looks a little different this time.
 That’s because I’ve applied a Luminosity Mask to the Exposure change.
 We’ll come back to that in a bit.
 Look at the histogram in the top-right.
 We’ve brought the highlights down into the midtones but kept the shadows where they are.
 We can make another targeted adjustment if we want.
 Let’s say that I want to brighten the shadows a little bit as well. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-8.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Ah-ha! 
Now we’re bringing back some clarity and interest to the foreground, that lovely sweeping curve of the grass, beach and loch, with the hill in the distance. 
Check out the histogram.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-9.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You can see at the bottom-right that I’ve made a new adjustment, known as “Exposure 1”, where I’ve increased the exposure of the image by 0.72EV. 
But again, I’ve applied a Luminosity Mask to this adjustment so that the brightening effect only happens to the shadows in the photo, leaving the highlights alone. 
In the histogram, you’ll note that we now have a lot of midtone information, by darkening the highlights and brightening the shadows. 
Tony Kuyper talks a lot about the <a href="http://goodlight.us/writing/magicmidtones/magicmidtones-1.html">“Magic Midtones”</a> and for good reason: the midtones are the real meat of the photo, and applying targeted adjustments to them can really take your work to the next level.</p>
<p>So, let’s review the changes we’ve made to this photo.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp.jpg" width='960' height='636' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 1</em>: Original RAW<br/>
<em>Fig. 2</em>: Whole photo darkened<br/>
<em>Fig. 3</em>: Highlights darkened only<br/>
<em>Fig. 4</em>: Shadows brightened as well<br/>
</figcaption>
</figure>

<p>And let’s also look at how the histogram has changed.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp2.jpg" width='960' height='530' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 5</em>: Original RAW<br/>
<em>Fig. 6</em>: Whole photo darkened<br/>
<em>Fig. 7</em>: Highlights darkened only<br/>
<em>Fig. 8</em>: Shadows brightened as well<br/>
</figcaption>
</figure>



<h2 id="creating-luminosity-masks-in-darktable">Creating Luminosity Masks in darktable<a href="#creating-luminosity-masks-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Luminosity Masking is easy to do in darktable; it’s been built right into the software since v1.4 (and now we’re on v1.6). 
Every single module in darktable, whether that’s Contrast, Vibrance, Exposure etc., can have a Luminosity Mask applied to it for targeted adjustments. 
Let’s demonstrate on a new image.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-1.jpeg" width='960' height='540' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here’s a shot I grabbed on that same tour of Scotland in October 2014, this time of the Glenfinnan Monument. 
Pretty neat, eh? If you look at the histogram at the top-right, you’ll see that I have a lot of shadow information (in fact it comes close to clipping) and a good range of highlight information that moves into the midtones as well. 
Thankfully there’s no actual clipping going on, but the photo is too dark, with the monument and mountains appearing almost as shadowy silhouettes against the sky. 
What we want to do is to brighten up those shadows to bring back the details and colour in the monument and the mountains. 
We may also do a smidgen of highlight darkening as well. </p>
<p>So, let’s open the Exposure module and I’ll walk us through it.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-10.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You can find the Exposure module in the Basic Group, represented by the hollow white circle icon. </p>
<p>The magic we’re looking for is under the “blend” dropdown:</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-11.jpeg" width='640' height='349' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Simply select “parametric mask”. 
This is where the magic is. 
In my view, it should be renamed to Luminosity Mask, but that’s just me. </p>
<p>This is where we can create a mask of the photo by selecting just certain tonal ranges. 
Now, we’re not going to go into detail on every aspect of this masking system; I’ll leave that for you to experiment with. 
Just note that this “parametric mask” function is available in <em>every darktable module</em>, so you can apply Luminosity Masks on Exposure, Saturation, Contrast, Vibrance, Local Contrast… whatever you wish. 
This is neat and very powerful. </p>
<p>So, next step: select the “L” tab for “Luminosity” – located on the far right of the other tabs “g”, “R”, “G”, “B”, “H”, “S” — and then select the little icon that has the black circle in the white square, this will show the mask.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-12.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This is what your photo will now look like.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-13.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><strong><em>Don’t panic!</em></strong></p>
<p>All this yellow is telling you is that, currently, any Exposure adjustment you make will take effect on the <em>whole photo</em>. Clearly, this isn’t what we want. 
What we’re going to do is adjust the Input slider to start narrowing down our selection to just the tones we want; in this case, we’re after the shadows so we can brighten them up whilst leaving the highlights alone. 
We can do this by bringing the sliders on the right-hand side of the Input slider down towards the left.
This will start deselecting the highlights of the photo as we narrow our mask further towards the shadows.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-14.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>As you can see, I’ve brought the Input sliders from the right-hand side down to 25, very close to the left-hand sliders. 
This is reflected in the mask, as we’re now starting to deselect some of the brighter highlights in the sky. 
But we want to narrow it down further so that we’re targeting just the darkest parts of the photo: the mountains, foreground and monument.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-15.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Boom!</em>
We’ve had to bring the sliders down all the way to 5 to cut off the highlights in the sky. 
We’ve also managed to deselect some of the brighter highlights in the foreground as well. 
Let’s just make one final adjustment to the sliders before we start brightening the Exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-16.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here, all we’ve done is move the bottom right-hand slider back up towards the highlights a little bit. 
What this does is feather and soften the mask so that when we do our Exposure brightening it will look more natural and blend better. </p>
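<p>If you like to think in code, the Input slider behaviour can be modelled as a trapezoid over luminosity: the inner pair of markers bounds the fully-selected range, and the gap out to each outer marker is the feathered ramp. A simplified sketch (assuming a 0–100 luminosity scale; darktable’s real parametric masks add channel choice, polarity and mask blur on top of this):</p>

```python
import numpy as np

def trapezoid_mask(L, bottom_lo, top_lo, top_hi, bottom_hi):
    """Mask is 0 below bottom_lo, ramps up to 1 by top_lo, stays at 1 until
    top_hi, then ramps back down to 0 at bottom_hi. The ramps are the feathering."""
    m = np.ones_like(L, dtype=float)
    up = (L >= bottom_lo) & (L < top_lo)
    m = np.where(up, (L - bottom_lo) / max(top_lo - bottom_lo, 1e-9), m)
    down = (L > top_hi) & (L <= bottom_hi)
    m = np.where(down, (bottom_hi - L) / max(bottom_hi - top_hi, 1e-9), m)
    m = np.where((L < bottom_lo) | (L > bottom_hi), 0.0, m)
    return m

# A shadows mask like the one above: full selection up to L=5,
# feathered out between L=5 and L=20.
L = np.array([2.0, 12.5, 40.0])
m = trapezoid_mask(L, 0.0, 0.0, 5.0, 20.0)
# -> roughly [1.0, 0.5, 0.0]
```

<p>Moving that bottom right-hand slider up, as we just did, widens the down-ramp — a gentler transition instead of a hard cut-off.</p>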
<p>OK, we’ve got our initial mask targeted nicely towards the shadows; hide the mask by selecting that black circle in the white square icon again. 
Now it’s time to start brightening up the Exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-17.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Boom!</em> 
Much better. Let’s do a side-by-side comparison.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp3.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 9</em>: Original RAW unedited<br/>
<em>Fig. 10</em>: Foreground, monument and mountains (shadow areas) have been brightened through a Luminosity Mask, leaving the highlights in the sky alone.
</figcaption>
</figure>

<p>Already, we’ve made a striking change to how the photo looks. 
There’s now a lot more interest as our subject, the monument, is much brighter with plenty of details available. 
However, we’re not quite done. 
The sky to the right of the monument looks a bit… <em>funky</em>. 
That’s because when we feathered our Luminosity Mask a bit we selected too much highlight information. 
This has resulted in part of the sky getting brighter but the rest of the sky staying the same, which looks strange. 
We can correct this by moving the bottom right-hand slider back to the left a bit, cutting off those highlights in the sky more.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-18.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Better. 
By moving the bottom right-hand slider back down from 20 to 8 the sky looks more natural. </p>
<p>Already, this photo is looking a lot better. 
Let’s take some of those bright highlights in the sky and darken them a bit, so that the eye isn’t distracted and focuses more on the monument. 
To do this, we’re going to make another Exposure adjustment.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-19.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>To the left of the Exposure module name you’ll see four little icons. 
Click the rightmost one and then select “New Instance” in the dropdown that appears.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-20.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>We now have a new module called “Exposure 1” that sits on top of our previous Exposure module. 
With this Exposure 1, we’re going to create a Luminosity Mask targeting the highlights so that we can darken the exposure in them.</p>
<p>Same process as before: in “Exposure 1” select the “blend” dropdown then select “parametric mask”.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-21.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Select the “L” tab for Luminosity then make the mask visible by clicking on the black circle in the white square icon.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-22.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This time, we’re going to take the left-hand sliders and bring them to the right, slowly deselecting the shadows until we’ve targeted the highlight tones we want to darken.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-23.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>In our example, we’ve taken the left-hand sliders of Input up to 17 and then brought the bottom left-hand Input slider back down a little to 6 so that we feather the mask out for a more seamless blend.</p>
<p>Let’s start decreasing the exposure to see what it looks like. 
Just click on the black circle in the white square icon again to hide the mask and start decreasing the exposure slider.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-24.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Nice! 
Here, we’ve brought the exposure down by -0.50EV through our Luminosity Mask, targeting the highlights and darkening them. 
We’ve also tweaked the bottom left-hand slider by bringing it down to 3 for a bit more feathering.</p>
<p>Here’s a before and after.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp4.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig 11</em>: Shadows brightened only via a Luminosity Mask<br/>
<em>Fig 12</em>: Brightest highlights darkened through a Luminosity Mask in a new exposure adjustment.
</figcaption>
</figure>

<p><em>Giggedy.</em> So this photo is starting to look pretty sweet. 
Let’s just make one more adjustment, globally this time with no Luminosity Mask. 
I want to generally increase the overall exposure of the photo. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-25.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Good! 
As you can see, on the right-hand side I’ve created another Exposure module, now called “exposure 2” and have increased the overall exposure of the photo by 0.50EV. </p>
<p>To round up this tutorial, let’s look into making one more adjustment through Luminosity Masks. 
Now that we’ve brightened up the shadows and darkened down the highlights, we’ve moved a lot of the tones in the photo towards the midtones. 
This is where the real meat of the image is. 
We can now really give this photo some punch and pop by applying some contrast to just the midtones of the image. 
Here’s how.</p>
<p>Open the Contrast, Brightness &amp; Saturation module, select “blend” then select the “parametric mask” option in the dropdown.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-26.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You’ll note this time round that the tabs in the module—the “L”, “a”, “b”, “C” and “h”—are different to the Exposure module. 
Don’t worry. 
Just leave the “L” for Luminosity selected. 
We’re now going to adjust the Input sliders so that we’re targeting just the <em>midtones</em> of the photo. 
We do this by deselecting <em>both the highlights and shadows</em>. 
This is done by moving the left-hand sliders up and the right-hand sliders down towards the middle.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-27.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>So here’s how my midtones Luminosity Mask looks. 
On the right, you can see that I’ve brought the sliders towards the middle and then dropped the bottom slider of the pair away so that there’s some feathering. 
This is quite a tight midtones mask but that’s OK. 
Now let’s hide the mask and start increasing the contrast. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-28.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Much better</em>. 
Because we’re only targeting a tight selection of the midtones we can make quite an aggressive contrast adjustment (I’ve brought the contrast slider way up to 50). 
I’ve also increased the brightness of the midtones a little, pulled down the saturation to compensate for the contrast adjustment, and also increased the blurring of the mask to 100, feathering out the mask further for a more natural adjustment. </p>
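<p>Putting the pieces together, a midtones contrast boost can be thought of as pushing luminosity away from mid-grey, weighted by a midtones mask. A toy sketch of the idea (assumptions: a 0–100 luminosity scale, a simple linear contrast pivoting at L=50, and a hand-rolled mask — darktable’s Contrast operator and mask blur differ in detail):</p>

```python
import numpy as np

def midtones_mask(L, lo=25.0, hi=75.0):
    """1 between lo and hi, fading linearly to 0 at L=0 and L=100."""
    return np.clip(np.minimum(L / lo, (100.0 - L) / (100.0 - hi)), 0.0, 1.0)

def midtone_contrast(L, amount):
    """Push values away from mid-grey (L=50), but only where the mask selects."""
    mask = midtones_mask(L)
    contrasted = 50.0 + (L - 50.0) * (1.0 + amount)
    # blend the contrasted values in through the mask, keeping extremes put
    return np.clip(L * (1.0 - mask) + contrasted * mask, 0.0, 100.0)

L = np.array([10.0, 40.0, 60.0, 95.0])
out = midtone_contrast(L, 0.5)   # aggressive boost, midtones only
```

<p>Because the mask fades out towards the extremes, an aggressive contrast amount barely touches the deepest shadows and brightest highlights — which is exactly why such a strong adjustment stays natural-looking here.</p>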
<p>Let’s look at the before and after.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp5.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 13</em>: Our RAW with the shadows brightened and the highlights darkened <br/>
<em>Fig. 14</em>: Contrast increased in the midtones through a Luminosity Mask 
</figcaption>
</figure>

<p>You can see the biggest difference this contrast adjustment made was to the texture in the foreground grass and the stone detail in the monument. 
You can make out the individual clumps of growth in the foreground as well as the individual tones in the stone of the monument. 
Neat. </p>
<p>Finally, here’s an overview of the adjustments we’ve made to this photo.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp6.jpg" alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 15</em>: Original unedited RAW<br/>
<em>Fig. 16</em>: Shadows brightened through a Luminosity Mask<br/>
<em>Fig. 17</em>: Highlights darkened through a Luminosity Mask<br/>
<em>Fig. 18</em>: Overall exposure increased a little, no mask<br/>
<em>Fig. 19</em>: Contrast in the midtones increased through a Luminosity Mask.
</figcaption>
</figure>


<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In this tutorial, I’ve only gone through the very basics of what is possible with darktable’s Luminosity Masks, showing how to make subtle adjustments to the shadows, highlights and midtones of a photo in order to balance the image better. 
But Luminosity Masks can be used for so much more and so I invite you to experiment! Try out the different modules available in darktable and see how you can apply various filters through different masks to achieve highly-specific adjustments to your RAWs like never before.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Still Writing]]></title>
            <link>https://pixls.us/blog/2014/12/still-writing/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/12/still-writing/</guid>
            <pubDate>Fri, 12 Dec 2014 02:15:16 GMT</pubDate>
            <description><![CDATA[<img src="https://lh3.googleusercontent.com/-QwTdTG8FL1Y/T9yrrP7f_eI/AAAAAAAAK14/UhCj5utvBbM/w1650-no/The%2BReverence%2Bof%2BSt%2BPauls.jpg" /><br/>
                <h1>Still Writing</h1> 
                <h2>Yes, things are still moving (slowly) along</h2>  
                <p>It’s been a busy month (+ &frac12;) for me personally.
Things have finally settled down so I can get back to writing articles and working on the site.</p>
<h2 id="wavelets-coming"><a href="#wavelets-coming" class="header-link-alt">Wavelets Coming</a></h2>
<p>As I mentioned in the <a href="https://pixls.us/blog/2014/10/iterating/">previous post</a>, I’m currently working through a re-write of the various tutorials I had done about using Wavelet Decompose for skin retouching.
I’m about <sup>2</sup>&frasl;<sub>3</sub> of the way through it now and expect to have it finished shortly.
<!-- more --></p>
<h2 id="guest-writer-ian-hex"><a href="#guest-writer-ian-hex" class="header-link-alt">Guest Writer Ian Hex</a></h2>
<p>I also previously mentioned that I’ve been reaching out to a few folks to see if they might be interested in writing some articles for the site.
I’m <em>extremely</em> pleased to say that <a href="https://plus.google.com/+IanHex/about">Ian Hex</a> is stepping up to the plate with a neat tutorial about <a href="http://www.darktable.org/">darktable</a> that is being written right at this very moment!</p>
<p>If you haven’t had a chance to see Ian’s work I highly recommend stopping by his site at <a href="http://lightsweep.co.uk/">http://lightsweep.co.uk/</a> to get a gander at some epic images from the UK.
I desperately want to hop on a plane and visit after seeing them!</p>
<p>His self-professed mission is:</p>
<blockquote>
<p>…to show off the beauty of British landscapes and architecture to the world</p>
</blockquote>
<p>and I’d say he’s doing a bang-up job of it so far!</p>
<!-- FULL-WIDTH -->
<figure class='full-width'>
<img src='https://lh4.googleusercontent.com/-v1YXb39LcGU/UgKMka3X-QI/AAAAAAAAcME/eLd41FOcZWg/w1650-no/fire%2Bof%2Bwhitbey%2Babbey.jpg' alt=''/>
<figcaption>
<em>Fire of Whitby Abbey</em> by <a href="http://lightsweep.co.uk">Ian Hex</a> (<a class='cc' href='https://creativecommons.org/licenses/by-nc-sa/3.0/' target='_blank'>cbna</a>)
</figcaption>
</figure>

<figure class='full-width'>
<img src='https://lh5.googleusercontent.com/-U-joYnXk96M/UydLySqCmJI/AAAAAAAAkoo/7GGzWvxCMsU/w1650-no/wonder%2Bof%2Bvariety%2Bgoogle.jpg' alt='' />
<figcaption>
<em>Wonder of Variety</em> by <a href="http://lightsweep.co.uk">Ian Hex</a> (<a class='cc' href='https://creativecommons.org/licenses/by-nc-sa/3.0/' target='_blank'>cbna</a>)
</figcaption>
</figure>
<!-- /FULL-WIDTH -->
<p>Ian will be writing about Luminosity Masks in darktable.
Given his results and body of work I am personally looking forward to this one!</p>
<p>Maybe if we get a good enough response with his post we can convince him to come back and write some more…</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Iterating]]></title>
            <link>https://pixls.us/blog/2014/10/iterating/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/10/iterating/</guid>
            <pubDate>Wed, 29 Oct 2014 02:59:05 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2014/10/iterating/LGM Bug.jpg" /><br/>
                <h1>Iterating</h1> 
                <h2>Minor changes and another tutorial</h2>  
                <p>I’m working my way through some of the suggestions I’ve received from many folks.
In particular, the “px” icon in the upper left to slide open the navigation and Table of Contents has been changed to a (hopefully) more familiar ‘hamburger’ icon.
I’ll also be testing some other things in the coming weeks as time permits such as having a TOC show up by default in the right &#8531; of the page at the top.</p>
<p>Don’t expect it too soon as I want to focus on writing more content first.
I’m aiming for a December-ish timeframe for a more official launch and want to make sure there is a decent amount of material for folks to consume.</p>
<!-- more -->
<h2 id="the-next-tutorial"><a href="#the-next-tutorial" class="header-link-alt">The Next Tutorial</a></h2>
<p>Speaking of material, I’m starting work on a tutorial for skin retouching with wavelet decompose.
I’ve <a href="http://blog.patdavid.net/2014/07/wavelet-decompose-again.html">written</a> about this <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">many times before</a>, but want to port the ideas over here.</p>
<figure>
<img src='http://1.bp.blogspot.com/-9kAx4JgN3Eg/U8avZLbi0PI/AAAAAAAAQ4o/tQlbL-G3u2E/w600/dot-closed-eyes-wd.jpg' alt='Dot Eyes Closed Wavelets'/>
<figcaption>
“Dot Eyes Closed” wavelet decomposition
</figcaption>
</figure>

<p>I have a few extra thoughts surrounding the use of wavelets as well as some minor changes in my workflow with them that should make a new writeup more interesting (hopefully).
I’ll also focus specifically on skin retouching as opposed to some of the other things that can be done with wavelets.</p>
<h2 id="more-support"><a href="#more-support" class="header-link-alt">More Support</a></h2>
<p>I have reached out to some of my favorite amazing photographers using F/OSS in their workflows and the response has been overwhelmingly positive.  I’ll speak more about the folks in a later post, but I am personally very thankful that they have taken the time to respond and that it’s been so positive!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[More Content]]></title>
            <link>https://pixls.us/blog/2014/09/more-content/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/more-content/</guid>
            <pubDate>Tue, 30 Sep 2014 14:54:37 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/blog/2014/09/more-content/will-write-for-food.jpg" /><br/>
                <h1>More Content</h1> 
                <h2>First article is done, more to come</h2>  
                <p>I’ve pretty much finished up the first article mentioned in the <a href="https://pixls.us/blog/2014/09/getting-closer">previous post</a>.
There is still a long way to go.</p>
<p>As much as I’d like to believe that <em>“If you build it, they will come”</em>, the reality is that nobody is coming until there is something worth coming for.
So I’m working hard on getting good content in place.</p>
<p>I’m also acutely aware that nobody will <em>stay</em> unless good content continues to be published, but that’s for another post.
<!--more--></p>
<h2 id="next-up"><a href="#next-up" class="header-link-alt">Next Up</a></h2>
<p>I am thinking the next article that I’ll update/port will be either <em>Luminosity Masks</em> or <em>Skin Retouching</em>.
I am also thinking that a port of my <a href="http://blog.patdavid.net/2012/06/getting-around-in-gimp-color-curves.html">older color curves</a> tutorials might be nice as well (particularly <a href="http://blog.patdavid.net/2012/07/getting-around-in-gimp-more-color.html">using sample points</a>).</p>
<p>That should get me to four good tutorials to start the site with.
At that point I can start queueing up the next few asap.</p>
<p>I also wanted to do more than straight single tutorials, though, which brings me to a question.</p>
<h2 id="types-of-content"><a href="#types-of-content" class="header-link-alt">Types of Content</a></h2>
<p><em>What types of content would those of you reading this be interested in?</em></p>
<p>At the moment I’m thinking of 3 main types of articles, with a possible (probable?) fourth:</p>
<ul>
<li>Tutorials</li>
<li>Workflows</li>
<li>Showcase</li>
<li>Getting the Shot</li>
</ul>
<p>A small explanation on what I’m thinking may help here.</p>
<h3 id="tutorials"><a href="#tutorials" class="header-link-alt">Tutorials</a></h3>
<p>These would be similar to the <a href="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/">Digital B&amp;W</a> article I’ve already ported.
If you’ve read most of my tutorials on my blog, then you’re already familiar with what I’m thinking for these.</p>
<p>These are straight tutorials looking at a single (usually) effect and how to achieve it.
The primary focus is on the steps and tools to produce the desired result.</p>
<h3 id="workflows"><a href="#workflows" class="header-link-alt">Workflows</a></h3>
<p>I am envisioning a <em>workflow</em> article to be more of a look at the creative process to achieve a final resulting image.
This is more along the lines of another previous set of posts I had written about: <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html">The Open Source Portrait</a> and the <a href="http://blog.patdavid.net/2013/08/an-open-source-headshot-ronni.html">Open Source Headshot</a>.</p>
<p>These articles would focus on all of the steps and tools to arrive at a resulting image.
The difference from a <em>tutorial</em> article is that if a <em>tutorial</em> article might explore how to use Wavelet Decompose for skin retouching, a workflow article might include using that technique (among others) to realize a final vision.</p>
<h3 id="showcase"><a href="#showcase" class="header-link-alt">Showcase</a></h3>
<p>Showcasing some of the amazing work I see occasionally is important as well, I think.
One, the artists doing this great work really do deserve to be talked about and exposed to a wider audience.</p>
<p>Second, artists producing great work with F/OSS act as ambassadors for what is possible using these tools.
Too often, people’s low opinion of F/OSS tools is shaped by the sub-standard work they see.
There are some amazing photographers working with these tools, and my hope is that they can stand as examples to not only showcase F/OSS but also as a bar for others to aim for (and hopefully smash through).</p>
<h3 id="getting-the-shot-"><a href="#getting-the-shot-" class="header-link-alt">Getting the Shot?</a></h3>
<p>I’m not 100% sure on this yet, but I think I was originally viewing this as a complete workflow from start to finish, including actually shooting.
This is more focused on the photographic process in general and things to keep in mind while capturing the shots for processing later.</p>
<p>HDR, lighting, models, clothes, make-up, landscape scouting, locations, etc…</p>
<h3 id="quick-tips-"><a href="#quick-tips-" class="header-link-alt">Quick Tips?</a></h3>
<p>I’m not at all sure about this, but the idea is there.
Possibly posts that are very short and targeted at a very specific task or function.
Something that might not really warrant a long-form article but could still be quickly useful for others.</p>
<p>I am reminded of this due to an <a href="https://www.youtube.com/watch?v=n4OBn5DJdjk&amp;lc">old video of mine</a> that I had done quickly for someone on G+ about how to add a watermark over an image.</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="560" height="315" src="http://www.youtube-nocookie.com/embed/n4OBn5DJdjk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>You can tell why making videos is best left to folks like Rolf…</p>
<h2 id="forum-and-comments"><a href="#forum-and-comments" class="header-link-alt">Forum and Comments</a></h2>
<p>Thanks to darix (once again) over in irc on <code>#darktable</code> for setting up a <a href="http://www.discourse.org/">Discourse</a> instance for me to play with.
I have used it previously on <a href="http://boingboing.net">boingboing.net</a>, and I rather like what I’ve seen.
It also appears that there may be a way to embed thread posts as well, which would be a nice solution for commenting.</p>
<h2 id="thoughts-"><a href="#thoughts-" class="header-link-alt">Thoughts?</a></h2>
<p>Anyone with any thoughts on this, as usual, feel free to drop me a line and tell me what you think!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Getting Closer]]></title>
            <link>https://pixls.us/blog/2014/09/getting-closer/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/getting-closer/</guid>
            <pubDate>Thu, 25 Sep 2014 22:18:12 GMT</pubDate>
            <description><![CDATA[<img src="https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg" /><br/>
                <h1>Getting Closer</h1> 
                <h2>First article is mostly written</h2>  
                <p>Just a quick update on a couple of interesting things.</p>
<p>The first article is almost done being re-written and updated.</p>
<p>I added some functionality to the slide-out menu and am still thinking about the best icon to use.</p>
<p>I also had a nice epiphany when I realized that the styling I had already written to make big videos works great for images as well.
<!--more--></p>
<h2 id="first-test-article"><a href="#first-test-article" class="header-link-alt">First Test Article</a></h2>
<p>The first article is almost done being ported and formatted.
For anyone who’s curious, it’s a long post from the five part series I did on B&amp;W conversion using GIMP (originally <a href="http://blog.patdavid.net/2012/11/getting-around-in-gimp-black-and-white.html">published on my blog</a>).</p>
<p>The writing is going a bit slow because I am also feeling out the formatting and a couple of other minor visual things as they relate to a full-blown article.
Of course, it doesn’t help that it’s also a really, really long article…</p>
<p>For those of you bothering to read this blog, and who want to take a look at the state of that article, it can be found here:
<a href="https://pixls.us/articles/digital-black-and-white-conversion-GIMP">Pixls.us: Digital B&amp;W Conversion (GIMP)</a>.
Just don’t forget to let me know if anything looks funky, or with any suggestions/comments/criticisms.</p>
<h3 id="speaking-of-long"><a href="#speaking-of-long" class="header-link-alt">Speaking of Long</a></h3>
<p>Speaking of which, one of my first conundrums while working on it was a question of load times vs. convenience. 
The original article was written as <em>five</em> separate blog posts which kept everything in reasonably bite-sized chunks to digest.
The problem is that as a reader I am sometimes annoyed at having to click through multiple pages to read an article and I thought that most readers here might feel the same way.</p>
<p>One of my concerns was load times and rendering speed of large pages.
I <em>think</em> I have all the assets set to load as quickly as possible above the fold.
I’ve tried to optimize all images as much as possible and am making sure to define explicit <code>width</code> and <code>height</code> attributes in the html to help the browser render without having to reflow (hopefully).</p>
<p>There are still a few optimizations that I haven’t implemented yet (minifying javascript and concatenating all my stylesheets for actual delivery), but I have them in the queue.
Oh, and spritesheets for some assets that I will get around to making soon as well.</p>
<p>So my current thought is to keep the articles to a single page, even if they are long.
I am also 100% open to other ideas as well so if you have one feel free to hit me up!</p>
<h3 id="getting-around"><a href="#getting-around" class="header-link-alt">Getting Around</a></h3>
<p>Long pages can be a bit cumbersome to navigate, though.
To help make it easier to target relevant information in the page, all of the headings in a page should have a unique id attribute.
This means that users will be able to link directly to sections of a long page (this seems to have fallen out of favor with many websites - why?!).</p>
<p>For instance, I can link directly to the previous section of this post by including the id of the element in the url:</p>
<pre><code>http://pixls.us/blog/2014/09/getting-closer/#speaking-of-long
</code></pre><p>I’m still thinking about the easiest/best way to present this capability to users, but the groundwork is there for the future.</p>
<h4 id="navigation"><a href="#navigation" class="header-link-alt">Navigation</a></h4>
<p>I’m not 100% sure this is obvious, but the “px” logo in the upper-left corner of the page <em>should</em> slide out a navigation from the left side of the page (assuming you have javascript enabled in your browser).
If you don’t have javascript enabled, then clicking the logo will take you to the footer of the page where the basic navigation links are located.</p>
<p class='aside'>
I’m also considering a re-working of the icon to possibly make it more obvious that it opens a menu.
Perhaps something like the “hamburger menu icon” is in order?
</p>

<p>The first set of links are the main ones for navigating the site <em>Home</em>, <em>Blog</em>, <em>Articles</em> and <em>Software</em>.
Just below that will be the navigation links for the contents of the current page.</p>
<figure>
<img src="https://pixls.us/blog/2014/09/getting-closer/nav-example.png" alt="pixls.us navigation pane screenshot" />
</figure>

<p>For no other reason than I thought it was neat, I also made it so that the background of each of the Table of Contents entries will be a slightly darker color relative to how far along you are in the page/section.
In the example above, I have already read <em>Getting Closer</em> and <em>First Test Article</em>, and I am ~75% of the way through the <em>Speaking of Long</em> section of the post.</p>
<p>Unfortunately, this won’t work without javascript enabled.
I am still thinking of a way to possibly include the TOC in the page without screwing up the layout too much.
Something to play with later I suppose…</p>
<h3 id="pretty-pictures"><a href="#pretty-pictures" class="header-link-alt">Pretty Pictures</a></h3>
<p>At the moment I am using a combination of serving up the images directly from my host, and using Google+ photos.
Mostly because I have limited space on my webhost, and I’m not quite sure what the impact will be just yet.
I also gain the distributed Google infrastructure for image hosting, which helps I think as images are by far the biggest files to serve for these pages.</p>
<p>I also get on-the-fly image resizing when hosting the images on Google, which is handy while I build things out.</p>
<p>One of the downsides is that the on-the-fly resizing doesn’t produce progressive jpegs, which I thought might help with rendering speeds of large pages (images loading progressively at least show that something is there…).</p>
<h4 id="wider-images"><a href="#wider-images" class="header-link-alt">Wider Images</a></h4>
<p>I think I mentioned it in the previous post <a href="https://pixls.us/blog/2014/09/the-big-picture/"><em>The Big Picture</em></a> that I had done the styling to get images to span the entire width of the page.
In that same post I also demonstrated a means for making embedded videos bigger as well.
It turned out that the same styling worked great for images as well.</p>
<p>Here is the lede image wrapped in a <code>&lt;figure&gt;</code> tag:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='640' height='401' />
<figcaption>
A caption to the image in a <code>&lt;figcaption&gt;</code> tag.
</figcaption>
</figure>

<p>I can re-use the styling for the larger video to automatically make the image much larger and centered on the page:</p>
<figure class='big-vid'>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='960' height='602' />
<figcaption>
Using class <code>big-vid</code> on the figure.
</figcaption>
</figure>

<p>And, of course, wrapping the <code>&lt;figure&gt;</code> in a <code>&lt;!-- FULL-WIDTH --&gt;</code> tag yields:</p>
<!-- FULL-WIDTH -->
<figure class='full-width'>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='960' height='602' />
<figcaption>
Wrapping <code>&lt;figure&gt;</code> with a <code>&lt;!-- FULL-WIDTH --&gt;</code> tag <strong>and</strong> setting the class to <code>full-width</code>.
</figcaption>
</figure>
<!-- /FULL-WIDTH -->

<p>This is a <em>photography</em> site, right?!</p>
<h4 id="comparing-images"><a href="#comparing-images" class="header-link-alt">Comparing Images</a></h4>
<p>I still don’t have a great solution for image comparison.
The problem is that ideally I could have an image that shows some results with an easy way to toggle back to a comparison image (before/after, for instance).
The current way I am doing it is to toggle the image when it’s clicked on.
If you hover over an image, and the cursor changes to a crosshair, click on it to compare.</p>
<p>I’m borrowing this from the B&amp;W article I was just working on:</p>

<figure>
<img src="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/rgb-mix-luminosity.png" alt="RGB Luminosity Mix" data-swap-src="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click on the image to compare to original.
</figcaption>
</figure>

<p>This works across mobile as well, but I can’t help but feel it is a bit inelegant.
It is also dependent on javascript, and I don’t know if there is a simple way around this.
At least for now, without javascript turned on, everything else still works except toggling to the comparison version.</p>
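<p>For the curious, the toggle behavior I’m describing can be sketched in a few lines of javascript – this is just an illustration of the idea, not the site’s actual code:</p>

```javascript
// Click-to-compare sketch: each click exchanges the image's src
// with the URL stored in its data-swap-src attribute.
function toggleSwap(img) {
  const current = img.src;
  img.src = img.dataset.swapSrc;
  img.dataset.swapSrc = current;
  return img;
}

// Browser wiring (hypothetical):
// document.querySelectorAll('img[data-swap-src]').forEach(img =>
//   img.addEventListener('click', () => toggleSwap(img)));
```

<p>Clicking a second time swaps back, so the same handler gives a round-trip comparison.</p>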
<h3 id="before-launch"><a href="#before-launch" class="header-link-alt">Before Launch</a></h3>
<p>I’d like to have at least a few good articles ready to go at launch time.
As I said, I’m almost finished with the B&amp;W conversion article, but the question is what to migrate next?</p>
<p>I’m thinking that one of the <em>Open-Source Portrait</em> posts would make a nice article to launch with as well,
or perhaps an update/re-write of using Wavelet Decompose for skin retouching?
If anyone has a preference or suggestion, I’m all ears!</p>
<p>I’m also going to publish an interview with a F/OSS photographer whose work I admire.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Digital B&W Conversion (GIMP)]]></title>
            <link>https://pixls.us/articles/digital-b-w-conversion-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/articles/digital-b-w-conversion-gimp/</guid>
            <pubDate>Tue, 16 Sep 2014 18:36:26 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/Into-the-Fog.jpg" /><br/>
                <h1>Digital B&W Conversion (GIMP)</h1> 
                <h2>Methods for converting to B&W</h2>  
                <p>Black and White photography is a big topic that deserves entire books devoted to the subject.
In this article we are going to explore some of the most common methods for converting a color digital image into monochrome in <a href="http://www.gimp.org" title="GIMP Homepage">GIMP</a>.</p>
<h2 id="what-we-are-trying-to-achieve">What We are Trying to Achieve<a href="#what-we-are-trying-to-achieve" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are a few things you should focus on in regards to preparing your images for a B&amp;W conversion.
You want to keep in mind that by removing color information you are effectively left with only tonal data (and composition) to convey your intentions.</p>
<figure class="big-vid">
<img src="https://2.bp.blogspot.com/-tTnj2ELdHSM/UKLIXA41skI/AAAAAAAADaw/aAqUIgVKLj8/w960-no/AnselAdamstrees%255B1%255D.jpg" width="960" height="653" alt="Aspens by Ansel Adams" />
<figcaption>
Aspens (no title), <a href="http://www.anseladams.com/">Ansel Adams</a><br/>
&copy;The Ansel Adams Publishing Rights Trust
</figcaption>
</figure>

<p>This can be both liberating and confining.</p>
<p>By liberating yourself of color data the focus is entirely on the subjects and composition
(this is often one of the primary reasons street photography is associated with B&amp;W).
Conversely, the subjects and composition need to be much stronger to carry the result.</p>
<figure>
<img src="https://lh4.googleusercontent.com/-zsW7nufLVLs/UJ1HPOg0vmI/AAAAAAAARS8/a3aOaDg0d38/w640-h811-no/9845_98f0%5B1%5D.jpeg" width="640" height="811" alt="Edward Weston, Pepper #30"/>
<figcaption>
Without color, the form and tones are all that’s left.<br/>
&copy;<a href="http://www.edward-weston.com/edward_weston_natural_1.htm">Edward Weston, Pepper #30</a>
</figcaption>
</figure>

<p class="aside">
As an interesting side note, Edward Weston’s Pepper #30 is the image that began my personal interest in B&amp;W photography.
</p>

<h3 id="tonality">Tonality<a href="#tonality" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Tonality, as I use the term, refers to the presence of, and relationship between, different values of gray in the image.<br/>This can be subtle, with smooth, even differences between values, or much more pronounced.</p>
<p>When referred to as the singular <em>“tone”</em>, it is usually referring to a single value of gray in the image.</p>
<h3 id="contrast">Contrast<a href="#contrast" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Contrast is the relative difference in tones between parts of an image.
High contrast will have a sharper differentiation between tones, while low contrast will have less differences.
Often, a straight conversion to grayscale can result in values that are all similar, yielding a tonally “flat” image.</p>
<p>Contrast is often considered in terms of the entire image <em>globally</em>, or in smaller sections <em>locally</em>.</p>
<h3 id="dynamic-range">Dynamic Range<a href="#dynamic-range" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Dynamic range is the overall range of values in your image from the darkest to the brightest.</p>
<h3 id="the-approach">The Approach<a href="#the-approach" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The approach we will take here is similar to what I had done in my film days.
We’ll attempt to use different methods of grayscale conversion (and possibly blending them) to get to a working image that is as full of tonal detail as possible.
Petteri Sulonen refers to this as his <em>“digital negative”</em> – if you want a great look at a digital B&amp;W workflow head over and read <a href="http://www.prime-junta.net/pont/How_to/n_Digital_BW/a_Digital_Black_and_White.html">his article</a>.</p>
<p>Then, with an image containing as much tonal detail as possible, we will modify it with adjustments of various types to produce a final result that is visually pleasing.</p>
<p>Before heading down that path, it may help to have a closer look at the tools being used.
Let’s have a look at how an image gets displayed on your monitor first.</p>
<h2 id="your-pixels-and-you">Your Pixels and You<a href="#your-pixels-and-you" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You are working in an RGB world when you stare at your monitors.
Every single pixel is composed of 3 sub-pixels of Red, Green, and Blue.</p>
<figure>
<img src="https://4.bp.blogspot.com/-PQgiDUW-cro/UJrrXrq9HWI/AAAAAAAADPE/j_3YszlVeHU/s300/300px-TN_display_closeup_300X%255B1%255D.jpg" width="300" height="240" alt="TN LCD Display 300X close up"/>
<figcaption>
300X magnification of an LCD panel.<br/>
(Image from <a href="http://en.wikipedia.org/wiki/File:TN_display_closeup_300X.jpg">wikipedia</a>)
</figcaption>
</figure>

<p>The variations in brightness of each of the sub-pixels will “mix” to produce the colors you finally see.
The scales available in an 8-bit display are discrete levels from 0–255 for each color (2<sup>8</sup> = 256).
So if all of the sub-pixel values are 0, the resulting color is black.
If they are all 255, you’ll see white.
Any other combination will produce some variation of a color.</p>
<p class="color-ex" style="background-color: rgb(80,205,255);">
80, 205, 255 for instance
</p>
<p class="color-ex" style="background-color: rgb(255,172,80);">
or 255, 172, 80
</p>

<p class="aside">
<span>But what about 16-bit images?</span>
Well - the data is still in the image file to correctly describe the colors at 16bit/channel, but most likely what you’ll be seeing on your monitor is an interpolation of the values to an 8-bit/channel colorspace.
You should <em>always</em> work in the highest bit depth color that you can, and leave any conversions to 8-bit for when you are saving your work to be viewed on a monitor.
</p>

<p>The important point to take away from this is that when all three color channels have the same value, you get a gray color.
So a middle gray value of 127, 127, 127 would look like this:</p>
<p class="color-ex" style="background-color: rgb(127,127,127); color: #222;">
127, 127, 127
</p>
<p class="color-ex" style="background-color: rgb(220,220,220);">
While this is a little brighter: 220, 220, 220
</p>

<p>Very quickly you should realize that a true monochromatic grayscale image can display up to 256 discrete shades of gray going from 0 (pure black) to 255 (pure white),
while for 16-bit images, 2<sup>16</sup> will yield 65,536 different shades.
It is this limitation for purely gray 8-bit images that introduces artifacts over smooth gradations (<a href="http://en.wikipedia.org/wiki/Posterization">posterization</a> or banding) – and is a good reason to keep your bit depths as high as possible.</p>
<h2 id="getting-to-grey">Getting to Grey<a href="#getting-to-grey" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are many different paths to get to a grayscale image and almost none of them are equal.
They will all produce different images based on their method of conversion, 
and it will be up to you to decide which ones (or portions of) to keep and build upon to create your final result.</p>
<figure class="big-vid"> 
<img src="https://lh3.googleusercontent.com/-0BRTT_4u_A0/VBj3kqE8rJI/AAAAAAAARcw/WBSevvGSCqw/w960-h587-no/Conversation%2Bin%2BHayleys.jpg" width="960" height="587" alt="Conversation in Hayleys by Pat David" />
<figcaption>
A combination of luminosity desaturation and GEGL C2G<br/>
<em>Conversation in Hayleys</em> by Pat David (<a href="http://creativecommons.org/licenses/by-sa/4.0/" class="cc">cba</a>)
</figcaption>
</figure>

<p>For this tutorial we are going to try and cover as many different methods as possible.
This means we’ll be having a look at:</p>
<ul>
<li>Desaturate Command (Lightness, Luminosity, Average)</li>
<li>Channel Mixer</li>
<li>Decompose (RGB, LAB)</li>
<li>Pseudogrey</li>
<li>Layer Blending Modes</li>
<li>Film Emulation Presets</li>
<li>Combining these methods</li>
</ul>
<p>One of these methods may work fine for you.
Or, if you’re like me, it will most likely be a combination of one or more of these methods blended through a combination of layer masking and opacity adjustments.</p>
<h2 id="desaturate-gimp-">Desaturate (GIMP)<a href="#desaturate-gimp-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Perhaps the easiest and most straightforward path to a grayscale image is using the <code>Desaturate</code> command.
It can be invoked from the <a href="http://www.gimp.org" title="GIMP Homepage">GIMP</a> menu:</p>
<p><span class="Cmd">Colors &rarr; Desaturate…</span></p>
<p>There are three options available from this menu:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/GIMP desaturate dialog.png" alt="GIMP Desaturate Dialog" width="372" height="230" />
</figure>

<p>Each of these options (Lightness, Luminosity, Average) will generate a grayscale image for you,
but the difference lies in the <em>way</em> they interpret the image colors into values of gray.</p>
<p>To illustrate the differences, consider the following two figures.
One is a gradient of red, green and blue from black to full saturation.
The other are overlapping circles of color in an additive mix.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-base.png" alt="RGB Base Gradient Image" width="500" height="256" />
<figcaption>
Base RGB gradient of pure colors
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-base.png" alt="RGB Base Mix Image" width="500" height="500" />
<figcaption>
Base RGB (additive color) mix
</figcaption>
</figure>

<p>Let’s investigate each of the desaturation options on these test images.</p>
<h3 id="lightness">Lightness<a href="#lightness" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The Lightness method will add the largest value of red, green <em>or</em> blue and the smallest value, then divide the result by 2.</p>
<p class="Cmd aside">
&frac12; &times; ( MAX(R,G,B) + MIN(R,G,B) )
</p>

<p>So, for instance, with an RGB value of 100, 20, 210, the equation would be:</p>
<p class="Cmd aside">
&frac12; &times; ( <strong>210</strong> + <strong>20</strong> ) = 115
</p>
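<p>As a rough sketch in plain javascript (the function name here is mine, not GIMP’s), the Lightness calculation looks like this:</p>

```javascript
// Lightness: mean of the largest and smallest channel values.
// Note that the middle channel never affects the result.
function lightness(r, g, b) {
  return Math.round((Math.max(r, g, b) + Math.min(r, g, b)) / 2);
}

lightness(100, 20, 210); // (210 + 20) / 2 = 115
```
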

<p>Using the Lightness function on our test images yields the following results:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-lightness.png" alt="RGB Desaturate Lightness" width="500" height="256" />
<figcaption>
Lightness conversion yields similar values regardless of color
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-lightness.png" alt="RGB Lightness Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click to compare to original
</figcaption>
</figure>

<p>This means that one channel is actually ignored in creating the final value.</p>
<h3 id="average">Average<a href="#average" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Average will use the numerical average of the RGB values in each pixel.</p>
<p class="Cmd aside">
&frac13; &times; ( R + G + B )
</p>
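<p>Sketched the same way (function name hypothetical), Average is just the arithmetic mean, so the 100, 20, 210 pixel from before lands at 110:</p>

```javascript
// Average: arithmetic mean of the three channels.
function average(r, g, b) {
  return Math.round((r + g + b) / 3);
}

average(100, 20, 210); // (100 + 20 + 210) / 3 = 110
```
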

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-average.png" alt="RGB Desaturate Average" width="500" height="256" />
<figcaption>
Averaging, the values will trend darker overall
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-average.png" alt="RGB Average Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click to compare to original
</figcaption>
</figure>



<h3 id="luminosity">Luminosity<a href="#luminosity" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><em>Lightness</em> and <em>Average</em> both evaluate the final value of gray as a purely numerical function without regard to the actual color components.
<em>Luminosity</em> on the other hand, utilizes the fact that our eyes will perceive green as lighter than red, and both lighter than blue (<a href="http://en.wikipedia.org/wiki/Luminance_(relative)">relative luminance</a>).
This is also why your camera sensor <em>usually</em> has <a href="http://en.wikipedia.org/wiki/Bayer_filter">twice as many green detectors as red and blue</a>.</p>
<p>The weighted function describing relative luminance is:</p>
<p class="Cmd aside">
(0.2126 &times; R) + (0.7152 &times; G) + (0.0722 &times; B)
</p>
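<p>A quick sketch of that weighted sum (again, the function name is mine): the same 100, 20, 210 pixel comes out noticeably darker than Lightness gave, because its strongest channel is blue, which carries very little perceptual weight:</p>

```javascript
// Relative luminance: perceptual weights, heavily favoring green.
function luminosity(r, g, b) {
  return Math.round(0.2126 * r + 0.7152 * g + 0.0722 * b);
}

luminosity(100, 20, 210); // 21.26 + 14.304 + 15.162 ≈ 51
```
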

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-luminosity.png" alt="RGB Desaturate Luminosity" width="500" height="256" />
<figcaption>
This is closer to how our eyes will actually perceive the brightness of each color
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-luminosity.png" alt="RGB Luminosity Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Notice the overwhelming contribution from green<br/>
Click to compare to original
</figcaption>
</figure>

<p>No one of these methods is necessarily any better than the other objectively for your own conversions.
It really depends on the desired results.
However, if you are in doubt about which one to use, <em>Luminosity</em> may be the better option of the three to <a href="http://en.wikipedia.org/wiki/Luminosity_function">more closely emulate</a> the brightness levels you will perceive.</p>
<h3 id="examples">Examples<a href="#examples" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The image below, <a href="http://www.flickr.com/photos/patdavid/3808678255">Joseph N. Langan Park</a>, is an interesting example of just how much green influences the conversion result using luminosity. Click through each of the different conversion types, and pay careful attention to what <strong>Luminosity</strong> does with the green bushes along the water’s edge.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/langan.jpg" alt="Langan Park by Pat David" width="640" height="414" />
<figcaption>
Click to compare:<br/><span class="toggle-swap" data-fig-swap="langan.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="langan-lightness.jpg">Lightness</span>
<span class="toggle-swap" data-fig-swap="langan-average.jpg">Average</span>
<span class="toggle-swap" data-fig-swap="langan-luminosity.jpg">Luminosity</span>
</figcaption>
</figure>

<p>This shot of <a href="http://www.flickr.com/photos/patdavid/6231554301/">Whitney</a> shows the effect on skin tones, as well as the change in her shirt color due to the heavy reds present.
In just a <strong>Lightness</strong> conversion, the red shirt becomes relatively flat compared to her skin tones,
but becomes darker and more pronounced using <strong>Luminosity</strong>.
Her lips get a bit of a boost in tone in the <strong>Luminosity</strong> conversion as well.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/whitney.jpg" alt="Whitney by Pat David" width="640" height="640" />
<figcaption>
Click to compare:
<span class="toggle-swap" data-fig-swap="whitney.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="whitney-lightness.jpg">Lightness</span>
<span class="toggle-swap" data-fig-swap="whitney-average.jpg">Average</span>
<span class="toggle-swap" data-fig-swap="whitney-luminosity.jpg">Luminosity</span>
</figcaption>
</figure>




<h2 id="channel-mixer">Channel Mixer<a href="#channel-mixer" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Using <strong>Desaturate</strong> lets you convert to grayscale based on pre-defined functions for calculating the final value,
but what if you wanted even further control?
What if you wanted to decide just how much the red channel should influence the final gray value,
or to have more control over the ratios and weightings from each of the different channels independently?
That’s precisely what the <strong>Channel Mixer</strong> will allow you to do.</p>
<p>For the examples below I’ll use a different test map: an HSV hue gradient running horizontally from blue around the hue circle and back to blue, with a gradient to black vertically.
This represents the entire 8-bit colorspace.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-hsv.png" alt="RGB HSV Gradient" width="550" height="256" />
<figcaption>
Gradient representing all the colors/shades in 8-bit sRGB colorspace.<br/>
Click to compare:
<span class="toggle-swap" data-fig-swap="rgb-hsv.png">Original</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-lightness.png">Lightness</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-average.png">Average</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-luminosity.png">Luminosity</span>
</figcaption>
</figure>

<p>Take a quick moment to click through the various desaturation methods already mentioned.</p>
<p>The <strong>Channel Mixer</strong> can be invoked through:</p>
<div class="Cmd">Colors &rarr; Components &rarr; Channel Mixer…</div>

<p>The dialog will look like this with the test gradient:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer.png" alt="GIMP Channel Mixer Dialog" width="326" height="464" />
</figure>

<p>The <strong>Channel Mixer</strong> can be used to modify these channels on a full color image, but we are focusing on grayscale conversion right now.
So check the box for <em>Monochrome</em>, which will disable the <em>Output channel</em> option in the dialog (it’s no longer applicable).
This will turn your preview into a grayscale image.</p>
<h3 id="warning-math-ahead">Warning: Math Ahead<a href="#warning-math-ahead" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you checked the <em>Monochrome</em> option, and left the Red slider at 100, then you’d be seeing a representation of your image with no Green or Blue contribution (ie: you would basically be seeing the Red channel of your image):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-red.png" alt="GIMP Channel Mixer monochrome full red" width="326" height="464" />
<figcaption>
Basically just the red channel
</figcaption>
</figure>

<p>What this means is that with Green and Blue set to 0, the values of the Red are directly mapped to the output value for the grayscale image.
If you were looking at a pixel with RGB components of 200, 150, 100, then the <em>Value</em> for the pixel in this instance would become 200, 200, 200.</p>
<p>It’s also important to note that the sliders represent a <em>percent contribution to the final value</em>.</p>
<p>That is, if you set the Red and Green channels to 50(%), you would see something like this:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-red50-green50.png" alt="GIMP Channel mixer monochrome 50% red and green" width="326" height="464" />
</figure>

<p>In this case, Red and Green would contribute 50% of their values (with nothing from Blue) to the final pixel gray value.
Considering the same pixel example from above, where the RGB components are 200, 150, 100, we would get:</p>
<p class="Cmd aside">
( 200 &times; 0.5 ) + ( 150 &times; 0.5 ) + ( 100 &times; 0 )<br/>
( 100 ) + ( 75 ) + ( 0 ) = <strong>175</strong>
</p>

<p>So the final grayscale pixel value would be: 175, 175, 175.</p>
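<p>The same percent-contribution arithmetic, sketched in javascript (names are mine; the clipping described below is shown for the 8-bit case):</p>

```javascript
// Monochrome channel mix: each slider is a fractional contribution.
// Anything over 255 clips in an 8-bit image.
function mixMono(r, g, b, wr, wg, wb) {
  const v = r * wr + g * wg + b * wb;
  return Math.min(255, Math.max(0, Math.round(v)));
}

mixMono(200, 150, 100, 0.5, 0.5, 0); // 100 + 75 + 0 = 175
```
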
<h3 id="preserve-luminosity">Preserve Luminosity<a href="#preserve-luminosity" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/eleven.jpg" alt="Spinal Tap up to eleven" width="623" height="336" />
<figcaption>
<em>“These go up to 11”</em> – <a href="http://en.wikipedia.org/wiki/Up_to_eleven">Nigel Tufnel</a>
</figcaption>
</figure>

<p>The astute will notice that the sliders actually have a range from -200 to 200.
So you may be asking – what happens if two channels contribute more than what is possible to show?</p>
<p>Using the pixel example again, what if both the Red and Green channels were set to contribute 100%?</p>
<p class="Cmd aside">
( 200 &times; 1.00 ) + ( 150 &times; 1.00 ) + ( 100 &times; 0 ) = <strong>350</strong>
</p>

<p>While the <strong>Channel Mixer</strong> will allow us to set these values, we can’t very well set the grayscale pixel value to be 350 (in an 8-bit image).
So anything above 255 will simply end up being clipped to 255 (effectively throwing away any tones above 255, bad!).</p>
<p>This means that you have to be careful that the three channel contributions don’t exceed 100% in total.
50% Red, 50% Green is OK – but 50% Red, 50% Green, <em>and</em> 50% Blue (150% total) will clip your data.</p>
<p>This is where the <em>Preserve Luminosity</em> option comes into play.
This option will scale your final values so the effective result will always add up to 100%.
The scale factor from the above example would be calculated as:</p>
<p class="Cmd aside">
<sup>1</sup>&frasl;<sub>( 1.00 + 1.00 + 0 )</sub> = <strong>0.5</strong>
</p>

<p>So the value of <strong>350</strong> would be scaled by 0.5, giving the actual final value as 175.
If <em>Preserve Luminosity</em> is active, all the values would be scaled by this amount.</p>
<p>This is not to say that <em>Preserve Luminosity</em> is always needed, just stay aware of the possible effects if you don’t use it.</p>
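<p>A sketch of what <em>Preserve Luminosity</em> does to the mix (function name hypothetical): the weights are rescaled so they effectively sum to 100%, which assumes the slider values don’t sum to zero:</p>

```javascript
// Preserve Luminosity: rescale so the contributions sum to 1.
// (Degenerate if the weights sum to zero.)
function mixPreserve(r, g, b, wr, wg, wb) {
  const scale = 1 / (wr + wg + wb);
  return Math.round((r * wr + g * wg + b * wb) * scale);
}

mixPreserve(200, 150, 100, 1.0, 1.0, 0); // 350 × 0.5 = 175
```

<p>Note that weights already summing to 100% (like 50% Red, 50% Green) are unchanged by the rescaling.</p>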
<h4 id="speaking-of-luminosity">Speaking of Luminosity<a href="#speaking-of-luminosity" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Previously we talked about the function used for desaturating according to <em>relative luminance</em>.
If you’ll recall, the formula was:</p>
<p class="Cmd aside">
( 0.2126 &times; R ) + ( 0.7152 &times; G ) + ( 0.0722 &times; B )
</p>

<p>If you wanted to replicate the same results that <code>Desaturate → Luminosity</code> produces, you can just set the RGB sliders to the same values from that function (21.3, 71.5, 7.2):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-lum.png" alt="GIMP Channel mixer luminosity values" width="342" height="475" />
<figcaption>
Replicating the luminosity function
</figcaption>
</figure>

<p>If you’re just getting started with the <strong>Channel Mixer</strong>, this makes a pretty nice starting point to begin experimenting.</p>
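<p>Setting the sliders to those weights is the same as computing the relative luminance sum per pixel; a quick Python sketch:</p>

```python
def luminosity(r, g, b):
    # Rec. 709 relative luminance weights -- the same values you would
    # enter into the Channel Mixer sliders (21.3, 71.5, 7.2)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A mid-toned orange pixel desaturates to a gray of about 131:
print(round(luminosity(200, 120, 40)))  # 131
```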
<h3 id="experimenting">Experimenting<a href="#experimenting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A pretty landscape image by <a href="http://www.flickr.com">Flickr</a> user <a href="http://www.flickr.com/people/cyndicalhounfineart/">Cyndi Calhoun</a> serves as a nice test image for experimentation:</p>
<figure class="big-vid">
<img src="https://4.bp.blogspot.com/-iztPHXO-ZWA/UKvzRNgGFwI/AAAAAAAADmY/W0PY_3a_yVk/w960/cyndicalhounfineart-color.jpg" alt="Garden of the Gods by Cyndi Calhoun" width="960" height="638" />
<figcaption>
<a href="http://www.flickr.com/photos/cyndicalhounfineart/7990432224">Garden of the Gods - Looking North</a><br/>
by Cyndi Calhoun (<a href="https://creativecommons.org/licenses/by/2.0/" class="cc">cb</a>)
</figcaption>
</figure>

<p>You’ll want to keep in mind the primary RGB influences in different portions of your image as you approach your adjustments.
For instance, this image (not coincidentally) happens to have strong Red features (the rocks), Blue features (the sky), and Green features (the trees).</p>
<p>Keep an eye on the individual channels so they don’t get so bright that you lose detail (blowouts),
or so dark that you crush the shadows too much.
Remember, you want to keep as much tonal detail as possible!</p>
<p>So, using the luminosity function as a starting point…</p>
<figure class="big-vid">
<img src="https://3.bp.blogspot.com/-Kj-evm3wR2M/UKv1m2KKyiI/AAAAAAAADmo/GBPMHkYmSCg/w960/cyndicalhounfineart-CM-luminosity.jpg" alt="Garden of the Gods by Cyndi Calhoun Luminosity" width="960" height="638" />
<figcaption>
Straight conversion using the luminosity function
</figcaption>
</figure>

<p>It’s not a bad start at all, but the prominence of the red rocks in the sunlight has been dulled quite a bit.
It’s a central feature of the image and should really draw the eye towards it.
So the reds could be more pronounced to make the stone pop a little more.</p>
<p>With the <em>Preserve Luminosity</em> option checked, begin bumping the Red channel to taste.</p>
<figure class="big-vid">
<img src="https://4.bp.blogspot.com/-3AI-cCgBKhI/UKv2-uSUobI/AAAAAAAADm0/dcoCibmuKfo/w960/cyndicalhounfineart-CM-red-66.1.jpg" alt="Garden of the Gods by Cyndi Calhoun Red Channel" width="960" height="638" data-swap-src="https://3.bp.blogspot.com/-Kj-evm3wR2M/UKv1m2KKyiI/AAAAAAAADmo/GBPMHkYmSCg/w960/cyndicalhounfineart-CM-luminosity.jpg" />
<figcaption>
Red channel bumped up to 66.1<br/>
(Click image to compare to base luminosity conversion)
</figcaption>
</figure>

<p>This gives a little more prominence to the red stone.</p>
<p>The Green channel seems ok, but for comparison try lowering it to about half of the Red channel value.
Remember – <em>Preserve Luminosity</em> is checked, so the final values will scale to give Red twice the weight of Green.</p>
<figure class="big-vid">
<img src="https://3.bp.blogspot.com/-8axlWaZdtWU/UKv6IAJd24I/AAAAAAAADno/mQa0_SVqNbw/w960/cyndicalhounfineart-CM-green-33.jpg" alt="Garden of the Gods by Cyndi Calhoun Green Channel" width="960" height="638" data-swap-src="https://4.bp.blogspot.com/-3AI-cCgBKhI/UKv2-uSUobI/AAAAAAAADm0/dcoCibmuKfo/w960/cyndicalhounfineart-CM-red-66.1.jpg" />
<figcaption>
Green channel at ~half of Red.<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>This brings up the shadow side of the central rocks a bit and adds some definition to the trees and vegetation.
Interestingly, it also gives an apparent boost to the red rocks.</p>
<p>If you’re wondering why the red rocks got brighter as well, consider the math.
Previously Red and Green were very near each other in value (around 70), so both colors had approximately equal weight.
When Green got its influence cut in half, Red scaled up to take a much larger influence, and because there was more red than green, the final value ends up higher.</p>
<p>If we look at the RGB values of the red rocks, they are roughly Red 226, Green 127 (ignoring Blue for the moment because for this example it’s staying constant).</p>
<p>If both Red and Green have equal influence, the final pixel value will be:</p>
<p class="Cmd aside">
( 226 &times; 0.5 ) + ( 127 &times; 0.5 ) = <strong>176.5</strong>
</p>

<p>Now if Green is only half as strong as Red, the value will be:</p>
<p class="Cmd aside">
<sup>( 226 &times; 0.5 ) + ( 127 &times; 0.25 )</sup>&frasl;<sub>( 0.5 + 0.25 )</sub> = <strong>193</strong>
</p>

<p>The result was divided by the influence amount to scale the way <em>Preserve Luminosity</em> would.
The final pixel value will become brighter in this case, which is why the red rocks got brighter with a decrease in the Green channel.</p>
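<p>The same arithmetic, written out as a tiny Python helper (hypothetical, for illustration):</p>

```python
def mix(values, weights):
    """Preserve Luminosity-style weighted average of channel values."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Red and Green with equal influence:
print(mix((226, 127), (0.5, 0.5)))   # 176.5
# Green's influence cut to half of Red's -- the result gets brighter:
print(mix((226, 127), (0.5, 0.25)))  # 193.0
```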
<p>It should go without saying that the Blue channel will have a heavy influence on the sky (and many areas of the image in shadow).
To add a little drama to the sky, try removing the Blue channel influence by setting it to 0:</p>
<figure class="big-vid">
<img src="https://2.bp.blogspot.com/-uhP5KF3NkRM/UKwBGnx9iAI/AAAAAAAADoc/weZEupnGgdU/w960/cyndicalhounfineart-CM-blue-0.jpg" alt="Garden of the Gods by Cyndi Calhoun Blue Channel" width="960" height="638" data-swap-src="https://3.bp.blogspot.com/-8axlWaZdtWU/UKv6IAJd24I/AAAAAAAADno/mQa0_SVqNbw/w960/cyndicalhounfineart-CM-green-33.jpg" />
<figcaption>
Blue channel set to 0<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>This will darken the sky up a bit (as well as some shadow areas).</p>
<p>Pay careful attention to what these changes do to the image in closer views.
In this case there is a higher amount of banding and noise in the smooth sky if values get pushed too far.
So try to approach it with a light hand.</p>
<p>The sliders also allow negative values.
This will seriously crush the channel results when applied (and will quickly lead to funky results if you’re not careful).
For example, to push the Blue channel even darker in the final result, try setting the Blue channel to -20:</p>
<figure class="big-vid">
<img src="https://1.bp.blogspot.com/-GmHZJXuUdkk/UKwDYHmOS1I/AAAAAAAADoo/pfsm-bDmW9c/w960/cyndicalhounfineart-CM-blue--20.jpg" alt="Garden of the Gods by Cyndi Calhoun Blue Channel at -20" width="960" height="638" data-swap-src="https://2.bp.blogspot.com/-uhP5KF3NkRM/UKwBGnx9iAI/AAAAAAAADoc/weZEupnGgdU/w960/cyndicalhounfineart-CM-blue-0.jpg" />
<figcaption>
Red: 66.1, Green: 33, Blue: -20<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>The sky has become much darker, as has the shadow side of the rocks.
There is an overall increase in contrast as well, but at the expense of nasty noise and banding artifacts in the sky.</p>
<p class="aside">
<span>General Rules of Thumb</span>
The Red channel is well suited for contrast (particularly in the brighter tones).
<br/>
The Green channel will hold most of the details.
<br/>
The Blue channel contains grain and (often) a lot of noise.
<br/><br/>
In skin, the Red channel is very flattering to the final result and you’ll often get good results by emphasizing the Red channel in portraits.
</p>



<h3 id="on-skin">On Skin<a href="#on-skin" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The Red channel can be very flattering on skin and is a great tool to keep in mind when working on portraits.
For instance, below is the color image of Whitney from earlier:</p>
<figure>
<img src="https://lh4.googleusercontent.com/-svJdyAqz1H0/UKFbh4bX-4I/AAAAAAAADXs/Klo2tFX_Oac/w960/whitney-color.png" alt="Whitney in color by Pat David" width="640" height="640" />
<figcaption>
Whitney in color
</figcaption>
</figure>

<p>The straight <em>Luminosity</em> conversion is below.
Click on the image to compare it to a version where the Red channel is set equal to the Green channel (giving a greater emphasis on the Reds):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/whitney-luminosity.jpg" alt="Whitney Luminosity by Pat David" width="640" height="640" data-swap-src="whitney-bw-equal-RG.jpg"/>
<figcaption>
Whitney in Luminosity<br/>
(Click to compare Red channel = Green channel)
</figcaption>
</figure>



<h3 id="bw-film-simulation">B&amp;W Film Simulation<a href="#bw-film-simulation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Due to the popularity of the <strong>Channel Mixer</strong> as a straightforward means of conversion with fine control over each of the RGB channel contributions, many people have used it as a basis for building profiles that they felt closely emulated the tonal response of classic black and white films.</p>
<p>Borrowing the table from <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html#N104E4">Petteri Sulonen’s site</a>, these are some common RGB Channel Mixer values to emulate some B&amp;W films.
These aren’t exact, of course, but some people may find them useful.
Particularly as a starting-off point for further modifications.</p>
<table>
<thead>
<tr>
<th>Film</th>
<th>R, G, B</th>
</tr>
</thead>
<tbody>
<tr>
<td>Agfa 200X</td>
<td>18, 41, 41</td>
</tr>
<tr>
<td>Agfapan 25</td>
<td>25, 39, 36</td>
</tr>
<tr>
<td>Agfapan 100</td>
<td>21, 40, 39</td>
</tr>
<tr>
<td>Agfapan 400</td>
<td>20, 41, 39</td>
</tr>
<tr>
<td>Ilford Delta 100</td>
<td>21, 42, 37</td>
</tr>
<tr>
<td>Ilford Delta 400</td>
<td>22, 42, 36</td>
</tr>
<tr>
<td>Ilford Delta 400 Pro &amp; 3200</td>
<td>31, 36, 33</td>
</tr>
<tr>
<td>Ilford FP4</td>
<td>28, 41, 31</td>
</tr>
<tr>
<td>Ilford HP5</td>
<td>23, 37, 40</td>
</tr>
<tr>
<td>Ilford Pan F</td>
<td>33, 36, 31</td>
</tr>
<tr>
<td>Ilford SFX</td>
<td>36, 31, 33</td>
</tr>
<tr>
<td>Ilford XP2 Super</td>
<td>21, 42, 37</td>
</tr>
<tr>
<td>Kodak Tmax 100</td>
<td>24, 37, 39</td>
</tr>
<tr>
<td>Kodak Tmax 400</td>
<td>27, 36, 37</td>
</tr>
<tr>
<td>Kodak Tri-X</td>
<td>25, 35, 40</td>
</tr>
</tbody>
</table>
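<p>Applying one of these film profiles is just the same weighted sum as before; here is an illustrative Python sketch using two rows from the table (the dictionary and helper function are made up for this example, not part of GIMP):</p>

```python
# R, G, B channel-mixer percentages taken from the table above
FILM_WEIGHTS = {
    "Ilford HP5": (23, 37, 40),
    "Kodak Tri-X": (25, 35, 40),
}

def film_gray(r, g, b, film):
    """Grayscale value for one pixel using a film's mixer weights."""
    wr, wg, wb = FILM_WEIGHTS[film]
    return round((r * wr + g * wg + b * wb) / 100)

print(film_gray(180, 140, 90, "Kodak Tri-X"))  # 130
```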
<p>There’s a good reason that <strong>Channel Mixer</strong> is such a popular means for converting an image to grayscale.
It’s flexible and allows for a great level of control over the contributions from each channel.</p>
<p>Unfortunately the only way to preview what is happening is in the tiny dialog window.
Even when zooming in it can sometimes be frustrating to make fine adjustments to the channel contributions.</p>
<h2 id="decomposing-colors">Decomposing Colors<a href="#decomposing-colors" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Another method of converting the image to grayscale is to decompose the image into its constituent channels.
When looking at the <strong>Channel Mixer</strong> previously, setting one of the RGB channels to 100 (and leaving the others at 0) would isolate that specific channel.</p>
<p>If you wanted to isolate each of the RGB channel contributions into its own layer, it would be tedious to do manually.
Luckily, GIMP has a built-in command to automatically <strong>Decompose</strong> the image into different channels:</p>
<p><span class="Cmd">Colors &rarr; Components &rarr; Decompose…</span></p>
<p>Will bring up the <strong>Decompose</strong> dialog box:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/decompose-base.png" alt="GIMP Decompose color dialog" width="297" height="203" />
<figcaption>
The <strong>Decompose</strong> dialog
</figcaption>
</figure>

<p>The options available are which <em>Color model</em> to decompose to, and whether to create a new image with the decomposed channels as layers.
If <em>Decompose to layers</em> is not checked, there will be a new image for each channel separately (chances are that you’ll want to start out leaving this checked).</p>
<p>The most important option is which <em>Color model</em> to decompose to.
Up to now we have mostly been considering RGB, but there are other modes that might be handy as well.
Let’s have a look at some of the most useful decomposition modes.</p>
<p>We will be using this image graciously provided by <a href="https://plus.google.com/u/0/+DimitriosPsychogios/about">Dimitrios Psychogios</a>:</p>
<figure>
<img src="https://lh4.googleusercontent.com/-t-5u50_U9tQ/VCGZmH6RJoI/AAAAAAAAAEk/S39lYLOPONE/w640-no/dmitrios-dice.jpg" alt="Dice by Dmitrios Psychogios" width="640" height="640" /> 
<figcaption>
<em>Dice</em> by <a href="https://plus.google.com/u/0/+DimitriosPsychogios/about">Dimitrios Psychogios</a> (<a class="cc" href="http://creativecommons.org/licenses/by-sa/4.0/" title="CC-BY-SA">cba</a>)
</figcaption>
</figure>



<h3 id="rgb-a-">RGB(A)<a href="#rgb-a-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is the <em>Color model</em> that we’ve been focusing on up to now, and is usually the most helpful in terms of having multiple sources to draw from.
This separates out the Red, Green, and Blue Channels into individual layers for you (and Alpha if your image has it).</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-z8HEEDSbIyU/VCGUtr9NgdI/AAAAAAAAAEI/ZWIyezyJnic/w960-no/GIMP-Decompose-RGB.jpg" alt="Dimitrios Psychogios Dice decompose RGB" width="960" height="320" />
<figcaption>
RGB decomposed.
</figcaption>
</figure>


<h3 id="hsv-hsl">HSV/HSL<a href="#hsv-hsl" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hue, Saturation, and Value/Lightness is another useful decomposition, though usually only the Value or Lightness is useful for B&amp;W conversion.</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-9zlwkT0oEu8/VCGdQAnH88I/AAAAAAAAAE8/aTdDY_WJCXE/w960-no/GIMP-Decompose-HSV.jpg" alt="Dimitrios Psychogios Dice decompose HSV" width="960" height="320" />
<figcaption>
Hue, Saturation, Value (HSV) Channels
</figcaption>
</figure>

<p>The <em>Value</em> in <strong>HSV</strong> is derived according to a simple formula:</p>
<p class="Cmd aside">
Value, V = MAX( R, G, B )
</p>

<p>Which is basically just the largest value of Red, Green, or Blue.</p>
<figure class="big-vid">
<img src="https://lh3.googleusercontent.com/-X12euPvDqW4/VCGe8zG50II/AAAAAAAAAFQ/lcL2v-lDlxA/w960-no/GIMP-Decompose-HSL.jpg" alt="Dimitrios Psychogios Dice decompose HSL" width="960" height="320" />
<figcaption>
Hue, Saturation, Lightness (HSL) Channels
</figcaption>
</figure>

<p>The <em>Lightness</em> in <strong>HSL</strong> is derived from this formula:</p>
<p class="Cmd aside">
Lightness, L = <sup>( MAX( R, G, B ) + MIN( R, G, B ) )</sup>&frasl;<sub>2</sub><br/>
</p>

<p>Where <em>Lightness</em> is simply determined as the average of the largest and smallest component of RGB.</p>
<p>While Hue and Saturation may seem interesting, it should be obvious that the most useful channels for a grayscale conversion here would likely be <em>Value</em> or <em>Lightness</em>.
Overall, <em>Lightness</em> will tend to be a bit darker than <em>Value</em> (the average of the largest and smallest components can never exceed the largest).</p>
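<p>Both formulas are simple enough to state in a couple of lines of Python, which also makes it clear why <em>Lightness</em> can never exceed <em>Value</em> for a given pixel:</p>

```python
def hsv_value(r, g, b):
    # V in HSV: the largest of the three components
    return max(r, g, b)

def hsl_lightness(r, g, b):
    # L in HSL: average of the largest and smallest components
    return (max(r, g, b) + min(r, g, b)) / 2

print(hsv_value(200, 120, 40))      # 200
print(hsl_lightness(200, 120, 40))  # 120.0
```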
<h3 id="lab">LAB<a href="#lab" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There is far too much information concerning the <a href="http://en.wikipedia.org/wiki/Lab_color_space">LAB colorspace</a> to really go into much detail here.  Suffice it to say that the <em>L</em> in <em>LAB</em> is for <strong>Lightness</strong>, while <em>A</em> and <em>B</em> are for color opponents (<strong>A</strong> = Green&hArr;Red, <strong>B</strong> = Blue&hArr;Yellow).</p>
<p class="aside">
Later articles about color toning will show some neat tricks using the LAB colorspace for adjustments.
</p>

<p>The <em>LAB</em> colorspace is based on a perceptual model (similar to the relative luminance previously discussed).
In fact, the <em>Lightness</em> in <em>LAB</em> is calculated using the cube root of the luminance from that function.</p>
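<p>For the curious, the CIE L* function looks roughly like this in Python (a simplified sketch that assumes the relative luminance has already been computed and normalized to [0, 1]):</p>

```python
def lab_lightness(y):
    """CIE L* from relative luminance y in [0, 1].

    Uses the standard CIE piecewise function; below a small threshold
    the cube root is replaced by a linear segment.
    """
    epsilon = (6 / 29) ** 3
    if y > epsilon:
        f = y ** (1 / 3)
    else:
        f = y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

print(round(lab_lightness(0.18), 1))  # middle gray comes out near 49.5
```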
<figure class="big-vid">
<img src="https://lh6.googleusercontent.com/-9GO7aKHOqw8/VCGikj93xwI/AAAAAAAAAFg/4bXt5w2NfwI/w1014-h338-no/GIMP-Decompose-LAB.jpg" alt="Dimitrios Psychogios Dice decompose LAB" width="960" height="320" />
<figcaption>
LAB Channels
</figcaption>
</figure>

<p>As you can see, the only channel of any use for a B&amp;W conversion is really the <strong>Lightness</strong>, <em>L</em> channel.</p>
<h3 id="cmy-k-">CMY(K)<a href="#cmy-k-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Cyan, Magenta, Yellow and (Black, K) are often discussed in terms of printing.
When doing the decomposition in GIMP, you’ll have to invert the results to make them useful.
Once you do, you may notice that they are, in fact, the same as RGB (for CMY decomposition):</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-251PiePdosc/VCGm-RMqdgI/AAAAAAAAAF4/SARBbmx8qqM/w960-no/GIMP-Decompose-CMY.jpg" alt="Dimitrios Psychogios Dice decompose CMY" width="960" height="320" />
<figcaption>
CMY conversion (inverted from direct conversion)
</figcaption>
</figure>

<p>CMYK produces a similar result, but adds another channel to control the level of black in the result.
Inverting the <em>Black</em>, <strong>K</strong> channel yields something usable.</p>
<figure>
<img src="https://lh6.googleusercontent.com/-VtvoazGyhuo/VCGp7IqVWPI/AAAAAAAAAGM/1xPe4DPRM0o/w640-no/GIMP-Decompose-CMYK.jpg" alt="Dimitrios Psychogios Dice decompose CMYK" width="640" height="640" />
<figcaption>
CMYK conversion with the Black, <strong>K</strong> channel inverted
</figcaption>
</figure>
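<p>A naive RGB → CMYK conversion (a sketch of the common textbook formula; GIMP’s decompose may differ in detail) also shows why the inverted K channel is usable: it is simply max(R, G, B), the same as the HSV <em>Value</em>:</p>

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK, channels in [0, 1]."""
    k = 1 - max(r, g, b)
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0  # pure black pixel
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# Inverting K recovers max(R, G, B), a usable brightness map:
c, m, y, k = rgb_to_cmyk(0.8, 0.5, 0.2)
print(round(1 - k, 1))  # 0.8
```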



<h3 id="ycbcr">YCbCr<a href="#ycbcr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Anyone who has done video processing might recognize this colorspace representation, as it often shows up in digital video.
<em>YCbCr</em> is a means for encoding the RGB colorspace with three channels: <em>Luma</em>, <strong>Y</strong>, and two channels of Red (<em>Cr</em>) and Blue (<em>Cb</em>) chroma differences.</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-xTLwdn-hAyc/VCGr1ZaGr8I/AAAAAAAAAGs/qNoCdHxuYBQ/w960-no/GIMP-Decompose-YCbCr.jpg" alt="Dimitrios Psychogios Dice decompose YCbCr" width="960" height="320" />
<figcaption>
YCbCr
</figcaption>
</figure>

<p>Try to use the <em>256</em> variants of the ITU recommendations to allow the decomposition to span the full 256 values available (the non-256 versions use the studio ranges, only allowing luma values to go from 16–235 and chroma from 16–240).</p>
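<p>The two common luma variants differ only in their weights; a quick comparison in Python (full-range values, as the <em>256</em> variants use):</p>

```python
def luma_bt601(r, g, b):
    # Full-range ("256-level") ITU-R BT.601 luma
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_bt709(r, g, b):
    # Full-range ITU-R BT.709 luma (same weights as relative luminance)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(luma_bt601(200, 120, 40)))  # 135
print(round(luma_bt709(200, 120, 40)))  # 131
```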
<h3 id="so-what-s-the-result-">So What’s the Result?<a href="#so-what-s-the-result-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Let’s summarize some of the most useful results from <code>Colors → Components → Decompose</code> for a B&amp;W conversion:</p>
<ul>
<li>RGB - All channels</li>
<li>HSV/HSL - V (Value) and L (Lightness)</li>
<li>LAB - L</li>
<li>CMYK - K</li>
<li>YCbCr - Y (Luma)</li>
</ul>
<p>This gives a total of 9 different types of color mode conversions that may be useful for generating a B&amp;W image.
It helps to visually see all of the options at once to get a better feel for what is going on:</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-nYBQlJWqAI4/VCHaoly4o9I/AAAAAAAAAHI/dI-EDksL5sk/w960-no/GIMP-Decompose-All.jpg" alt="Dimitrios Psychogios Dice decompose All" width="960" height="960" />
<figcaption>
All 9 useful channels from <code>Colors → Components → Decompose</code>
</figcaption>
</figure>

<p>Chances are that one of these conversions might prove useful as a direct B&amp;W conversion.</p>
<p>It helps to notice that the first 4 conversions are all color channels, while the last 5 conversions are brightness values based on different functions for achieving the results (<strong>K</strong>, <strong>V</strong>alue, <strong>L</strong>ightness, <strong>L</strong>, <strong>Y</strong> (luma)).</p>
<h4 id="the-script">The Script<a href="#the-script" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I had previously written some Script-Fu to automate the task of generating these useful channel decompositions (it was tedious choosing each color model manually).</p>
<p>The script will take the active layer in an image, and decompose it to each of the useful color channels listed above, each on its own layer.
Once downloaded and placed into your <strong>Scripts</strong> folder, the command can be found here:</p>
<p><span class="Cmd">Colors &rarr; Color Decompose…</span></p>
<p class="aside">
<span>Downloading the Script</span>
The Script-Fu for <em>Color Decompose</em> can be downloaded here:<br/>
<a href="patdavid-color-decompose_0.3.scm" style="font-size:1rem;">Color Decompose</a><br/>
or downloaded from here: <br/>
<a href="https://github.com/pixlsus/GIMP-Scripts/blob/master/patdavid-color-decompose_0.3.scm" style="font-size:1rem;">Color Decompose on Github</a>
</p>

<h4 id="looking-forward">Looking Forward<a href="#looking-forward" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>It’s likely that <em>some parts</em> of <em>some conversions</em> will be useful in some way.
I am personally rarely satisfied with any of the straight conversion options on their own,
but would like to pick and choose which parts of the image contain the best detail and tones from the different conversion options.
The fun is then combining them in such a way so as to produce a final result that is pleasing.</p>
<h2 id="pseudogrey">Pseudogrey<a href="#pseudogrey" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Pseudogrey (gr<strong><em>e</em></strong>y, not gray, per the original author, <a href="http://r0k.us/rock/index.html">Rich Franzen</a>) is a means for increasing the available levels of <em>perceived</em> gray in an image using a bit-stealing technique.</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-0_HhC6-uT3c/VCQ9aimZaZI/AAAAAAAAAHg/jhI4l2ImxwM/w960/Randi%2Bpseudogrey.jpg" alt="Randi pseudogrey by Pat David" width="960" height="906" />
<figcaption>
<em>Randi</em> in pseudogrey<br/>
by Pat David (<a class="cc" href="https://creativecommons.org/licenses/by-sa/4.0/">cba</a>)
</figcaption>
</figure>

<p>The basic approach in <strong>Pseudogrey</strong> is that you can achieve a much higher number of <em>perceived</em> gray values in an image, if you allow some of the pixels to stray just a tiny bit away from pure gray.  For instance, if a pixel value in a true gray image was: 180, 180, 180, <strong>Pseudogrey</strong> may actually make the pixel value something like 180, 18<strong>1</strong>, 180.</p>
<p>That is, the Green value may be just a bit higher.  The <a href="http://blog.patdavid.net/2012/06/true-pseudogrey-in-gimp.html">full post on Pseudogrey</a> goes into much more detail about the algorithm.</p>
<p>The results from using Pseudogrey will follow the same model as for Luminosity desaturation, but will provide a much larger range of tones (1786 possible shades vs 256 in a truly gray image).</p>
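<p>A sketch of the idea in Python (this illustrates the bit-stealing concept, not Rich Franzen’s exact algorithm): between two true grays there are intermediate perceived levels made by nudging single channels, ordered by their luminance contribution.</p>

```python
def pseudogrey_levels(g):
    """Near-gray steps between true gray g and g + 1, in order of
    increasing perceived luminance (Blue bumps least, Green most)."""
    return [
        (g, g, g),          # true gray
        (g, g, g + 1),      # +Blue: smallest luminance bump
        (g + 1, g, g),      # +Red
        (g + 1, g, g + 1),  # +Red and +Blue
        (g, g + 1, g),      # +Green
        (g, g + 1, g + 1),  # +Green and +Blue
        (g + 1, g + 1, g),  # +Green and +Red: largest bump
    ]

# 7 steps per gray level over 255 intervals, plus pure white:
print(255 * 7 + 1)  # 1786 perceived shades
```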
<p>There are a couple of ways to convert images to pseudogrey.</p>
<p>There is a Script-Fu available for download:</p>
<p class="aside">
<span>Downloading the Pseudogrey script</span>
The Script-Fu for <em>Pseudogrey</em> can be downloaded here:<br/>
<a href="http://registry.gimp.org/node/26515" style="font-size:1rem;">Pseudogrey on GIMP Registry</a><br/>
or downloaded from here: <br/>
<a href="https://docs.google.com/uc?export=download&id=0B21lPI7Ov4CVOW9yTnBtbjVlaEk" style="font-size:1rem;">Pseudogrey on Google Drive</a>
</p>

<p>Once the file has been downloaded and placed into your <em>Scripts</em> folder, the command can be found under:</p>
<p class="Cmd">
Colors &rarr; Pseudogrey…
</p>

<p>Alternatively, if <a href="http://gmic.sourceforge.net/" title="G&#39;MIC Homepage">G’MIC</a> is installed then the command can be found at the Black &amp; white filter:</p>
<p class="Cmd">
G’MIC &rarr; Black &amp; white &rarr; Black &amp; white
</p>

<p>At the end of all of the various options in the filter, there is a <em>Pseudo-gray dithering</em> option to apply the algorithm at various levels (higher levels increase the distance from true gray for each pixel).</p>
<p>Pseudogrey can be helpful in areas with slight tonal value changes over a large area, as this is often where banding will become visible in an 8-bit image.
While the differences may be slight in many cases, if allowing the tiniest amount of color shifting to creep into the image for an expanded tonal range is ok, then pseudogrey is a great option to have.</p>
<h2 id="gegl-c2g">GEGL C2G<a href="#gegl-c2g" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The Generic Graphics Library (GEGL) is the underlying graphics engine for GIMP.
There is one neat function in GEGL specifically for B&amp;W conversions called <em>Color 2 Grayscale</em> (c2g).
It can be found on the <em>Tools</em> menu in GIMP:</p>
<p class="Cmd">
Tools &rarr; GEGL Operation…
</p>

<p>Rolf Steinort covers c2g briefly in <a href="http://blog.meetthegimp.org/episode-084-the-3-letter-acronym-show/">episode 84 of Meet the GIMP</a>.
<a href="http://blog.wbou.de/index.php/2009/08/04/black-and-white-conversion-with-gegls-c2g-color2gray-in-gimp/">Paul Bou also looks</a> at using c2g for B&amp;W conversions in a little more detail, and <a href="http://jcornuz.wordpress.com/2009/05/30/could-this-be-the-ultimate-black-and-white-converter/">Joel Cornuz also asks</a> if c2g could be the “ultimate” B&amp;W converter.
It may not be worth all the hyperbole, but c2g does do some very interesting things.</p>
<p>The operation considers each pixel relative to its neighbors within a given radius.
The value determined is evaluated as a function of perceived luminance weighted against neighboring pixels.
The <a href="http://www.gegl.org/operations.html#op_gegl:c2g">description from GEGL.org</a> is:</p>
<blockquote>
<p>Color to grayscale conversion, uses envelopes formed from spatial color differences to perform color-feature preserving grayscale spatial contrast enhancement</p>
</blockquote>
<p>In practice, c2g will attempt to scale the values of pixels within its neighborhood (radius) to maximize contrast.
What some people like about c2g is that the operation will also introduce a nice range of synthetic grain during the conversion.
There are ways to minimize the resulting grain by adjusting settings, though.</p>
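<p>The broad idea can be caricatured in a few lines of Python (a toy sketch of the envelope concept only; GEGL’s real c2g is considerably more sophisticated):</p>

```python
import random

def c2g_sketch(lum, x, y, radius=300, samples=4):
    """Toy c2g: sample neighbors within `radius`, build a local min/max
    envelope of luminance, and stretch the pixel within that envelope."""
    h, w = len(lum), len(lum[0])
    lo = hi = lum[y][x]
    for _ in range(samples):
        nx = min(w - 1, max(0, x + random.randint(-radius, radius)))
        ny = min(h - 1, max(0, y + random.randint(-radius, radius)))
        lo = min(lo, lum[ny][nx])
        hi = max(hi, lum[ny][nx])
    if hi == lo:
        return lum[y][x]
    return (lum[y][x] - lo) / (hi - lo)  # local contrast stretch

# A dark pixel next to a bright one gets pushed toward local black:
img = [[0.2, 0.8], [0.5, 0.5]]
print(c2g_sketch(img, 0, 0, radius=1, samples=8))
```

<p>In this sketch, a larger radius considers more of the image per pixel (fewer halos), while more samples smooth out the randomness (less grain), loosely matching the behavior of the real parameters.</p>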
<p>Let’s consider this test image:</p>
<figure class='big-vid'>
<img src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach luminosity GIMP' width='960' height='662' />
<figcaption>
Straight <em>Luminosity</em> desaturation in GIMP
</figcaption>
</figure>

<p>At first glance, GEGL c2g will likely produce ugly results.
The default settings are not conducive to producing a pretty image:</p>
<figure class='big-vid'>
<img src='https://3.bp.blogspot.com/-wGXTbiRqbwc/UO3uc418VjI/AAAAAAAAEws/8sdZBXcgN-U/w960/Cars-c2g-default.jpg' data-swap-src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach c2g default GIMP by Pat David' width='960' height='662' />
<figcaption>
    c2g conversion, default settings (radius 300, samples 4, iterations 10)<br/>
(Click image to compare to original)
</figcaption>
</figure>

<p>The default settings will (usually) produce a nasty halo effect on edges where the radius is not large enough to fully consider transitions.
The edges of the buildings/trees against the sky show this particularly.
There is also an excessive amount of synthetic graininess to the result.</p>
<p>Tweaking parameters can lead to better results at the cost of processing time.
GEGL c2g is not a fast algorithm.</p>
<p>Haloing can be decreased by increasing the radius and graininess can be decreased by increasing the samples or iterations.
Iterations seem to have a larger effect on overall noisiness in the result but (again) at the cost of increased processing time.</p>
<figure class='big-vid'>
<img src='https://2.bp.blogspot.com/-6YArLzaEH5g/UO3wD3AXOcI/AAAAAAAAExk/S8eAr2D0oQI/w960/Cars-c2g-r750-s8-i15.jpg' data-swap-src='https://3.bp.blogspot.com/-wGXTbiRqbwc/UO3uc418VjI/AAAAAAAAEws/8sdZBXcgN-U/w960/Cars-c2g-default.jpg' alt='Deerfield Beach c2g r750 s8 i15 GIMP by Pat David' width='960' height='662' />
<figcaption>
Better results after increasing some parameters (radius 750, samples 8, iterations 15)<br/>
(Click image to compare to default parameters)
</figcaption>
</figure>

<p>Increasing the radius helped to alleviate some of the halos and will allow the algorithm to spread the contrast over a larger area.
The increase in samples and iterations helps to keep the noise down to a more manageable level as well.
Refining even further yields slightly better results:</p>
<figure class='big-vid'>
<img src='https://2.bp.blogspot.com/-lqqXT-1WS5c/UO3zfMVGNOI/AAAAAAAAEyc/GNUDbf10f_U/w960/Cars-c2g-r1500-s8-i20.jpg' data-swap-src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach c2g r1500 s8 i20 GIMP by Pat David' width='960' height='662' />
<figcaption>
Better results after increasing some parameters (radius 1500, samples 8, iterations 20)<br/>
(Click image to compare to original)
</figcaption>
</figure>

<p>At this point the noise is nicely suppressed while the halos have mostly been eliminated.
The overall image still has more contrast than the straight luminosity desaturation (click to compare) and the contrast has been <em>weighted for the surrounding pixels as well</em>.</p>
<p>Where a luminosity desaturation chooses a pixel value based on the perceived color brightness, c2g does the same while also weighting the result relative to neighboring pixels.</p>
<p>For example, below is an optical illusion showing the effect on perceived luminosity relative to nearby brightness:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-OID1AdW-hNU/VCRoplYzRLI/AAAAAAAAAIk/BiUyArqPQA8/w507-h395-no/Same_color_illusion.png' alt='checkerboard luminosity optical illusion' width='507' height='395' />
<figcaption>
Square A and B are the same value of gray!
</figcaption>
</figure>

<p>Squares A &amp; B are the same exact shade of gray.
The reason we perceive B as lighter than A is due to the way our eyes are perceiving nearby colors (and our expectations are strengthened by the checkerboard pattern as well).</p>
<p>The results of running the image through c2g align the pixel values closer to what our eyes see:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-1hkcjYC9M8g/VCRoplfiphI/AAAAAAAAAIo/p_VGtseYAXE/w507-h395-no/illusion.png' alt='checkerboard luminosity optical illusion' width='507' height='395' />
<figcaption>
After letting c2g do its thing
</figcaption>
</figure>

<p>This operation can be very handy for bringing out micro-contrasts in an image (or increasing global contrast at large radius settings).</p>
<h2 id="conversion-examples">Conversion Examples<a href="#conversion-examples" class="header-link"><i class="fa fa-link"></i></a></h2>
<p><em>Finally</em>, a look at a simple workflow for applying these various methods of grayscale conversion to arrive at a final result.</p>
<p>The overall workflow here will be to decompose the image to various grayscale layers.
Then to investigate each of the different versions to identify features of interest aesthetically.
Finally, combine the different decompositions and mask accordingly to highlight those features or tones.</p>
<h3 id="pretty-woman">Pretty Woman<a href="#pretty-woman" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Do a <a href="https://www.flickr.com/creativecommons">Creative Commons search</a> on Flickr, and it’s <em>very</em> likely that photographer <a href="https://www.flickr.com/photos/72213316@N00/">Frank Kovalchek</a> will show up in some fashion.  He liberally licenses many photographs under <a href="http://creativecommons.org/">Creative Commons</a> licenses, and we will be using one of his portraits for this first example.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-uac9hP5_BH8/VCWKk9tPJXI/AAAAAAAAAKI/x_7FP3Zp9QA/w640-no/aldude-color.jpg' alt='GIMP B&W base image by Frank Kovalchek' width='640' height='801' />
<figcaption>
<a href="http://www.flickr.com/photos/72213316@N00/4589410278"><em>What a sweet looking portrait</em></a> by <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a> on Flickr
(<a class='cc' href='https://creativecommons.org/licenses/by/2.0/' title='Creative Commons - By Attribution'>cb</a>)
</figcaption>
</figure>

<p>Utilizing <a href="#the-script">the script from earlier</a> to quickly break the image down into multiple layers using different decomposition modes produces a nice array overview to consider:</p>
<figure class='big-vid'>
<img src='https://lh6.googleusercontent.com/-puR1O1BYDKg/VCWQ8KlJGoI/AAAAAAAAAKo/pHHv5g7OMEI/w960-no/aldude-array.jpg' alt='GIMP B&W Decompose Array' width='960' height='1202' />
</figure>

<p>These various decompositions supply a large number of possible variations on the way to a finished product.
Keep in mind that the goal in this example is to maintain good tonal density as well as to impart a sense of texture and detail.</p>
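<p>For reference, the “Luma” decompositions are just weighted sums of the R, G, and B channels. A minimal sketch, assuming the standard Rec.709 and Rec.601 coefficients that the Y709F and Y470F modes are named for (close to, if not byte-exact with, what GIMP computes):</p>

```python
# Standard luma weights; GIMP's Luma Y709F / Y470F decompositions are
# assumed to use these (or very close) coefficients.
REC709 = (0.2126, 0.7152, 0.0722)   # Rec.709 (Luma Y709F)
REC601 = (0.299, 0.587, 0.114)      # Rec.601 / BT.470 (Luma Y470F)

def luma(rgb, weights):
    """Weighted sum of (R, G, B) -> a single gray value."""
    return sum(c * w for c, w in zip(rgb, weights))

# A pure green pixel reads much lighter than a naive average (1/3) would:
print(luma((0.0, 1.0, 0.0), REC709))  # 0.7152
print(luma((0.0, 1.0, 0.0), REC601))  # 0.587
```

This is why the different Luma modes render the same scene with noticeably different tonal balances: each weights the color channels differently before collapsing them to gray.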
<h4 id="the-scarf">The Scarf<a href="#the-scarf" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>As good a starting point as any, consider the texture and detail of the scarf.  Looking at the various decompositions in the array, the question you should be asking yourself is:</p>
<blockquote>
<p>Which of these results produces the best quality/texture in the fabric of the scarf?</p>
</blockquote>
<p>Looking at the previews leads to three possible choices: <em>Luma Y709F</em>, <em>Luma Y470F</em>, and <em>HSL - Lightness</em>.
Of those let’s go with <em>Luma Y709F</em>.
This is very subjective, of course.
The important point to take away is that the choice is made because of qualities the layer possesses <em>for a particular purpose</em>.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-qmNK-DKRMX8/VCW1_ul2rJI/AAAAAAAAALA/HcGa1bm75GQ/w640-no/aldude-bw-y709f.jpg' alt='GIMP B&W y709f' width='640' height='801' />
<figcaption>
The Y709F - Luma channel as a “base” layer - chosen for the fabric texture
</figcaption>
</figure>


<p>The main focus of the image will be the model’s face, but you will still want to retain detail and texture in the scarf as well.</p>
<h4 id="the-skin">The Skin<a href="#the-skin" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Looking at the model’s skin, there is already fine detail, but it could use a bit more emphasis overall.
Perhaps brighten the skin a little, into a higher key, to offset the dark background and the scarf.
It would be nice to smooth and soften the skin tones as well.</p>
<p>Keeping that in mind, look back at the various decompositions again, this time with an eye towards skin tones and her face.
Not surprisingly, the <strong>RGB - Red</strong> channel looks very pretty (as well as the HSV - Value).
It’s fairly common for the red channel to be complementary to (Caucasian) skin.
There is even an old trick to use the red channel as an overlay on a color image to help “enhance” skin tones.</p>
<p>So let’s try that here.
Place the <em>RGB - Red</em> channel over the <em>Luma - y709f</em> channel and change the layer blending mode to <strong>Overlay</strong>.</p>
<figure>
<img src='https://lh5.googleusercontent.com/-K2mv-EBujdo/VCW5HbLDMQI/AAAAAAAAALU/zLAkLGclIQo/w640-no/aldude-bw-y709f-Red-Overlay.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-qmNK-DKRMX8/VCW1_ul2rJI/AAAAAAAAALA/HcGa1bm75GQ/w640-no/aldude-bw-y709f.jpg' width='640' height='801' />
<figcaption>
Luma Y709F base, with Red channel over (layer blend mode: Overlay)<br/>
(Click to compare to base Y709F - Luma)
</figcaption>
</figure>
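<p>The <strong>Overlay</strong> blend used here can be sketched numerically. This is the textbook Overlay formula with values normalized to 0–1; note that GIMP’s legacy Overlay mode historically behaved like Soft Light, so treat this as the standard definition rather than a byte-exact match for any particular GIMP version:</p>

```python
def overlay(base, top):
    """Classic Overlay blend for values in 0..1:
    darks multiply (get darker), lights screen (get lighter)."""
    if base < 0.5:
        return 2.0 * base * top
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - top)

print(overlay(0.25, 0.50))  # darks: behaves like Multiply
print(overlay(0.80, 0.80))  # lights: behaves like Screen, pushes brighter
```

The upshot is that Overlay increases contrast: the red channel pushes the already-bright skin brighter while deepening the shadows.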

<p>Visually this appears to have more impact, but the skin may be blown out a little too much.
One option to attenuate this would be to lower the opacity on the <em>RGB - Red</em> layer.</p>
<p class="aside">
Also, note that very often the visual impact may also be due to the higher contrast in the image at this point.
Sometimes it’s best to stand up and look away from the image for a while before committing to a change…
</p>

<p>The problem with adjusting the opacity for the entire layer is that the ratio of levels between the skin and scarf may not be desirable for the final output.
Adjusting the opacity might reduce the effect on the skin, but at the same time will reduce the effect on the scarf by an equal amount.
What is needed is a way to apply the effect stronger on the scarf or skin separately.</p>
<p>This is exactly what <em>Layer Masks</em> are for!</p>
<h4 id="masks">Masks<a href="#masks" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>At this point a layer mask could be added to the <em>RGB - Red</em> layer, and then painted by hand to modify the intensity by isolating the face and giving a little less opacity to the scarf.
It’s a lot of tedious, detailed work.</p>
<p>However, if you look back on the array of decompositions you may notice that channels like <em>RGB - Blue</em> and <em>RGB - Green</em> look pretty good for isolating the face from the scarf already.</p>
<p>So we are going to use the <em>RGB - Green</em> layer and apply it as a layer mask to the <em>RGB - Red</em> layer.</p>
<p>The <strong>Layers</strong> palette should look something like this in GIMP now:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-o_IpVAcmp1o/VCW-PQFwKRI/AAAAAAAAALo/rJEkns_zyJQ/s0-no/aldude-bw-y709f-RoverlayMask-Layers.png' alt='GIMP Layer Palette with layer mask' width='197' height='180' />
</figure>

<p>Keep in mind, a layer mask will be more transparent the darker the color is in it.
The lighter areas will show more of the layer it is applied to.
In this case, the lighter areas will allow more of the <em>RGB - Red</em> layer to show, while darker areas will show more of the layer below, <em>Luma - Y709F</em>.</p>
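<p>In other words, the mask value acts as a per-pixel opacity for the upper layer. A minimal sketch of the idea (values normalized to 0–1; the function name is illustrative, not a GIMP API):</p>

```python
def composite(lower, upper, mask):
    """Mix two layers through a layer mask (all values 0..1).
    Black (0) in the mask hides the upper layer; white (1) shows it."""
    return (1.0 - mask) * lower + mask * upper

print(composite(0.2, 0.9, 0.0))  # 0.2 -> lower layer shows through
print(composite(0.2, 0.9, 1.0))  # 0.9 -> upper layer shows through
```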
<p>The results at this point with the mask:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-I7vWCN-LKD0/VCW_h0zI3GI/AAAAAAAAAL8/0upOtVWT_54/w640-no/aldude-bw-y709f-Red-Overlay-Masked.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh5.googleusercontent.com/-K2mv-EBujdo/VCW5HbLDMQI/AAAAAAAAALU/zLAkLGclIQo/w640-no/aldude-bw-y709f-Red-Overlay.jpg' width='640' height='801' />
<figcaption>
<em>RGB - Red</em> as overlay with <em>RGB - Green</em> as a layer mask<br/>
(Click to compare without the layer mask)
</figcaption>
</figure>

<p>What this has done is isolate the model’s face from the surrounding scarf.
You can now modify the opacity of the layer, or adjust the values of the mask using <em>Levels</em> or <em>Curves</em> to adjust the intensity of the result.</p>
<p>Any changes to the <em>RGB - Red</em> layer will now be masked to apply mainly to the model’s face.</p>
<p>Looking at the results, the scarf has become much flatter in tone, while the model’s face has brightened up.
On consideration, the ratios look a bit backwards: the scarf has flattened out, and the face has brightened a little too much.</p>
<p>To flip the ratios, simply invert the colors of the layer mask.
Select the <em>mask</em> (not the layer itself!), and run:</p>
<p class="Cmd">
Colors &rarr; Invert
</p>

<p>The layers palette will now look like this:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-4-xP0wRsso8/VCXBL2IBTPI/AAAAAAAAAMQ/-PRpfnuFGKc/s0-no/aldude-bw-y709f-RoverlayMaskInvert-Layers.png' alt='GIMP Layer Palette with inverted mask' />
</figure>

<p>The result on the image so far:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-YjH7FDGZhYg/VCXCCnIdt-I/AAAAAAAAAMk/Am326xAfjos/w640-no/aldude-bw-y709f-Red-Overlay-Masked-Inverted.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-I7vWCN-LKD0/VCW_h0zI3GI/AAAAAAAAAL8/0upOtVWT_54/w640-no/aldude-bw-y709f-Red-Overlay-Masked.jpg' width='640' height='801' />
<figcaption>
Inverted mask results<br/>
(Click to compare to non-inverted mask)
</figcaption>
</figure>

<p>At this point the results look pretty nice and would make a fine stopping point.
The overlay and mask added some nice depth to the scarf fabric while maintaining a nice effect on the skin of the model as well.
More work could be done, if desired, adjusting the layer mask levels to increase or decrease the effect on the model’s skin, but this looks good as it is.</p>
<p>A final comparison of the results against a straight color desaturation:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-YjH7FDGZhYg/VCXCCnIdt-I/AAAAAAAAAMk/Am326xAfjos/w640-no/aldude-bw-y709f-Red-Overlay-Masked-Inverted.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-EFb0VVJFFRg/VCXDVN9PVOI/AAAAAAAAAM0/f5X1i55yGcs/w640-no/aldude-desaturation.jpg' width='640' height='801' />
<figcaption>
Final result<br/>
(Click to compare to straight color desaturation)
</figcaption>
</figure>

<p>This path was a little fussier than doing a straight color desaturation, but the result is much nicer and visually more interesting.</p>
<h3 id="methuselah">Methuselah<a href="#methuselah" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Well, this isn’t the <em>actual</em> <a href="http://en.wikipedia.org/wiki/Methuselah_(tree)">Methuselah</a>, but it is a similar species of Bristlecone Pine.  Once again, image courtesy of <a href="http://www.flickr.com">Flickr</a> user <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a>.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-uROcbQJ8fow/VCXUL3EMceI/AAAAAAAAANM/PXFRRZ3bAGg/w640-no/aldude2-color.jpg' alt='GIMP B&W Base Image 2 by Frank Kovalchek' width='640' height='853' />
<figcaption>
<a href="http://www.flickr.com/photos/72213316@N00/6956555116"><em>Bristlecone pine hanging on for dear life at 10,000 feet</em></a><br/>
by <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a> on Flickr (<a class='cc' href='https://creativecommons.org/licenses/by/2.0/'>cb</a>)
</figcaption>
</figure>

<p>As before, a first look at multiple decomposition modes originally pointed to <em>Luma - Y709F</em> as being a good candidate for the conversion.
In this case, the focus would be on the texture of the tree itself.
The <em>RGB - Green</em> decomposition also looks quite good to use as a base moving forward.</p>
<p>The primary focus is the gnarled old tree itself and the secondary focus the lighting of the sun across the ground.</p>
<figure>
<img src='https://lh5.googleusercontent.com/--F61om9H5tI/VCXdbocVErI/AAAAAAAAAN8/TcRjQ66gxbs/w640-no/aldude2-bw-green.jpg' alt='GIMP B&W Base Image 2 Green Channel' width='640' height='853' />
<figcaption>
<em>RGB - Green</em> channel decomposition
</figcaption>
</figure>

<p>While the <em>RGB - Green</em> channel is nice for the tree texture, the sky still appears too bright and the ground could be a bit darker compared to the tree.
The sunlight on the upper branches of the tree and topping the brush on the ground gets slightly lost when the sky is so bright comparatively.</p>
<p>Having found a good layer for the tree texture, the other decompositions are examined for something that represents the sky and ground a little better.
The <em>RGB - Red</em> channel is a good compromise (the <em>RGB - Blue</em> channel is a little too noisy).</p>
<figure>
<img src='https://lh3.googleusercontent.com/-hNjzGq6TQyk/VCXg6Bxp1RI/AAAAAAAAAOQ/Pk_Rr5LwPR4/w640-no/aldude2-bw-red.jpg' alt='GIMP B&W Base Image 2 Green Channel' data-swap-src='https://lh5.googleusercontent.com/--F61om9H5tI/VCXdbocVErI/AAAAAAAAAN8/TcRjQ66gxbs/w640-no/aldude2-bw-green.jpg' width='640' height='853' />
<figcaption>
<em>RGB - Red</em> channel decomposition<br/>
(Click to compare to <em>RGB - Green</em>)
</figcaption>
</figure>

<p><em>RGB - Red</em> looks like a great candidate for the sky and ground, while <em>RGB - Green</em> will do nicely for the tree textures.
As before, layer masks can be used to modify the mix of the two layers to arrive at a final result.</p>
<p>Set the <em>RGB - Green</em> channel above the <em>RGB - Red</em> channel on the layer palette, and add a layer mask to the <em>RGB - Green</em> channel layer initialized to <strong>Black (full transparency)</strong>.
This lets all of the underlying <em>RGB - Red</em> channel layer show through.</p>
<figure>
<img src='https://lh4.googleusercontent.com/-pkmlbFtjCJk/VCXiTrLvIUI/AAAAAAAAAOk/XNYLpZaLmb0/w197-h180-no/aldude2-bw-green-Layers.png' alt='GIMP B&W Green channel with mask' />
<figcaption>
Red channel layer, with Green channel over + mask
</figcaption>
</figure>

<p>Now with the layer mask active (see the white outline around the layer mask, not the layer itself above), paint with a white color to allow that portion of the <em>RGB - Green</em> channel layer to show through.
When painting with white, it will turn the current layer the mask is associated with opaque in those areas – so focus on painting white where the tree is.</p>
<p>Below is a quick mask to illustrate.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-zA0mNObEO1M/VCXj0WsYapI/AAAAAAAAAPI/8OEhalXw8Y8/w640-no/aldude2-bw-green-mask.jpg' alt='GIMP B&W Tree Layer Mask'  width='640' height='853' />
<figcaption>
It’s only a quick mask, don’t judge it too harshly…
</figcaption>
</figure>

<p>The layers at this point will look like this:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-6Vmzoy7z60I/VCXknZZpU9I/AAAAAAAAAPw/y4cHaEAoz5c/w197-h179-no/aldude2-bw-green-Layers-mask.png' alt='GIMP Layer Mask B&W Dialog' />
</figure>

<p>The results from applying the mask above to the image:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-pBi62NxVALI/VCXkNUuHfrI/AAAAAAAAAPg/1uL7GM0IL2E/w640-no/aldude2-bw-greenred-masked.jpg' alt='GIMP B&W Tree Final' data-swap-src='https://lh4.googleusercontent.com/-H-SKh5ALI2Q/VCYlWbprY7I/AAAAAAAAAQM/9W2w-PsDUXg/w640-no/aldude2-bw-desat.jpg'  width='640' height='853' />
<figcaption>
Final blend of <em>RGB - Red</em> and <em>RGB - Green</em> channels with mask<br/>
(Click to compare to straight desaturation)
</figcaption>
</figure>

<p>This could be a good final version, though there is still a bit of noise in the upper-left corner of the sky from the Red channel.
This could be fixed by adding another layer mask just for the sky which would allow adjustments to the levels of the sky relative to everything else.</p>
<h2 id="grain">Grain<a href="#grain" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In his great tutorial on <a href="http://www.prime-junta.net/pont/How_to/n_Digital_BW/a_Digital_Black_and_White.html">Digital Black and White</a>, Petteri Sulonen speaks a bit about grain in B&amp;W images.
There are a few different methods of adding synthetic grain to an image, but visually the results are often less than impressive.</p>
<p>Petteri was kind enough to make available a grain field that he processed himself from scanned film.
An easy way to add grain to an image using this grain field is to add it as a layer over the image, set the layer blending mode to <em>Overlay</em>, and adjust opacity to suit.</p>
<figure>
<img src='https://lh4.googleusercontent.com/-CsAOUoeabZU/VCmVscMpefI/AAAAAAAAAQo/Pd3BTmB49_k/w550-h315-no/aldude2-100-grain.png' alt='GIMP B&W Tree Grain Comparison' data-swap-src='https://lh4.googleusercontent.com/-2IKeDLcLjBI/VCmVsrB4oGI/AAAAAAAAAQs/OgkgI4FeTJI/w550-h315-no/aldude2-100-nograin.png' />
<figcaption>
100% crop with Petteri’s grain field applied as <em>Overlay</em> layer
(Click to compare no grain)
</figcaption>
</figure>
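<p>Numerically, applying the grain field this way amounts to an Overlay blend followed by an opacity fade back toward the original. A rough sketch (values normalized to 0–1; the 0.35 default opacity is an arbitrary illustration, not a recommendation from Petteri):</p>

```python
def overlay(base, top):
    # Classic Overlay blend (values 0..1).
    if base < 0.5:
        return 2.0 * base * top
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - top)

def with_grain(base, grain, opacity=0.35):
    """Blend a grain-field value over the base via Overlay, then fade
    the result back toward the untouched base using the layer opacity."""
    blended = overlay(base, grain)
    return (1.0 - opacity) * base + opacity * blended

# Mid-gray grain (0.5) is neutral under Overlay and leaves the base alone;
# only the light/dark specks of the grain field shift the tones.
print(with_grain(0.4, 0.5))
```

This is also why adjusting the grain layer’s opacity is such an effective control: it scales the deviation from the clean image linearly.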

<p>You can download the grain-field to use here: <a href="http://farm8.staticflickr.com/7228/7314861896_292120872b_o.png">Petteri Sulonen’s grain field</a>.</p>
<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are many ways to get to a monochrome image.
The important process to take away from this article is to consider the <em>elements</em> of the final image as built up from multiple conversion methods, controlling and applying them as needed to best serve the final result.</p>
<p>Mix and match the methods presented here to get to the best base for further modifications.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Commenting]]></title>
            <link>https://pixls.us/blog/2014/09/commenting/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/commenting/</guid>
            <pubDate>Mon, 15 Sep 2014 21:30:22 GMT</pubDate>
            <description><![CDATA[<img src="https://lh6.googleusercontent.com/-9gf4njPcjnY/VBdcwEcXBfI/AAAAAAAARcU/pRU0aMSq54o/w1650-no/Relics%2Bin%2BThomaskirche.jpg" /><br/>
                <h1>Commenting</h1> 
                <h2>I still don't have a good solution</h2>  
                <p>First things first.
I forgot to actually link to the new <a href="https://pixls.us/about" title="About Pixls.us">About page</a> in my last post.
So <a href="https://pixls.us/about">here it is</a>.
As with all things related to the site, any feedback, comments, or criticisms are welcome!</p>
<p>Speaking of feedback, comments, and criticisms, I wanted to write about it for a moment.</p>
<p>First, I want to thank everyone who has taken the time to contact me and provide me feedback on the site.
You have no idea how valuable it is, both as a motivator and as a means to know when something is off.
I appreciate and give my full attention to each and every person and idea thrown at me.  Thank you!</p>
<!-- more -->
<p>From the beginning I have been considering how to let everyone interact with the site and posts.
It would be so much easier for folks to leave a comment on a page (or forum) directly.
Particularly if it allows everyone to view the conversation.</p>
<h2 id="disqus"><a href="#disqus" class="header-link-alt">Disqus</a></h2>
<p>One thing I could do relatively easily is just use a third party commenting system, like <a href="https://disqus.com/">Disqus</a>.
They make it <em>so</em> easy it almost seems silly <strong>not</strong> to do it.
An account, a few lines of javascript, and done.</p>
<p>This method comes with a price, though.
A price in both user privacy concerns as well as the fact that comments are no longer mine (pixls.us) to manage and archive.
I don’t know that I’m willing to pay that price yet just for convenience.</p>
<p>If anything, I may set it up as a temporary solution while I work on something a little more long term.</p>
<h2 id="discourse"><a href="#discourse" class="header-link-alt">Discourse</a></h2>
<p>From what I’ve seen so far, <a href="http://www.discourse.org/">Discourse</a> is the long term solution that I would like to get up and running.
It’s also “Yet-Another-Thing” I should thank darix on <code>#darktable</code> for pointing me to.</p>
<p>The only drawback at the moment is that my hosting provider doesn’t have what I need to get it running (relatively easily).
There are a couple of options for hosted solutions that I may go with, but I want to focus on getting the content ready to go for an “official” launch before I get too far down that rabbit hole.</p>
<h2 id="conclusion"><a href="#conclusion" class="header-link-alt">Conclusion</a></h2>
<p>Yes, I know there’s a need for having some sort of commenting system available for everyone to participate!
I’ll get one running just as soon as I can.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[An About Page and Help]]></title>
            <link>https://pixls.us/blog/2014/09/an-about-page-and-help/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/an-about-page-and-help/</guid>
            <pubDate>Sun, 14 Sep 2014 02:36:18 GMT</pubDate>
            <description><![CDATA[<img src="https://lh3.googleusercontent.com/-95I6L_COmM4/U1rYUJcK7mI/AAAAAAAAPdQ/O-Omo-gyuwI/w1650/rolf.jpg" /><br/>
                <h1>An About Page and Help</h1> 
<h2>A little more about the site</h2>  
                <p>I’ve started working a bit on the “About” page for the site.
I wanted a place to highlight the <em>mission statement</em> I’m sort of working from:</p>
<blockquote>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>As well as a place to let users know who is behind the scenes working on the site.
It’s mostly me at the moment, but I’ve managed to talk someone into helping me…</p>
<h2 id="enter-rolf-steinort"><a href="#enter-rolf-steinort" class="header-link-alt">Enter Rolf Steinort</a></h2>
<p>Yep, that’s right.
I’ve managed to talk Rolf Steinort of <a href="http://meetthegimp.org" title="Meet the GIMP Website">Meet the GIMP</a> fame into helping me out with the site.
We’re still not 100% sure <em>exactly</em> what this means yet, but I have already been bouncing ideas off him for some of the site details anyway.</p>
<!-- more -->
<figure>
<img src="https://lh3.googleusercontent.com/-980jZBjJRq0/U0xPe73g3pI/AAAAAAAAPu4/RHg7C4aB148/w640-no/Rolf.jpg" alt="Rolf Steinort by Pat David" />
<figcaption>
Rolf Steinort, creator of <a href="http://meetthegimp.org">Meet the GIMP</a>.
</figcaption>
</figure>

<p>Meet the GIMP is over <strong>7 years</strong> old now, and quickly closing in on episode <strong>200</strong>!
I am excited (and honored) to have his expertise and help as we build this site out.
Especially because my feeble attempts at video productions are sad at best, and Rolf has the type of voice that could read the phone book and I’d still listen to it.</p>
<h2 id="content-status"><a href="#content-status" class="header-link-alt">Content Status</a></h2>
<p>I’m currently in the process of choosing which articles from my archive on <a href="http://blog.patdavid.net/p/getting-around-in-gimp.html" title="blog.patdavid.net Getting Around in GIMP">Getting Around in GIMP</a> I want to translate over and possibly update/rewrite.
If anyone has suggestions on which ones they’d like to see, you can always let me know.</p>
<p>I’m currently thinking possibly the big 
<a href="http://blog.patdavid.net/2012/11/getting-around-in-gimp-black-and-white.html" title="blog.patdavid.net: B&amp;W Conversion">B&amp;W Conversion</a>, the 
<a href="http://blog.patdavid.net/2014/02/25d-parallax-animated-photo-tutorial.html" title="patdavid.net: 2.5D Parallax Animated Photo">2.5D Parallax</a>, and/or the
<a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html" title="patdavid.net: Film Emulation in G&#39;MIC/GIMP">Film Emulation in GIMP/G’MIC</a>.</p>
<h2 id="breaking-up-long-pages"><a href="#breaking-up-long-pages" class="header-link-alt">Breaking Up Long Pages</a></h2>
<p>One other thing that I’m trying to decide on is if I should worry about breaking up long posts into multiple pages or not.
I don’t really have any interest in making users click through multiple pages to get all of the content (I personally hate doing this).</p>
<p>On the other hand, if the post is really long it could take some time to load all the assets if they all exist on a single page.
It may be a delicate trade-off for keeping a page responsive vs. requiring a user to click through to a second (or possibly third) page.
For the moment I’m erring on the side of convenience for the user and keeping things as long pages.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[The Big Picture]]></title>
            <link>https://pixls.us/blog/2014/09/the-big-picture/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/the-big-picture/</guid>
            <pubDate>Mon, 08 Sep 2014 16:06:28 GMT</pubDate>
            <description><![CDATA[<img src="https://lh4.googleusercontent.com/-RVauHGzbPRQ/UwvCg3d4Q6I/AAAAAAAAOS4/pLGsqpAM_8E/w1650-no/Into%2Bthe%2BFog.jpg" /><br/>
                <h1>The Big Picture</h1> 
                <h2>This is all about visual media after all...</h2>  
                <p>Sometimes I get into weird OCD mode where I need to have something for better or worse.
One of those things was a desire to break out of the mold of standard blog-type posts in articles for this site.
I’ve sometimes found images are relegated to second-class citizens on some page layouts that don’t do them justice.</p>
<p>I couldn’t let that happen here.
The problem was that I needed to do some things to make sure the typographic layouts were visually strong as well.
This meant adding control over the width and layout of the main text elements, with the downside of having to hack a bit to make images large.
<!--more-->
The solution I ended up with was to add a tag surrounding elements that I wanted to break out of the current layout.
So I would end up with something like this:</p>
<pre><code class="lang-markup">&lt;!-- FULL-WIDTH --&gt;
&lt;img src=&quot;http://to be full width.png&quot;/&gt;
&lt;!-- /FULL-WIDTH --&gt;
</code></pre>
<p>Technically, in my case, I’m using the <code>&lt;figure&gt;</code> tag with <code>&lt;figcaption&gt;</code>, so my actual markup for full-width images looks like this:</p>
<pre><code class="lang-markup">&lt;!-- FULL-WIDTH --&gt;
&lt;figure&gt;
&lt;img src=&quot;http://full-width-image-src.jpg&quot; /&gt;
&lt;figcaption&gt;A caption for my image&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!-- /FULL-WIDTH --&gt;
</code></pre>
<p>This let me capture that block in my processing when I build the site (metalsmith), and modify the page code to accommodate what’s needed to make it full-width.
The result of this is that I can now break images out of their containers to span the full width of a page, like this:</p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh3.googleusercontent.com/-dzpZ6jpJF7E/U0k05P-js8I/AAAAAAAAO7Y/CgrjtmXgoT8/w1650-no/Nikolaikirche.jpg" alt="Nikolaikirche, Leipzig, Germany by Pat David" /></p>
<p><figcaption>
<em>A view of <a href="http://en.wikipedia.org/wiki/St._Nicholas_Church,_Leipzig">Nikolaikirche</a> in Leipzig, Germany.</em><br/>
For you <a href="http://www.darktable.org">darktable</a> fans, that’s houz in the bottom right.
</figcaption>
</figure>
<!-- FULL-WIDTH --></p>
<p>Of course, this can get very tiring very quickly.
I find that it tends to break the flow of reading, so should be used sparingly and wisely in the context of the post or article.
I promise not to abuse it.</p>
<h2 id="attribution"><a href="#attribution" class="header-link-alt">Attribution</a></h2>
<p>It’s a small thing, but I’ve added an attribution line for the lede images that you’ll find in the bottom right of the actual image.
I will also be incorporating the <a href="http://creativecommons.org/" title="Creative Commons">Creative Commons</a> icon fonts to support proper attribution notice as well.
Once I’ve done that, I will include a similar style attribution for other images (as it stands now, they can be put into the <code>&lt;figure&gt;</code> image caption).</p>
<h2 id="video-killed-the-radio-star"><a href="#video-killed-the-radio-star" class="header-link-alt">Video Killed the Radio Star</a></h2>
<p>Of course, sometimes what is needed to really explain a concept is to use a video. 
So I couldn’t just ignore a way to get good video styling.</p>
<p>My first hurdle was to find a way to keep the video container fluid with the rest of the page.
Remember, the page is built to be responsive, so it’s a single page served to all devices.
This means that I need to adapt to all possible viewing device screen resolutions (as well as possible).</p>
<p>Getting images to scale and resize correctly to fit new sizes was easy.
Doing the same thing for video is not <em>as</em> easy, but wasn’t too bad.
Once again, I’m relying on the kindness of strangers…</p>
<h3 id="the-code"><a href="#the-code" class="header-link-alt">The Code</a></h3>
<p>The answer came in the form of an <a href="http://alistapart.com/article/creating-intrinsic-ratios-for-video/">A List Apart</a> article from 2009 by Thierry Koblentz.
The basic premise was to create a box to contain the video embed, then to stretch the video to fill the box dimensions.
Then I could still style the box to be responsive just like the other elements.</p>
<p>So I wrapped the video embed in a container box, and added some CSS classes:</p>
<pre><code class="lang-markup">&lt;div class=&quot;fluid-video&quot;&gt;
  &lt;iframe src=&quot;http://Normal Youtube Embed Code&quot;/&gt;
&lt;/div&gt;
</code></pre>
<p>Then it was just a matter of styling by setting the <code>padding</code> property to a percentage based on the width of the container.
To use a 16:9 ratio, the percentage should be 9 &divide; 16 = 56.25%:</p>
<pre><code class="lang-css">.fluid-video {
    position: relative;
    padding-bottom: 56.25%;
    padding-top: 30px;
    height: 0;
    overflow: hidden;
}
</code></pre>
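<p>The 56.25% figure is simply the height divided by the width of the target aspect ratio, expressed as a percentage of the container width. A quick sketch for computing it for other ratios:</p>

```python
def padding_percent(width, height):
    """padding-bottom percentage for a width:height aspect ratio."""
    return round(height / width * 100, 2)

print(padding_percent(16, 9))   # 56.25
print(padding_percent(4, 3))    # 75.0
print(padding_percent(21, 9))   # 42.86
```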
<p>With the container styled, it was a simple matter to fill the container with the embedded video:</p>
<pre><code class="lang-css">.fluid-video iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
}
</code></pre>
<p>Et voila!  Fluid video embeds that <em>hopefully</em> should maintain responsiveness.</p>
<p>Of course, I couldn’t leave well enough alone, and to coincide with the previous idea of displaying larger images, I have also added a little extra to embiggen video embeds as well (not full width stretching, but to give it a bit more prominence).</p>
<div class="big-vid">
<div class="fluid-vid">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/tHTZOu668JM?list=UUMJEM7T8fpJx5CFsi0BfDGA" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Technically I’m stretching the video to 150% of the width of its parent container, which happens to be the same container as the <code>&lt;p&gt;</code> elements (so roughly 150% of the text column width).
Mostly I was going to use this type of styling for highlight videos, and leave a normal video embed if it’s not the focus of the article.</p>
<p>Just for reference, a normal (fluid) embed would look like this relative to the surrounding text:</p>
<div class="fluid-vid">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/tHTZOu668JM?list=UUMJEM7T8fpJx5CFsi0BfDGA" frameborder="0" allowfullscreen></iframe>
</div>

<p>Which makes more sense for supporting material vs. feature videos.</p>
<h2 id="wrap-it-up-already"><a href="#wrap-it-up-already" class="header-link-alt">Wrap it up Already</a></h2>
<p>Ok, I could ramble on for longer, but I think my time is better spent getting back to writing the site.
I think the blog back-end and formatting is mostly done at this point, so on to feature articles!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[RSS Feed & Social Media]]></title>
            <link>https://pixls.us/blog/2014/09/rss-feed-social-media/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/rss-feed-social-media/</guid>
            <pubDate>Thu, 04 Sep 2014 15:15:33 GMT</pubDate>
            <description><![CDATA[<img src="https://lh6.googleusercontent.com/-LuDGEuWcAeQ/U_zlAWpDU-I/AAAAAAAARSA/wgRmO0BUoUw/s1920/Sarah-Original.jpg" /><br/>
                <h1>RSS Feed & Social Media</h1> 
                <h2>Finally getting the RSS feed working</h2>  
                <p>It took a bit of digging and wrestling to get there, but a couple of nights ago I also managed to get an RSS feed working for the blog posts on the site.
Honestly, I spent more time fiddling with dates in javascript than I should have.</p>
<p>I had to make some minor modifications this morning to accommodate where the location should be, but it should be live now.</p>
<p>The location is: <a href="http://pixls.us/blog/feed.xml" title="Pixls.us blog RSS Feed">http://pixls.us/blog/feed.xml</a>.</p>
<p>Both the blog index pages and post pages contain a <code>&lt;link&gt;</code> element that points to it, so most readers <em>should</em> find the feed if you point them at a page.
I’ll test it later, but the most important thing is the location is correct regardless of whatever hacking I do to the feed itself later.</p>
<!--more-->
<p>I’ve tested the feed quickly with <a href="http://feedly.com" title="feedly.com">feedly</a> and it appears to be working ok. If anyone else is using other feed readers and sees a problem, please let me know!</p>
<p>I intend to have a separate feed available for the articles and main site content when I get those ready to go (most likely at <a href="http://pixls.us/articles/feed.xml">http://pixls.us/articles/feed.xml</a>).</p>
<h2 id="social-media"><a href="#social-media" class="header-link-alt">Social Media</a></h2>
<p>I’ve also started (perhaps prematurely?) getting some social media accounts registered.
If for nothing else than to keep someone else from parking the accounts.</p>
<h3 id="google-"><a href="#google-" class="header-link-alt">Google+</a></h3>
<p>At the moment, I’ve got a <a href="https://plus.google.com/b/115344273324079495662/115344273324079495662/about" title="PIXLS.US Google+ Page">Google+ page</a> setup for the site.
I’ll try to keep updates flowing to that page as well (so if you happen to use g+, follow it!).
If you already <a href="http://plus.google.com/+PatrickDavid" title="Pat David on Google+">follow me</a> on g+ then you’ll know I’m fairly active there.</p>
<p>Now if I could just get google to allow my vanity URL to <em>only</em> read +pixlsus I’d be a happy camper!</p>
<h3 id="twitter"><a href="#twitter" class="header-link-alt">Twitter</a></h3>
<p>Back when I first registered this domain name, I apparently had the foresight to register a <a href="http://www.twitter.com" title="twitter.com">Twitter</a> handle as well.
So if you want to follow the conversation there, you can find me <a href="https://twitter.com/pixlsus" title="Pixls.us Twitter Account">@pixlsus</a>.
I even found a first tweet back from Dec 2011!</p>
<h3 id="flickr"><a href="#flickr" class="header-link-alt">Flickr</a></h3>
<p>I’ve also created a <a href="http://www.flickr.com" title="flickr.com">Flickr</a> group for users on Flickr to share photos or congregate.
You can find the group <a href="https://www.flickr.com/groups/pixlsus/" title="Pixls.us Flickr Group">here</a>.</p>
<p>Really this is just a pre-emptive action to have these channels available as soon as we get going.</p>
<h2 id="moving-along"><a href="#moving-along" class="header-link-alt">Moving Along</a></h2>
<p>I feel like I’m gaining a little traction here.
There’s a few more things I need to tidy up and make some design decisions on, but at least I have a clear vision going forward.
I’ve already got an article ported over from <a href="http://blog.patdavid.net/p/getting-around-in-gimp.html" title="Getting Around in GIMP">Getting Around in GIMP</a> on my blog to use as a test case for formatting.</p>
<p>As soon as I like how it’s looking, I’ll work on porting over some other articles as well.
If it goes well, I may just go ahead and update/re-write some more things as well to test with.
As soon as I have things in a relatively stable state I’ll also get some new material out as well!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[A Push Menu]]></title>
            <link>https://pixls.us/blog/2014/09/a-push-menu/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/a-push-menu/</guid>
            <pubDate>Wed, 03 Sep 2014 17:17:16 GMT</pubDate>
            <description><![CDATA[<img src="/images/logo/pixls-atom.png" /><br/>
                <h1>A Push Menu</h1> 
                <h2>A Fanc(y|ier) Menu</h2>  
                <p>So, I’ve had the idea in my head for a while that it would be nice to get the navigation out of the way.
When I’m reading an article or tutorial, I don’t want to be inundated with elements that aren’t pertinent to what I’m reading.
I want to focus on the content.
<!--more--></p>
<p>I had to think a bit on the best way to possibly achieve this.
One option was to remove all navigation from the top of the page, and instead show them at the end of the article.
This runs on the assumption that the user wants to read the page, and when they’re finished reading to possibly navigate somewhere else.</p>
<p>If they came to the page by mistake, or want to get out, they can always use “Back” on their browser.
If they made it to the end of the article, then that’s the point where they may want other navigation options.
(This is how the page is currently laid out).</p>
<p>If they don’t have javascript turned on, they can still use the site just fine.
(This is important for accessibility, and security for some folks).</p>
<h2 id="what-about-a-little-more-"><a href="#what-about-a-little-more-" class="header-link-alt">What About a Little More?</a></h2>
<p>This is <strong>2014</strong> for the love of Pete!
Surely we can reasonably expect that <em>most</em> users will have javascript?
Well, maybe not.
If they do, however, we might be able to create something <em>slightly</em> nicer.</p>
<p>I personally like the idea of a menu hidden out of the way until needed.
So I put a small floating logo in the top-left of the page.
If you scroll down, the logo should slide out of view (not needed).
If you scroll up, it should bring the logo back into view (possibly needed).</p>
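<p>The show/hide decision boils down to comparing scroll positions. Here is a minimal sketch of that logic (the site’s actual code isn’t shown in this post, so the names are illustrative):</p>

```javascript
// Illustrative sketch of the scroll-direction logic described above;
// not the site's actual implementation.
// Returns true when the floating logo should be visible.
function logoVisible(previousY, currentY) {
  // Scrolling up (or sitting at the very top) => show the logo
  return currentY < previousY || currentY === 0;
}
```

<p>Hooked up to a scroll listener, <code>previousY</code> would just be the last recorded <code>window.scrollY</code>.</p>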
<p>This has already been here since I started building these pages, but now I’ve added a little more…</p>
<h3 id="push-menu"><a href="#push-menu" class="header-link-alt">Push Menu</a></h3>
<p>By default a click on the floating navigation logo will scroll the page to the navigation links on the bottom of the page.
If JS is turned off, the floating logo will always be visible, and when clicked will still get you to the navigation links quickly.</p>
<p>If JS is turned on, though, the floating logo will now “push” the page to the side as it reveals a navigation menu on the left edge of the page.
The first set of links mirror those at the end of the page for site navigation.
The next set of links is a representation of the “Table of Contents” for the current page.</p>
<p>This is in anticipation of longer articles being posted soon.
I wanted to have an easier means of navigating long posts.</p>
<p><strong>Try it out!</strong></p>
<p>Clicking anywhere on the main page again will collapse the menu.</p>
<h4 id="pure-css-solution"><a href="#pure-css-solution" class="header-link-alt">Pure CSS Solution</a></h4>
<p>There may actually be a pure CSS solution for hiding/showing the menu.  The javascript is really only there to manage class states, all of the styling and transition effects are done in CSS.</p>
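<p>In other words, the script’s whole job is flipping a class on and off. A self-contained illustration of that idea (not the site’s actual code, which works on live DOM elements):</p>

```javascript
// Illustrative only: the JS just toggles a class name in a class
// attribute string; all styling and transitions stay in CSS.
function toggleClass(classAttr, name) {
  var classes = classAttr.split(/\s+/).filter(Boolean);
  var i = classes.indexOf(name);
  if (i === -1) classes.push(name);
  else classes.splice(i, 1);
  return classes.join(' ');
}
```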
<p>Honestly, though, I think I’m mostly done for the moment.  I may come back and re-visit the pure CSS solution later, but for now I want to shift focus to working on content pages (and the actual content itself!).</p>
<h4 id="start-simple"><a href="#start-simple" class="header-link-alt">Start Simple</a></h4>
<p>My thought process so far on building the site is to minimize any requirements on stuff that’s questionable.  I’m only assuming HTML/CSS for the most part.
This is to make sure everything can still be accessible to folks.</p>
<p>It’s a royal PITA, though.</p>
<h3 id="a-table-of-contents-"><a href="#a-table-of-contents-" class="header-link-alt">A Table of Contents!</a></h3>
<p>So the addition of basic navigational elements was a no brainer, but that menu bar looked awfully sparse.
So, I used the extra space to include a “Table of Contents” for the current post/article as well.  This is generated automatically from all of the HTML heading tags in the page (h1/2/3/4/5).</p>
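<p>As an illustrative sketch (not the site’s actual code), pulling a flat TOC out of rendered HTML can be done with a simple scan of the heading tags:</p>

```javascript
// Illustrative only: extract a table of contents from heading tags
// (h1-h5) in a rendered page. The real site presumably walks the DOM;
// this regex version is just a self-contained demonstration.
function extractToc(html) {
  var toc = [];
  var re = /<h([1-5])[^>]*id="([^"]*)"[^>]*>(.*?)<\/h\1>/g;
  var match;
  while ((match = re.exec(html)) !== null) {
    toc.push({
      level: Number(match[1]),
      id: match[2],
      text: match[3].replace(/<[^>]*>/g, '') // strip inner markup
    });
  }
  return toc;
}
```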
<p>My intention at the moment is to also have some sort of a reading progress indicator show up along the TOC.
I think this could provide nice visual feedback to users on where they are in an article, and how far along they might be.</p>
<p>Again, this is something that should degrade just fine in older browsers/no-js.  Those users simply won’t see the effect.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Building PIXLS.US]]></title>
            <link>https://pixls.us/articles/building-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/articles/building-pixls-us/</guid>
            <pubDate>Tue, 02 Sep 2014 16:49:28 GMT</pubDate>
            <description><![CDATA[<img src="https://pixls.us/articles/building-pixls-us/dot-open-eyes.jpg" /><br/>
                <h1>Building PIXLS.US</h1> 
                <h2>A journey of enlightenment...</h2>  
                <p>This is just a log of reference material for actually building this site.  It’s mostly for my own reference and edification.  If you’re reading this, good luck making sense of my notes…</p>
<h3 id="static-website-with-node-js-and-metalsmith">Static Website with Node.js and Metalsmith<a href="#static-website-with-node-js-and-metalsmith" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I decided to build this site as a static website.  This means that I’m generating all of the material on my local machines, and then compiling them into static webpages that are then uploaded to the server for serving.  While this does sound like a pain in the ass, there are static site generators that make this job much easier.</p>
<p>So I looked around a bit more and found that apparently static site generators are the hip new thing.</p>
<p>I originally started with <a href="http://nanoc.ws/">http://nanoc.ws/</a>.  While this was pretty interesting looking, I am just not a Ruby guy.  So I had the double-whammy of learning the static build system along with Ruby occasionally.  Plus, after a host of problems getting the correct ruby and gems installed on my OSX machine I just decided it wasn’t worth the hassle. (I have to switch between win at work, and OSX/Linux at home - so I needed a consistent environment).</p>
<p>I expanded my search and finally remembered <a href="http://nodejs.org/">Node.js</a>.  Looking around a bit more and I also found a static site generator for Node.js called <a href="http://www.metalsmith.io">Metalsmith</a>.
This was good, as I was already reasonably familiar with javascript.</p>
<p>Metalsmith basically just takes a directory of files, and passes them into a javascript environment for processing and output to a new directory, ready to be uploaded to a server.
This is how this page is being generated right now as well.</p>
<h4 id="installing-the-build-tools">Installing the Build Tools<a href="#installing-the-build-tools" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first thing to do is to get Node.js for your platform.  Once installed, you’ll have access to the commands <code>node</code> as well as <code>npm</code> (the Node package manager).
Installing Metalsmith from there is as simple as:</p>
<p><code>npm install metalsmith</code></p>
<p>Basically, Metalsmith just passes each of the directory contents through a stack of functions that you can use to process the files.  Many of these are available as plug-ins for Metalsmith.
For this site so far, I’ve been using these plug-ins:</p>
<ul>
<li>metalsmith-collections <code>npm install metalsmith-collections</code></li>
<li>metalsmith-permalinks <code>npm install metalsmith-permalinks</code></li>
<li>metalsmith-templates <code>npm install metalsmith-templates</code></li>
<li>metalsmith-markdown <code>npm install metalsmith-markdown</code></li>
</ul>
<p>For the templating option, I’m also using <a href="http://handlebarsjs.com/">Handlebars</a>.</p>
<p>There is a great tutorial on getting started with Metalsmith at <a href="http://www.robinthrift.com/posts/metalsmith-part-1-setting-up-the-forge/">Robin Thrift’s website</a>.</p>
<h4 id="project-structure">Project Structure<a href="#project-structure" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The structure of this site is still in flux.
By default metalsmith will look for a folder in the project root called “src”, and will output to a folder called “build”.
The site structure I have setup for this site is:</p>
<pre>
|-pixlsus/
    |-src/
        |-articles/
        |-images/
        |-js/
        |-pages/
        |-scripts/
        |_styles/
    |-templates/
    |-index.js
    |_package.json
</pre>

<h4 id="index-js">index.js<a href="#index-js" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The main processing file for building the site is <code>index.js</code>.</p>
<pre><code>var Metalsmith    = require(&#39;metalsmith&#39;),
    collections    = require(&#39;metalsmith-collections&#39;),
    permalinks    = require(&#39;metalsmith-permalinks&#39;),
    templates    = require(&#39;metalsmith-templates&#39;),
    markdown    = require(&#39;metalsmith-markdown&#39;),
    metadata    = require(&#39;./config.json&#39;),
    Handlebars    = require(&#39;handlebars&#39;);

Metalsmith(__dirname)
    .use(markdown({
        smartypants: true,
        gfm: true,
        tables: true
    }))
    .use(hyphenate_urls)  // custom function to hyphenate URLs (described below)
    .use(collections())
    .use(permalinks({
        pattern: &#39;:collection/:title&#39;
    }))
    .use(templates(&#39;handlebars&#39;))
    .destination(&#39;./build&#39;)
    .build();
</code></pre><p>There are a couple of other things I am doing for the templating, and one custom function I wrote to automatically hyphenate URLs. To avoid something like:</p>
<p> <code>articles/a%20new%20article/</code></p>
<p>I think this looks nicer: </p>
<p><code>articles/a-new-article/</code></p>
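<p>The post doesn’t show <code>hyphenate_urls</code> itself, but a minimal sketch of such a plugin might look like this (the names and slug rules here are my assumptions, not the actual code):</p>

```javascript
// Hypothetical sketch of a hyphenating plugin -- the actual
// hyphenate_urls function isn't shown in the post.
// A Metalsmith plugin receives the files object and mutates it in place.
function hyphenate_urls(files, metalsmith, done) {
  Object.keys(files).forEach(function (path) {
    // Replace runs of whitespace with hyphens and lowercase the path
    var clean = path.replace(/\s+/g, '-').toLowerCase();
    if (clean !== path) {
      files[clean] = files[path];
      delete files[path];
    }
  });
  done();
}
```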
<p>Honestly, if I was just testing things out, the bare minimum I could use to get by would be:</p>
<pre><code>var Metalsmith    = require(&#39;metalsmith&#39;),
    templates     = require(&#39;metalsmith-templates&#39;),
    Handlebars    = require(&#39;handlebars&#39;);

Metalsmith(__dirname)
    .use(templates(&#39;handlebars&#39;))
    .destination(&#39;./build&#39;)
    .build();
</code></pre><p>If you have a base skeleton of a site, this would be all you need to run.</p>
<h4 id="building-the-site">Building the Site<a href="#building-the-site" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The site can be built by entering the site directory, and issuing the command <code>node index.js</code>.</p>
<p>Wait a few moments, and you should find a <code>build/</code> directory full of your files ready to go.</p>
<h3 id="uploading">Uploading<a href="#uploading" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>My host doesn’t have rsync access directly, but I can use rsync over ssh:</p>
<pre><code>rsync -PSauve ssh --exclude=EXCLUDE_FILES build/ USER@pixls.us:/home4/pixlsus/public_html/
</code></pre><p>Which works just fine.</p>
<h2 id="todo">TODO<a href="#todo" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>List of stuff I still need to get to:</p>
<ul>
<li>Test porting one of the ‘Getting Around in GIMP’ articles<ul>
<li>Working on it.</li>
</ul>
</li>
<li>Port a few other test articles</li>
<li>Use collections in Metalsmith to collect articles of a type<ul>
<li>Generate a page of those.</li>
</ul>
</li>
<li>Probably a new index.html/front page.</li>
<li>Work on “About” page</li>
<li>Finish styling article pages.<ul>
<li><del>Particularly the links (Mobile is done? - Tablet is needed).</del></li>
</ul>
</li>
</ul>
<p>This list will grow, of course, as it needs to until we launch!</p>
<h3 id="blog">Blog<a href="#blog" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve started an article to represent blog posts on the site.
I intend for them to live at the path: <code>pixls.us/blog/YYYY/MM/title-of-post</code></p>
<p>The problem is that I can’t easily use <code>metalsmith-permalinks</code> for them.
There doesn’t appear to be a way to easily process a sub-folder of documents with a different path.
I don’t want the <code>articles</code> content to contain <code>YYYY/MM</code> in the path, but I <strong>do</strong> for blog posts.</p>
<p>So I think I’ll just have to write a plugin to handle that myself real quick.
Shouldn’t be too hard, just need to do something similar to what I already wrote for hyphenating URLs.</p>
<p>Basically, grab all blog posts, update their paths to the hyphenated version and change the source file to <code>index.html</code> in the directory. <strong>IF</strong> the file is not already in a sub-directory.</p>
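<p>The steps above could be sketched as a small Metalsmith plugin. This is only my hedged sketch of the idea (the names, date handling, and slug rules are assumptions, not the site’s actual code):</p>

```javascript
// Hypothetical sketch of the blog-permalink step described above;
// not the site's actual plugin.
function blogPermalinks(files, metalsmith, done) {
  Object.keys(files).forEach(function (path) {
    // Only rewrite blog posts that aren't already directory indexes
    if (path.indexOf('blog/') !== 0 || /\/index\.html$/.test(path)) return;
    var data = files[path];
    var date = new Date(data.date);
    var month = ('0' + (date.getUTCMonth() + 1)).slice(-2);
    var slug = path
      .replace(/^blog\//, '')
      .replace(/\.html$/, '')
      .replace(/\s+/g, '-')
      .toLowerCase();
    // New path: blog/YYYY/MM/title-of-post/index.html
    files['blog/' + date.getUTCFullYear() + '/' + month + '/' + slug + '/index.html'] = data;
    delete files[path];
  });
  done();
}
```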
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[On Building PIXLS.US]]></title>
            <link>https://pixls.us/blog/2014/09/on-building-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/on-building-pixls-us/</guid>
            <pubDate>Tue, 02 Sep 2014 14:35:51 GMT</pubDate>
            <description><![CDATA[<img src="https://lh5.googleusercontent.com/-VScF_Hq-YE8/VAOA5mdIchI/AAAAAAAARYs/uj6xLzvyRiY/s0/pixls-background.jpg" /><br/>
                <h1>On Building PIXLS.US</h1> 
                <h2>Some notes from the back end</h2>  
                <p>For the curious, and to serve as an introduction, I thought I’d make a few notes about how this site is built and what I’m currently obsessing over.
Hopefully this can help define what I’m up to in case anyone wants to jump in and help out.</p>
<h2 id="the-purpose"><a href="#the-purpose" class="header-link-alt">The Purpose</a></h2>
<p>The entire point of this site, its “mission statement” if you will, is:</p>
<blockquote>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>Subject to revisions, of course, but mostly sums up what I’d like to accomplish here.
I also think it’s good to have this documented somewhere to remind me. :)
<!--more--></p>
<h2 id="the-technical"><a href="#the-technical" class="header-link-alt">The Technical</a></h2>
<p>I had already started writing about this elsewhere, but I’m going to reiterate it here for posterity (when I wrote it earlier I hadn’t completed the blog portion of the site yet).</p>
<h3 id="static-pages"><a href="#static-pages" class="header-link-alt">Static Pages</a></h3>
<p>On the recommendation of <a href="http://nordisch.org/">darix</a> in the <small>#darktable</small> IRC channel, I looked into static site generators.
I was originally going to use some sort of CMS and build things out from there, but I have to thank darix for causing me to pause and to think carefully about how to proceed.</p>
<p>I realized that I wanted to keep things simple.
The main focus of the site is the articles themselves (a tutorial, workflow, or showcase).
Really, this content is static by nature - so it made sense to approach it in that light.</p>
<p>The idea is to have all of the site content exist locally on my machine, then to pass it through some sort of processor to output all of the website pages ready to upload to my server. I was already familiar with the process as the <a href="http://www.gimp.org">GIMP</a> website is built in a similar fashion.</p>
<p>I just had to find a static site generator that I could use and extend as needed.</p>
<h4 id="enter-metalsmith"><a href="#enter-metalsmith" class="header-link-alt">Enter Metalsmith</a></h4>
<p>There is a plethora of static site generators out there (apparently it’s the hip new thing?), so I just had to find one that I was comfortable with using and extending.
I needed it to do what I wanted and get the hell out of the way so I could focus on content.</p>
<p>Oh, and I had to be able to extend it as needed myself.  I’m already pretty comfortable writing for the web, so I decided to go with the <a href="http://nodejs.org" title="Node.js">Node.js</a>-based <a href="http://www.metalsmith.io/" title="Metalsmith website">Metalsmith</a>.
Mostly because I’m already comfortable making a mess in javascript.</p>
<p>Metalsmith basically takes a directory full of data, and passes those objects through any series of functions I want, munges them somehow, and then spits out my website.
It’s the munging part that’s fun, and at least I can extend/modify things as needed quickly and easily.</p>
<p>tl;dr: I use javascript to process the files and output the website ready to upload.</p>
<h3 id="responsiveness"><a href="#responsiveness" class="header-link-alt">Responsiveness</a></h3>
<p>I also wanted the site to work well across different screen sizes and devices.
So I’m trying to incorporate some responsiveness in the design. 
You can actually see it working right now by resizing your browser width.
The page should reflow and elements change size to adapt to the new viewport.</p>
<p>This lets me focus on the content while knowing that it should adapt as needed to the viewer.
As a great starting point, I used Adam Kaplan’s <a href="http://www.adamkaplan.me/grid/">Grid</a>.</p>
<h3 id="easy-reading"><a href="#easy-reading" class="header-link-alt">Easy Reading</a></h3>
<p>Taking a cue from the past, I’m also trying to maintain legibility and readability in the pages.
This means paying attention to simple things like characters per line, font choices, and spacing.
I’m not a designer, so this topic has been fun to learn about as I go.</p>
<p>The lines on this post, for instance, should settle in around 60-75 characters per line (I’m aiming for about 65). 
The <a href="http://baymard.com/blog/line-length-readability">Baymard Institute</a> has a nice summary of the idea behind this.</p>
<h3 id="attractive"><a href="#attractive" class="header-link-alt">Attractive</a></h3>
<p>This goes without saying, I think, but who wants to look at an ugly layout/site?
I can’t say this site is beautiful, but at least I’m consciously trying to make it a pleasant experience…</p>
<p>If not for everyone, at least for me…</p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh6.googleusercontent.com/-kif88EbVMDY/U9F1NpY4YpI/AAAAAAAAQ9I/upgSaUleOaA/s1920/Dot.jpg" alt="Dot Window Portrait"/></p>
<p><figcaption>
Attractive to me. Possibly to others, but definitely to me!
</figcaption>
</figure>
<!-- /FULL-WIDTH --></p>
<h3 id="ease-of-use"><a href="#ease-of-use" class="header-link-alt">Ease of Use</a></h3>
<p>All the pretty in the world won’t fix something that’s hard to use. 
So I’m trying to put thought into user interaction.
I try to get cruft out of the way so the focus is on the articles, while also providing easy navigation or interaction (that should get the hell out of the way when it’s not needed).</p>
<h2 id="in-summary"><a href="#in-summary" class="header-link-alt">In Summary</a></h2>
<p>That’s the short version.
There’s a million things going on right now in my head as I build the site out.
I’ve got most of the pieces sorted out, and just need to finish assembling them in a way that I like.</p>
<p>So we should be ready to get things kicked off before too long!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[Hello World!]]></title>
            <link>https://pixls.us/blog/2014/08/hello-world/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/08/hello-world/</guid>
            <pubDate>Mon, 25 Aug 2014 00:40:00 GMT</pubDate>
            <description><![CDATA[<img src="/images/logo/pixls-atom.png" /><br/>
                <h1>Hello World!</h1> 
                <h2>Let's see if I can get this thing off the ground...</h2>  
                <p>Well, technically this isn’t the first post on the site.
I had actually started with building out the temporary <a href="https://pixls.us/">Coming Soon</a> page.
Then I shifted focus on styling the main content page for the site (articles).
After a bit I realized that I should probably be working on some sort of blog posts as a means for folks to keep up with what I’m doing.</p>
<p>So, here we are!</p>
<h2 id="who-am-i-"><a href="#who-am-i-" class="header-link-alt">Who Am I?</a></h2>
<p><strong>I’m <a href="http://blog.patdavid.net" title="Pat David&#39;s Blog">Pat David</a>.</strong></p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh3.googleusercontent.com/-GkKqZhlz7YA/U_IWqqkLDYI/AAAAAAAARMI/Wcu4JLy3m1g/s2048/Pat-David-Headshot-Crop-2048-Q60.jpg" alt="Pat David Headshot" /></p>
<p><figcaption>Yes, I need a new headshot.</figcaption>
</figure>
<!-- /FULL-WIDTH --></p>
<p>I’m an occasional photographer and I dabble in digital artwork occasionally as the mood strikes me.
I also happen to be a fan of free software. Those two worlds collide fairly often, and lately I’ve been having a great time writing about them.</p>
<p>I’ve been writing tutorials on my blog as well as trying to modernize/update tutorials on the <a href="http://www.gimp.org" title="GIMP Website">GIMP website</a>. 
You could call me a <small>(small)</small> part of the GIMP team (but I’m trying to do more!).
I also try to help out where I can on other F/OSS projects as well (<a href="http://gmic.sourceforge.net" title="G&#39;MIC Homepage">G’MIC</a> is another place you’ll find me bumming around).
I do these things because I think it’s important to try and give back to the community in whatever way you’re capable of.</p>
<p><strong>I’m loud.</strong>  So I figured I could use that capability to help out.</p>
<p><small>(It’s my demented super-power).</small>
<!--more--></p>
<h2 id="so-what-s-going-on-here-"><a href="#so-what-s-going-on-here-" class="header-link-alt">So What’s Going on Here?</a></h2>
<p>Well, I mentioned on the main page that I felt like we could use a site/community dedicated to photography.  Particularly Free/Open Source Software and photography.</p>
<p>The problem I noticed is a lack of sites that focus explicitly on photography and workflows using F/OSS tools. 
There are plenty of blog posts on various sites, forum posts on various boards, and the occasional group on social media. 
There is <em>not</em> a great website to act as a portal specifically for photographic needs or interests.</p>
<p>It’s my sincere desire that I can build it.</p>
<p><em>I actually find it strange to write that.</em> 
How does this not exist already?!</p>
<h3 id="is-it-ready-yet-"><a href="#is-it-ready-yet-" class="header-link-alt">Is It Ready Yet?</a></h3>
<p>No.  Not quite.</p>
<p>I’m building this entire site from scratch, so it’s taking a little bit of time.
I only just got the blog portion finished, so hopefully that much is done.</p>
<p>I’ve also <em>mostly</em> finished what the main articles will look like.
I’m in the process of porting over some of my tutorials from my blog to here so that I can have some content to test things out with.
I enjoy doing this sort of thing, so it’s a nice way to relax for me.</p>
<p>After that I’ll just need to get a couple of other pages setup, and I should at least have the skeleton of the site up and running.
I promise, as soon as I have something to actually launch I will be loud and annoying about it.</p>
<h3 id="can-i-help-"><a href="#can-i-help-" class="header-link-alt">Can I Help?</a></h3>
<p>That’s the spirit!</p>
<p>Yes, absolutely. 
Just shoot me an email and I’ll be happy to answer any questions I can. 
If there’s some particular skill you’d like to bring, I’m all ears.
If you want to write an article or tutorial, let me know.</p>
<p><script type="text/javascript" language="javascript">
<!--
// Email obfuscator script 2.1 by Tim Williams, University of Arizona
// Random encryption key feature by Andrew Moulden, Site Engineering Ltd
// This code is freeware provided these four comment lines remain intact
// A wizard to generate this code is at http://www.jottings.com/obfuscator/
{ coded = "bMz@bMzkM5Yk.ptz"
  key = "PZRuYeaAcpsl30Th1G9JUtMdFbymI4j2BX8rozQk7OvqDVfCKxiNELSnWw5Hg6"
  shift=coded.length
  link=""
  for (i=0; i<coded.length; i++) {
    if (key.indexOf(coded.charAt(i))==-1) {
      ltr = coded.charAt(i)
      link += (ltr)
    }
    else {     
      ltr = (key.indexOf(coded.charAt(i))-shift+key.length) % key.length
      link += (key.charAt(ltr))
    }
  }
document.write("<a href='mailto:"+link+"'>Email me!</a>")
}
//-->
</script><noscript>Sorry, you need Javascript on to email me.</noscript></p>
  ]]>
            </description>
        </item>

    </channel>
</rss>
