Understanding Custom CSS Filters

September 29, 2012

Now that Custom CSS Filters have landed in Chrome, I thought it would be a good idea to write a small introduction to these things called "shaders" for web developers who are not familiar with the term (but who, like it or not, will eventually need to learn how to make, or at least use, one in the near future).

I'll try to keep this article simple, so bear with me.

Usually, if you read about shaders you'll notice that people talk about "vertex shaders" and "pixel" or "fragment shaders". So what are they? How do they work? And what are they for?

As you probably already know, computers have a microprocessor (in most cases an Intel or AMD chip) which we usually refer to as the CPU, or Central Processing Unit. However, most modern computers also include another chip called the GPU, or Graphics Processing Unit. While the CPU sits on the computer's motherboard, the GPU usually lives on a separate "video/graphics card" (typically made by NVIDIA or AMD/ATI). On laptops, however, the video card (and the GPU) is often embedded in the motherboard instead, which is what we usually call an "integrated graphics card"; as you might imagine, these are far less powerful than conventional, dedicated video cards. As with most things, there are exceptions: some CPUs include the GPU on the same chip (such as the Intel Core i series or AMD Fusion), and some laptops do ship with a dedicated video card that is separate from the motherboard.

Now, why do we need two processors? Can't we do everything with just one? We can, but GPUs have a radically different hardware architecture that allows them to calculate many things in parallel - something CPUs can't do nearly as well. They were made to accomplish different tasks: while CPUs are general-purpose tools, GPUs are really good at crunching numbers in parallel and are focused on workloads that benefit from that parallelization, such as 3D graphics, encryption/decryption or audio/video encoding/decoding.

In order to make use of all that parallel-processing power, we must write programs that are tailor-made for GPUs; these programs are called shaders.

There are many types of shaders, such as vertex shaders, geometry shaders, tessellation shaders and fragment shaders, but for now we'll stick with the two most common ones: vertex and fragment/pixel shaders. At this point you must be wondering: what's the difference between a vertex and a fragment/pixel shader? A vertex shader is a program that allows us to modify vertices. Vertices of what, you may ask? Well... vertices of any primitive shape. A vertex is exactly what the word implies: an intersection between two or more lines, as shown in the picture below:

If somehow we managed to make a program that modified the position of one (or more) of those vertices, that'd be a "vertex shader":
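To make that a bit more concrete, here's a minimal sketch of what a vertex shader looks like in GLSL, the language these shaders are written in. The attribute name a_position matches the one used in the examples later in this article; the offset itself is just an arbitrary illustration:

// A toy vertex shader: it runs once per vertex and can move it before drawing.
precision mediump float;

attribute vec4 a_position;   // the incoming vertex position

void main()
{
    vec4 position = a_position;
    position.y += 0.1;       // nudge this vertex upwards a little
    gl_Position = position;  // output the (possibly modified) vertex
}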

Fragment shaders, on the other hand, can be used to manipulate the colours and alpha values of pixels (among other attributes), based on the position of each vertex. For instance, building on the previous example, we might use a vertex shader to manipulate the "shape" of the triangle and a fragment shader to manipulate its colours, producing the following result:
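A fragment shader is just as small in code. Here's a minimal GLSL sketch that turns every pixel red; note that, as we'll see towards the end of this article, CSS Custom Filters replace gl_FragColor with special variables such as css_MixColor and css_ColorMatrix:

// A toy fragment shader: it runs once per pixel and decides that pixel's colour.
precision mediump float;

void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red, green, blue, alpha
}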

It's also important to understand that, due to the way GPUs are made (it's a bit more complex than this, but it will do for now), all the primitives we draw are composed of triangles (which is usually called "triangulated geometry"). This means that, in the GPU, a rectangle would look like this:

So far, so good. But what does this have to do with CSS? Again - bear with me, we're getting close.

For example, by default a simple IMG tag (or an entire DIV with many other elements inside it) would be composed of just two triangles:

Using this geometry would allow us to modify the initial shape by moving the vertices around:

However, we wouldn't be able to "bend" the image down the middle. Luckily for us, we can specify how many subdivisions we want (both horizontally and vertically):

Add enough subdivisions and you might be able to accomplish these sort of effects:

Remember that since we're just modifying the position of vertices, we only need a vertex shader to modify these shapes. It's also very important to note that the more subdivisions you add, the slower the program will execute. A good rule of thumb is to use as few subdivisions as you can to accomplish what you want.
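Jumping slightly ahead (the full syntax is explained later in this article), the subdivisions are declared as the "vertex mesh" part of the custom() filter. A hypothetical sketch, where bend.vs is a vertex shader of our own and passthrough.fs is a do-nothing fragment shader like the one shown near the end of this article:

img {
    /* "20 20" requests 20 horizontal and 20 vertical subdivisions */
    -webkit-filter: custom(url(bend.vs) mix(url(passthrough.fs) normal source-atop), 20 20);
}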

Fragment shaders, on the other hand, allow us to modify the pixels inside that shape. Below are four examples of the effects you can accomplish using a fragment shader:

And what's even better, you can chain several "filters" together:

Under normal circumstances you would need to specify both shaders. Luckily for us, CSS Custom Filters include "default" vertex and fragment shaders that don't do anything (called "pass-through"), which allows us to specify just one.

While learning how to write a shader is WAY outside the scope of this article (if you're interested, you can read @alteredq's excellent article on CSS Custom Shaders), I'll at least show you how to use an existing one.

In order to try the following examples you'll need to download Chrome Canary from this URL, and once you've installed it you'll need to manually enable CSS Custom Filters. To do so, type "chrome://flags" into your browser's address bar:

Once you're there, search for "Enable CSS Shaders" and click "Enable".

NOTE: In order to apply these changes you'll need to restart Chrome.

Many of you probably remember Internet Explorer's image filters which, in a few words, were pre-defined "effects" that you could attach to elements (such as images) and tweak by specifying a few parameters. With this proposal Adobe attempted to do something similar, and proposed a set of default filters such as:

Click here to try this example in your browser (again, remember that you need Chrome Canary, otherwise it won't work)

And if the W3C accepts the proposal, Adobe suggests adding the following filters as well:

  • brightness, contrast, exposure
  • halftone
  • motion-blur (radius, angle)
  • posterize (levels)
  • bump (x, y, radius, intensity)
  • generators
  • circle-crop (x, y, radius)
  • affine-transform (some matrix)
  • crop (x, y, w, h)
  • bloom (radius, intensity)
  • gloom (radius, intensity)
  • mosaic (w,h)
  • displace (url, intensity)
  • edge-detect (intensity)
  • pinch (x, y, radius, scale)
  • twirl (x, y, radius, angle)

Specifying a default filter is extremely easy; in your CSS you just need to write:

img {
    -webkit-filter: blur(5px);
}

And to chain multiple filters you just need to separate them using spaces:

img {
    -webkit-filter: blur(5px) opacity(50%) grayscale(50%);
}

/* OR */

img {
    -webkit-filter: blur(5px) 
                    opacity(50%)
                    grayscale(50%);
}

But besides these default filters, the most attractive thing about CSS Custom Filters is that you can make your own or use an existing one made by someone else. Under the current proposal, specifying your own filter is just as easy:

custom(«Vertex Shader» «Fragment Shader» [ , «Vertex Mesh» ] [ , «Params» ])

A good example would be:

img { 
    -webkit-filter: custom(none mix(url(tint.fs) normal source-atop));
}

In this particular case you'll notice that the first parameter of the custom() function is "none"; this is because we're specifying that we want to use the default, pass-through vertex shader. The second part, mix(url(tint.fs) normal source-atop), specifies the fragment shader and how its output should be blended with the element's original rendering. mix() takes three parameters:

  • url(): specifies the location of the fragment shader; if the shader can't be found (the URL returns a 403 or 404, for example) it falls back to the default, pass-through fragment shader
  • Blend Mode: each pixel is blended with the mix color using one of the predefined blend modes, identified by its blend-mode keyword
  • Alpha Compositing: each pixel is composited with the mix color using one of the predefined alpha-compositing operators, identified by its alpha-compositing keyword
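As a hypothetical variation, the same shader's output could be blended using the multiply blend mode instead of normal (keeping source-atop as the compositing operator):

img {
    -webkit-filter: custom(none mix(url(tint.fs) multiply source-atop));
}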

For example, this is what "tint.fs" looks like:

// tint.fs
precision mediump float;

void main()
{
    // css_ColorMatrix is multiplied with each pixel's colour; this matrix keeps
    // the red and alpha channels and zeroes out green and blue (hence the tint)
    css_ColorMatrix = mat4(1.0, 0.0, 0.0, 0.0,
                           0.0, 0.0, 0.0, 0.0,
                           0.0, 0.0, 0.0, 0.0,
                           0.0, 0.0, 0.0, 1.0);
}

And it produces the following result:

Click here to try this example in your browser

Another thing that you can do is to make (or use) shaders that you can pass parameters to. For example, below I'm going to modify my existing shader (tint.fs) so that I can specify values for the R (Red), G (Green), B (Blue) and A (Alpha) channels of the shader from within CSS.

// tint.fs, modified to accept parameters
precision mediump float;

// these uniforms are set from CSS (see below); each one scales its own channel
uniform float r, g, b, a;

void main()
{
    css_ColorMatrix = mat4(r, 0.0, 0.0, 0.0,
                           0.0, g, 0.0, 0.0,
                           0.0, 0.0, b, 0.0,
                           0.0, 0.0, 0.0, a);
}

To pass these parameters to the shader, all I need to do now is separate them with commas, specifying the name of each variable and the value I want it to have:

img {
    -webkit-filter: custom(none mix(url(tint.fs) normal source-atop), r 1.0, g 0.5, b 1.0, a 1.0);
}

Click here to try this example in your browser

What's even better is that you can combine these filters with CSS keyframe animations to accomplish really beautiful effects, as shown in the link below:

Click here to try this example in your browser
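As a rough idea of how this can work, here's a minimal sketch (the shader and parameter names are the ones from the tint.fs example above; the keyframe name and timing values are arbitrary). The animation simply interpolates the g parameter between 0 and 1 over time:

@-webkit-keyframes pulse-tint {
    from { -webkit-filter: custom(none mix(url(tint.fs) normal source-atop), r 1.0, g 0.0, b 1.0, a 1.0); }
    to   { -webkit-filter: custom(none mix(url(tint.fs) normal source-atop), r 1.0, g 1.0, b 1.0, a 1.0); }
}

img {
    -webkit-animation: pulse-tint 2s ease-in-out infinite alternate;
}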

And obviously you can also combine custom shaders with preset shaders as well:

Click here to try this example in your browser
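Chaining works exactly as it does for the preset filters - separate each filter with a space. A hypothetical sketch using the tint.fs shader from before:

img {
    /* a preset blur followed by our custom tint filter */
    -webkit-filter: blur(3px) custom(none mix(url(tint.fs) normal source-atop), r 1.0, g 0.5, b 1.0, a 1.0);
}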

So far I've been trying to avoid talking about OpenGL's rendering pipeline (if you want to know more about it, again, refer to @alteredq's article, which is a bit more technical than this one), but you do need to understand one very important thing: vertex shaders are processed before fragment shaders. What does this mean? It basically means that we can write fragment shaders whose output depends on the position of the vertices in 3D space.

In layman's terms, imagine that we have taken a surface and specified that we want it to have two columns. Then suppose we used a vertex shader to "fold" one of those columns, bending it to one side. If we didn't specify a fragment shader, the result would probably look like this:

This doesn't look very impressive on its own, but we could use a fragment shader to "darken" the columns depending on their position in 3D space, giving the impression that "less light" is hitting that surface:

We can start by writing a simple vertex shader and combining it with a pass-through fragment shader:

// transform.vs
precision mediump float;

attribute vec4 a_position;          // the current vertex of the mesh

uniform mat4 u_projectionMatrix;    // supplied by the browser

void main()
{
    vec4 position = a_position;

    // offset every vertex a little...
    position.x -= 0.20;
    position.y -= 0.25;

    // ...and pull it 30% of the way towards vec4(0.5), shrinking the mesh
    vec4 v = mix(position, vec4(0.5), 0.3);

    gl_Position = u_projectionMatrix * v;
}

// passthrough.fs
precision mediump float;

void main()
{
    if (gl_FrontFacing) {
        // a fully transparent mix colour leaves front-facing pixels untouched
        css_MixColor = vec4(0.0);
    }
}

#img {
    -webkit-filter: custom(url(transform.vs) mix(url(passthrough.fs) normal source-atop), 1 1);
}

Click here to try this example in your browser

But of course, this isn't very flexible or dynamic. However, we can make a small modification to our vertex shader so that it accepts parameters, such as a transform, which takes CSS 3D transform functions like rotateX, rotateY, rotateZ, perspective, translateX, translateY, translateZ and so on:

// transform.vs
precision mediump float;

attribute vec4 a_position;

uniform mat4 u_projectionMatrix;
uniform mat4 transform;             // set from CSS through the "transform" parameter below

void main()
{
    vec4 position = a_position;
    position.x -= 0.20;
    position.y -= 0.25;

    vec4 v = mix(position, vec4(0.5), 0.3);

    gl_Position = u_projectionMatrix * transform * v;
}

#img {
    -webkit-filter: custom(url(transform.vs) mix(url(passthrough.fs) normal source-atop), 1 1 border-box, transform rotateX(30deg) rotateY(30deg));
}

Click here to try this example in your browser

With a little more work we can eventually write a vertex shader (that "bends" the image) and a fragment shader (that darkens some pixels depending on the position of the vertices), making our image look like a book, as seen below:

Click here to try this example in your browser
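To give you a rough idea (the actual shaders behind the demo are more involved), here's a hypothetical sketch of how a fragment shader can react to vertex positions: the vertex shader writes a varying (v_shade, an invented name) derived from each vertex's position, the GPU interpolates it across each triangle, and the fragment shader uses it to scale the colour channels, darkening the pixels:

// bend.vs (sketch) - besides moving vertices, pass a per-vertex shading factor along
precision mediump float;

attribute vec4 a_position;
uniform mat4 u_projectionMatrix;

varying float v_shade;                 // interpolated across each triangle

void main()
{
    vec4 position = a_position;
    // ... move the vertex here to "bend" the page ...

    v_shade = 1.0 - abs(position.x);   // darker towards the horizontal edges of the mesh

    gl_Position = u_projectionMatrix * position;
}

// shade.fs (sketch) - darken each pixel according to the interpolated factor
precision mediump float;

varying float v_shade;

void main()
{
    // scale red, green and blue by v_shade; leave alpha untouched
    css_ColorMatrix = mat4(v_shade, 0.0, 0.0, 0.0,
                           0.0, v_shade, 0.0, 0.0,
                           0.0, 0.0, v_shade, 0.0,
                           0.0, 0.0, 0.0, 1.0);
}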

So, while the good news is that this is now available for us to use, I'm afraid I have some bad news as well.

First: Like I mentioned before, this is only available in Chrome Canary. I think it's only a matter of time until it also gets implemented in Firefox and Opera, but... I'm not so sure that it will ever get implemented in Safari, and you can be absolutely sure that it will never be implemented in Internet Explorer, at least not in the foreseeable future, as Microsoft, just like with WebGL, considers shaders to be unsafe.

Second: There are some polyfills out there for CSS Filters. This one, for example, allows you to use some of the default CSS filters (such as blur, sepia, grayscale, etc.) and works in Safari and in Internet Explorer 7 and 8, but it doesn't support custom shaders, nor does it work in Safari for iOS or in the most recent versions of IE. This other one does support custom shaders, but it uses WebGL... so, again, it won't work in Safari for iOS or in any version of IE.

Third: For security reasons, CSS Shaders won't be able to read the pixels of DOM elements and won't be able to "paint" new pixels either (they will, however, be able to modify existing ones). In practice, Adobe suggested using three special variables called css_ColorMatrix, css_MixColor and css_FragColor, but the latter has not been implemented in Chrome Canary yet.

Conclusion

While this technology is extremely interesting and enables the development of incredible tech demos, I wouldn't recommend using it on any production site just yet.