
Wednesday, 28 March 2012

New site

Just realised that I didn't leave a forwarding post here. I've migrated my blog across to Wordpress for various reasons. You can keep following my progress over at www.rabidlion.com

Hope to see you over there!

Saturday, 25 February 2012

2D Lighting System tutorial series: Part 4 - Point lights: 'Unwrapping' the shadow casters



In the last part of this series we outlined the algorithm that our lighting system will be using, as well as writing some of the code for our LightRenderer class. If you haven't already done so, you'll need to go back and work through that part, otherwise I'm afraid this tutorial won't make a lot of sense!


There are still some fairly big gaps to fill in our system, namely the shaders that will be doing all of the work, as well as some of the code that supports these shaders. Over the next few parts of the series we'll be focussing on writing these shaders, and adding them to our system.


To start with we will be looking at PointLights. 


PointLights and SpotLights, whilst similar, require different shaders to unwrap their rays into the columns of our render target 'unwrapTarget'. This might seem odd at first. Surely a PointLight is just a SpotLight with an arc of 360 degrees?


Well, yes, and we could have written our shaders that way, but it would have made our lives (and the lives of any other programmer using our system) rather complicated. In fact, it turns out that because PointLights always light a full 360 degrees around them, their shaders are rather simpler than those of SpotLights. Let's have a look at why this is.




The PointLight 'Unwrap' algorithm


With SpotLights we have to determine for each pixel whether it is inside or outside the arc of the light, as those outside of this arc won't lie on any of the light's rays. For PointLights we don't need to do this, as every pixel is within the arc of the light, and so every pixel will map onto one of the light's rays.


Let's outline the steps we need to take in order to unwrap our rays. Remember - the Graphics Processor effectively iterates over each pixel of the surface we are drawing to, i.e. the unwrapTarget. Recall that for any given pixel on our unwrapTarget, the 'column' of pixels it is in (i.e. its x coordinate) represents a ray emanating from the light, and the 'row' (y coordinate) represents how far along that ray the pixel is. With this in mind, the steps are:

1) Determine which ray on the Shadow casters texture corresponds to the current pixel on the unwrapTarget.


2) Calculate the normal of that ray (i.e. a vector pointing from the light along the ray, with length 1).

3) Determine the length of that ray (i.e. how long it would appear to be on the shadow casters texture).

4) Use the current pixel's y coordinate to determine how far along that ray it lies (and so how far from the light it is).

5) Multiply the normal from step 2 by the result in step 4.

6) Add the coordinates of the Light to those found in Step 5, convert the result to texture coordinates and sample that pixel from the shadow casters texture.

7) If the sampled pixel does not cast a shadow, then store the value '1' in the current pixel of the unwrapTarget. Otherwise, store the distance of the sampled pixel from the light (divided by the diagonal distance of the screen to scale it to the range 0 - 1).

Essentially we want to know what pixel on the shadow casters texture corresponds to the current pixel on the unwrapTarget, and then, depending on whether or not that pixel casts a shadow, we store a distance from the light to that pixel. Hopefully all will become clear as we write the shader, and I'll throw in some illustrations which should shed more light on the issue (pun semi-intended).


The Unwrap Shader

Open up your solution from Part 3 and add a new effect file to the Effects folder in the content project that we created. Name this file Unwrap.fx.

Delete the contents of the file that XNA has kindly added for us; we'll be writing our shader from scratch. If you're not familiar with shaders I suggest you go back and work through Part 2, or else try one of the tutorial series that I linked to at the bottom of that post.

First up, let's create a stub for the PixelShader we'll be writing:

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{


}

And while we're here, we'll add the technique:

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Now, before we continue, recall that we are intending to store the left and right halves of the screen in two different channels of the texture. This means that effectively each column of the unwrapTarget corresponds to two rays: one for the part of the target to the left of the light (which we represent in the red channel of the target) and one for the part of the target to the right of the light (which we represent in the green channel).


So for each step of our algorithm we'll need two sections of code, one for each of the two points on the shadow caster texture that our pixel maps to. 


The first step in our shader is to determine which ray (or rather rays) our pixel lies on. To do this we'll need some trigonometry. First we need to decide on which line should represent the angle zero. The obvious choices are either directly up or down. Up is the normal choice, however in normal geometry 'up' is the direction of the positive y axis. 


In the case of texture coordinates, the positive y axis points down, and so we shall choose this as our zero line.


Next we need to choose which direction (clockwise or anti-clockwise) is the positive direction. Normally this direction is clockwise, however, in our case, since we are dealing with the two sections of the texture separately, we can choose both directions to be positive. This will simplify some calculations we need to make down the line.


In order to determine what the angle of our ray is, we need to convert its x texture coordinate to an angle between zero (our line pointing straight down) and 180 degrees (the angle of the line pointing straight up). However working in degrees isn't particularly useful in trigonometry, as you may know. Instead we work in radians, in which case the line pointing directly upward from the light would have the angle PI.


 So we need to map our x texture coordinate (between 0 and 1), to our angle range of 0 - PI. To do this we simply need to multiply the x texture coordinate by PI:

float rayAngle = texCoord.x * PI;

For this to work we first need to define PI. Add the following line at the top of the file:

#define PI 3.14159265

Next, as described in step 2), we will calculate the ray normals (these are just vectors pointing from the light along the rays with length 1):

float sinTheta, cosTheta;

sincos(rayAngle, sinTheta, cosTheta);

float2 norm1 = float2(-sinTheta, cosTheta);

float2 norm2 = float2(sinTheta, cosTheta);

The intrinsic function sincos() takes an angle, and two floats, and stores sin of the angle in the first float, and cos of the angle in the second. Then we use the fact that, for angles that increase in a clockwise direction, the normal at an angle theta is given by (-sin(theta), cos(theta)). 


Note for mathematicians: Normally the normal is (+sin(theta), cos(theta)) for angles that increase clockwise. However this assumes that the axes have the positive x direction at 90 degrees clockwise from the positive y direction, similar to the normal way of drawing axes for a graph. Texture coordinates are the opposite of this (positive x is 90 degrees anti-clockwise of positive y), so the sign of the x component in our normals is reversed.


To get the normal for the ray on the other side of the light, we just need to change the sign of the x coordinate. This is because essentially the coordinate system for the other side of the light is a reflection of the normal coordinate system, reflected in the line that passes vertically through our light. I've drawn a diagram below to illustrate:










Next we need to calculate the length of the rays. This is actually quite complex, so we will start by considering only those rays to the left of the light and will then extend the technique to cover those to the right of the light as well.


So, how do we determine the length of a given ray? Well, the length of the ray is determined by the distance between the light and whichever edge of the texture the ray hits. So first of all we'll need to know the coordinates of our light. The only way we're going to get that is through a parameter, so we'll add the following at the top of our shader file:

float2 LightPos;

Next we need to determine which edge the ray will hit. This is a bit trickier than it sounds, and took me a bit of time to figure out. My first instinct was to calculate the angle between the corners and the light and compare it to the angle of the ray, but that involves inverse trigonometry, which is expensive, and once that's done we would still need to calculate where along that edge the ray hit. 


The alternative is to calculate the point at which the ray intersects each of the edges of the rectangle; whichever point of intersection is closest to the light is the one we want. So how do we do this?


Well, let's start with the top edge of the rectangle. When the ray intersects this edge, the y coordinate of that point will be 0 (as the y coordinate of the top edge of the rectangle is 0). We also know that the point lies somewhere on the ray. Any point along this line can be described as some multiple of the normal we calculated above, added to the position of the light. In other words a point that is say distance 5 away from the light on the ray with normal norm1 would have the following coordinates:


(5 * norm1.x + LightPos.x, 5 * norm1.y + LightPos.y)


However, we know that norm1 is (-sinTheta, cosTheta) from above, so a point that is distance d from the light along this ray would have the following coordinates:


(-d * sinTheta + LightPos.x, d * cosTheta + LightPos.y)


Now, we know from above that at the point our ray intersects the top edge of the rectangle, the y coordinate is 0, which means that d * cosTheta + LightPos.y = 0.


We can rearrange this to get d = -LightPos.y / cosTheta, which tells us how far from the light the point of intersection between the ray and the top edge of the texture is! 
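For example, if the light sits at LightPos.y = 300 and the ray's angle is 2.5 radians (so cosTheta is roughly -0.80), then d = -300 / -0.80, which puts the intersection with the top edge about 375 pixels from the light.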


If we follow the same procedure for the other two possible edges (not 3, as a ray on the left of the light can never hit the right hand edge of the texture), we find that the distance to the left edge is:


d = LightPos.x / sinTheta;


and the distance to the bottom edge is:


d = (TextureHeight - LightPos.y) / cosTheta;


Where TextureHeight is the height of the texture. As an aside, we'll need to add TextureHeight as a parameter at the top of the file:

float TextureHeight;

So now we simply need to choose the smallest positive distance of these 3 distances, and we'll have the length of our ray! Why the smallest positive distance? Well, let's imagine we have a ray that hits the top of the screen. If you extend this ray on the other side of the light, it will also hit the bottom of the screen. However, our equations above will give us the distance as a negative, because it is in the opposite direction to the normal vector. Clearly we want to ignore this result, so we only choose from the positive results. 


What does this look like in code? Something like this (but don't copy it down just yet!):



float LightDist;

float topHit = -LightPos.y / cosTheta;

float leftHit = LightPos.x / sinTheta;

float bottomHit = (TextureHeight - LightPos.y) / cosTheta;

topHit = (topHit < 0) ? 2 * TextureWidth : topHit;

leftHit = (leftHit < 0) ? 2 * TextureWidth : leftHit;

bottomHit = (bottomHit < 0) ? 2 * TextureWidth : bottomHit;

LightDist = min(topHit, min(leftHit, bottomHit));

You may be wondering why we're setting each of the 'Hit' values to 2 * TextureWidth if they are less than 0. This is because we need them to be positive (otherwise min would return them instead of the smallest positive value), but we also need to make sure the replacement value is longer than any of the genuine positive values, hence we choose a value longer than any ray can possibly be. We'll need to add TextureWidth as a parameter while we're here:

float TextureWidth;
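As a quick sanity check: on a typical 1280 x 720 screen no ray can be longer than the diagonal (roughly 1469 pixels), whereas 2 * TextureWidth is 2560, so a replaced negative value can never win the min() below.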

However, each of those ternary operators (?: operators) is a 'branching' instruction, which is quite expensive in shader programming. So, as a bit of an optimisation (this actually should result in fewer instructions), we change this code to the following (note, still not final code!):

float LightDist;

float3 hit = float3(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta);

hit = (hit < 0) ? 2 * TextureWidth : hit;

LightDist = min(hit.x, min(hit.y, hit.z));


The line where we use the ternary operator on hit actually performs the same operation on all 3 components at once, which is exactly what we need. However we can't quite use this code yet unfortunately, as we haven't yet considered the other side of the light.


Now we get to see where choosing to have the angles increase positively in both directions from the zero line will benefit us. If you look back at the equations for the distance along the ray to the top and bottom of the texture, you'll see that they only involve cosTheta. This in turn is because they only involve the y component of the normal. 


Now, if you look at the equations for our two normals, you will see that the y coordinate is the same for both, meaning that the distance along the ray to the top and bottom edges of the texture will be the same for both sides of the light. So, in order to incorporate the rays on both sides of our light into our code, we only need to calculate the distance to one more edge. In this case, using the same method as above, the equation will be:


d = (TextureWidth - LightPos.x) / sinTheta


Before we add that into our code, let's consider what will happen to LightDist. At the moment we only need a single float value, since we're only measuring a single distance. However, the distance to the nearest edge could easily be different between the rays on either side of the light (e.g. the light is in the middle vertically but very close to one of the side edges, and so very far from the other side edge). So we will need to record two distances, which we will do by converting LightDist to a float2. We'll also need to take this into consideration when it comes to calculating the value to store in LightDist.


So let's have a look at the new code (this will actually be the final version of this bit of code this time!):

float2 LightDist;

//...

float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

hit = (hit < 0) ? 2 * TextureWidth : hit;

LightDist = min(hit.wy, min(hit.x, hit.z));


The main thing we need to explain is the line where we assign a value to LightDist. With the swizzle above, LightDist.x ends up holding the minimum of the distances to the top, bottom, and right-hand side of the screen, which are held in hit.x, hit.z, and hit.w respectively. LightDist.y ends up holding the minimum of the distances to the top, bottom, and left-hand side of the screen, i.e. hit.x, hit.z, and hit.y.


Since both LightDist.x and LightDist.y need to know the minimum between hit.x and hit.z, we do that inside the nested min(), and then in the outer min() we find the minimum of that result with each of the right and left distance respectively. 


We have one more issue to deal with before we can move on to the next part of the shader. At certain angles either sinTheta or cosTheta may have the value zero. If that happens we will have a problem, as we will be dividing by zero, which in some environments would crash. In shaders it will just give us very strange results. Either way we don't want to do it, so we'll need to add some code to handle these occasions. Fortunately, it's very easy to compute the distances manually in these situations. If cosTheta is zero it means our rays are pointing directly right and left respectively, which means that the ray lengths are just TextureWidth - LightPos.x and LightPos.x.


If sinTheta is zero then either both rays are pointing up or both are pointing down. The way to determine which is to check cosTheta. If cosTheta is 1 then the rays are pointing down, if it's -1 then they are pointing up. We can use some clever maths to avoid using any if statements or ternary operators, and still get the right result. Let's write the final version of the code to determine the length of the ray:

float2 LightDist;

if (cosTheta == 0)
{
    LightDist = float2(TextureWidth - LightPos.x, LightPos.x);
}
else if (sinTheta == 0)
{
    LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
}
else
{
    float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

    hit = (hit < 0) ? 2 * TextureWidth : hit;

    LightDist = min(hit.wy, min(hit.x, hit.z));
}

Phew! You'll be pleased to know that that's the hardest part out of the way. Now all that's left is for us to use the information we've gathered so far to determine which pixel on the texture we want to sample. So next up is Step 4) from above, use the y texture coordinate and the length of the ray to find out how far from the light our pixel is. Since the y coordinate is between 0 and 1 we can just multiply it by the length of the ray (as 1 would give us the intersection between the ray and the edge of the texture, and 0 would give us the light itself). So we can add the following line to our shader:

LightDist = mul(LightDist, texCoord.y);

Remember that LightDist is a float2, which means that this line actually calculates the distance from the light for both the pixel on the ray to the left of the light and the pixel on the ray to the right of the light.


Next we move on to Step 5). Here we simply multiply the normals by the correct components of LightDist to get the offset from the light (in horizontal and vertical pixels) of the pixels we want to sample:

norm1 = mul(norm1, LightDist.y);

norm2 = mul(norm2, LightDist.x);

Next up is Step 6). Here we add the position of the light to our offset to get the actual coordinates (in pixels) of the pixels we want to sample. We then convert them to texture coordinates and sample them from the shadow casters texture:

float4 sample1 = tex2D(shadowCastersSampler, float2((LightPos.x + norm1.x) / TextureWidth, (LightPos.y + norm1.y) / TextureHeight));

float4 sample2 = tex2D(shadowCastersSampler, float2((LightPos.x + norm2.x) / TextureWidth, (LightPos.y + norm2.y) / TextureHeight));

In order for this to work we'll need to add a sampler for our shadow caster texture to the top of our shader file:

sampler shadowCastersSampler : register(s0);

So now finally we've sampled the pixels from our shadow caster texture so that we can determine whether or not they are shadow casting pixels. All that remains is for us to store our results for our two rays in the first two channels of our render target. As we described in Step 7), if the pixel we've sampled isn't casting a shadow then we want to store 1 as the result, otherwise we want to store the distance of that pixel from the light divided by the diagonal length of the shadow caster texture:

return float4((sample1.a < 0.01) ? 1 : LightDist.y / DiagonalLength, (sample2.a < 0.01) ? 1 : LightDist.x / DiagonalLength, 0, 1);


Once again we'll need to add a parameter to our shader file for this to work. This time it's the variable DiagonalLength. We could of course calculate this in the shader from the TextureHeight and TextureWidth. However, it would be expensive to calculate this for every single pixel on our render target when we can just calculate it once on the CPU and pass it in as a parameter:

float DiagonalLength;
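Before we look at the completed file, it's worth tracing one concrete example through the shader. Take a 1280 x 720 shadow caster texture, a light at (400, 300), and the unwrapTarget pixel at texCoord (0.25, 0.5). Step 1 gives rayAngle = 0.25 * PI (45 degrees), so sinTheta and cosTheta are both roughly 0.707. For the ray on the left of the light the candidate edge distances work out at about -424 (top edge, negative so discarded), 566 (left edge) and 594 (bottom edge), giving a ray length of about 566. Multiplying by texCoord.y = 0.5 puts our pixel roughly 283 pixels from the light, at an offset of about (-200, 200), i.e. at (200, 500) on the shadow caster texture. If that pixel casts a shadow we store 283 / 1469 (about 0.19) in the red channel; otherwise we store 1.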

And... we're done with our shader! The final code file, fully completed, should look something like this:

#define PI 3.14159265

float2 LightPos;

float TextureWidth;

float TextureHeight;

float DiagonalLength;


sampler shadowCastersSampler  : register(s0);

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float sinTheta, cosTheta;

    sincos((texCoord.x * PI), sinTheta, cosTheta);

    float2 norm1 = float2(-sinTheta, cosTheta);

    float2 norm2 = float2(sinTheta, cosTheta);

    float2 LightDist;

    if (cosTheta == 0)
    {
        LightDist = float2(TextureWidth - LightPos.x, LightPos.x);
    }
    else if (sinTheta == 0)
    {
        LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
    }
    else
    {
        float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

        hit = (hit < 0) ? 2 * TextureWidth : hit;

        LightDist = min(hit.wy, min(hit.x, hit.z));
    }
    
    LightDist = mul(LightDist, texCoord.y);

    norm1 = mul(norm1, LightDist.y);

    norm2 = mul(norm2, LightDist.x);

    float4 sample1 = tex2D(shadowCastersSampler, float2((LightPos.x + norm1.x) / TextureWidth, (LightPos.y + norm1.y) / TextureHeight));

    float4 sample2 = tex2D(shadowCastersSampler, float2((LightPos.x + norm2.x) / TextureWidth, (LightPos.y + norm2.y) / TextureHeight));

    return float4((sample1.a < 0.01) ? 1 : LightDist.y / DiagonalLength, (sample2.a < 0.01) ? 1 : LightDist.x / DiagonalLength, 0, 1);

}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Before we finish we need to add some code to the project we started in the last part of the series to set the shader parameters.


Open up the LightRenderer class file. First of all add the following field to the top of the class:

Vector2 screenDims;

And then add the following to the bottom of Initialize():

screenDims = new Vector2(graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Followed by the following at the top of PrepareResources():

unwrap.Parameters["TextureWidth"].SetValue(screenDims.X);

unwrap.Parameters["TextureHeight"].SetValue(screenDims.Y);

unwrap.Parameters["DiagonalLength"].SetValue(screenDims.Length());

And that's it! We now have a working Unwrap shader for point lights. This generates the input for our CreateOcclusionMap() method, which produces our occlusion map. I've uploaded the solution so far to CodePlex as normal, which you can find here (with some typos fixed from the one I uploaded for the last part in the series - the code in the tutorial itself should be fine, I just failed to copy it correctly!):


Part 4 solution


In the next part of the series we'll see how we use the occlusion map to generate our light map for the scene. See you soon!

Saturday, 4 February 2012

2D Lighting System tutorial series: Part 3 - The Algorithm, and the LightRenderer class



Welcome to the third instalment of this series. In the last part we explained how we will be using light maps to light our scene. In this part we will be talking about the algorithm we will be using to generate these light maps each frame. 

This algorithm will not only deal with lighting pixels based on how far they are from the light, but also which pixels are in shadow based on the parts of the scene that the developer wishes to cast shadows. The details of the implementation of the algorithm will be left for later parts, but there are one or two parts that fit best in this part where I will go into slightly greater detail with some code.

After describing the algorithm I will go over the structure of the LightRenderer class which will be the main way that the developer interacts with the lighting system, and will go over the structure of the code the developer will need to write to use the system once it's finished.

Let's get going!

The algorithm

The idea for this algorithm was inspired by the lighting system described in @CatalinZima's blog (which you can find here: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/). 

This algorithm deviates from @CatalinZima's technique in a number of ways, and also adds extra features such as spot lights, and an unlimited light radius (although I'm sure @CatalinZima's version could easily be adapted to add these features). 

I haven't done any performance testing between my system and @CatalinZima's, however I have attempted to tackle the step in the original version which @CatalinZima highlighted as being the bottleneck of the algorithm. I leave it to the reader to decide which to use (and I encourage you to find ways of improving either or both, and to publish them for the community to benefit from!).

Overview

The basic idea of the algorithm is to cast a large number of rays out from the light in the arc of the light (360 degrees on a point light), and find the first pixel that has a shadow casting object drawn on it for each ray, and keep a record of how far it is from the light. 

Then for each pixel on the screen we determine which ray it falls closest to and test whether it is nearer to the light than the first shadow casting pixel on that ray. If it is, we determine how much light it gets based on factors such as how far from the light it is, and what color the light is. If it isn't then it is in shadow, so we don't light it at all. 

We use this information to create a light map for each light, and then combine them to get an overall light map for the scene, and then finally we light the scene using the shader we wrote in part 2. Since we regenerate the light maps each frame if the lights, shadow casting objects, or background change at all this will be reflected in the lighting/ shadows.


Step 1: Caching the background and shadow casters


First of all, the developer will need to draw all of the sprites that make up the background that will be lit by our lighting system onto a single texture. We will make this as easy as possible for the developer by keeping the interface with our system nice and familiar. We also have to do the same for the shadow casting sprites. It's important for our system that we have one single texture with the background sprites and one single texture for the shadow casting sprites, but we don't want to force the developer to handle this themselves.

Step 2: Casting the rays

For those of you that are thinking 'hang on, this whole method sounds like ray tracing', well, it is, sort of. However the way we'll be approaching this means that the expensive bit of ray tracing, finding the point at which the ray intersects the scenery, can be done for all rays simultaneously using a single draw call (we shall come to that later). 

The first stage in casting the rays is clearly borrowed from @CatalinZima's method. As illustrated below, we transform the scene so that all of the rays are aligned linearly (in our case vertically):








Now, when we 'unwrap' the full set of rays into one channel of a texture, the number of rays we can cast is limited by the horizontal resolution of the texture. Let me explain. Let's revisit the first image above, this time labelling the rays:




Now we unwrap them so that they are all vertical, keeping the labelling:




The maximum number of rays we can cast is limited by the number of 'columns' of pixels in our unwrapped texture, i.e. the horizontal resolution of the texture. However, as we discussed above, all we actually care about is how far from the light the first opaque pixel is for each ray. 

Let's take a step back and unpack that statement. There are two bits of information we need: 1) is the pixel opaque? If it is, we want to know 2) how far from the light it is.

Based on this we can actually encode the information we want for each pixel into a single float as follows:

float result = (sample.a < 0.01) ? 1 : LightDistance;

If the pixel is opaque, store its distance from the light (we'll discuss methods for transforming this into the range 0 - 1 in a moment). Otherwise set it to 1 (the maximum distance from the light).

Then all we need to do is find the minimum value for that column to get the distance of the first opaque pixel from the light (as all the non-opaque pixels will automatically have values further away than the opaque ones). 

So by doing this we can encode the information from each ray into a single texture channel. In a standard texture there are 4 channels, so we could increase the maximum number of rays we can cast to 4 times the horizontal resolution of the texture we're unwrapping to. 

In reality we won't use all 4 channels, for reasons I'll come onto later. In fact we use 2 channels. We unwrap all of the rays to the left of the light to the red channel of the texture and all of the rays to the right of the light to the green channel of the texture. To see what this looks like, the following image is the unwrapped texture of the image above:





A bit psychedelic eh? 

As I mentioned above, we needed to transform the distances from the light so that they sit in the range 0 - 1 (otherwise they will be clamped to the 0 - 1 range by the graphics processor). There are a few ways we could have done this.

I chose to do this by simply taking a value guaranteed to be longer than the distance from the light to any pixel on the screen (the diagonal length of the screen), and dividing by that. Some information is lost, as most of the time the pixel distances will only be represented by a small number of the values between 0 and 1, however the final quality seemed good enough. 

The most accurate would have been to divide the distance by the length of each ray. I discounted this as it would have meant calculating the length of each ray twice, once when unwrapping the rays, and once when using the value to retrieve the distance again when we decide which parts of the ray are in shadow when creating the light map.

A cheap compromise would have been to calculate the longest distance of the light to the furthest corner of the screen on each side (since we store each side of the screen separately) and divide by that. In the worst case it would be the same as our method, but in general would theoretically provide better quality. Were I to write the system again I would probably use this method, as these distances would only need to be calculated once per frame on the CPU. 
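If you're curious what that compromise would look like, here's a rough sketch of calculating the two per-side divisors on the CPU each frame (illustration only, with hypothetical names; it isn't part of the system we're actually building):

// Hypothetical helper: the furthest-corner distance on the left and right of the light.
// Assumes using System; and using Microsoft.Xna.Framework;
static Vector2 MaxCornerDistances(Vector2 lightPos, float screenWidth, float screenHeight)
{
    // The furthest pixel to the left of the light is always one of the two left-hand corners.
    float left = Math.Max(
        Vector2.Distance(lightPos, new Vector2(0, 0)),
        Vector2.Distance(lightPos, new Vector2(0, screenHeight)));

    // Likewise, the furthest pixel to the right is one of the two right-hand corners.
    float right = Math.Max(
        Vector2.Distance(lightPos, new Vector2(screenWidth, 0)),
        Vector2.Distance(lightPos, new Vector2(screenWidth, screenHeight)));

    return new Vector2(left, right);
}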


Edit: Another method, which has only occurred to me since writing this post, would be to divide by the 'radius' or maximum reach of the light, as any values beyond this would be in shadow anyway. This will be something I will look into in the future.

There is one more issue we need to deal with when unwrapping the rays. If we were to just rotate the rays as they are, then we would have lots of columns of different lengths.

There were 3 possible solutions to this. 

The first would be to make our texture resolution big enough to hold the longest ray. Since this length will change as we move the light (and we can't create a new texture each frame without incurring the wrath of the garbage collector on the Xbox) this would need to be equal to the longest value a ray can be, i.e. the diagonal length of the screen. For a 1280 x 720 screen that would mean the texture would need to be about 1469 pixels high.

The second option would be to scale all of the rays down so that the longest ray fits the texture. So if we had a square texture of say 1280 x 1280, then if any ray was longer than 1280 pixels, all of the rays would be compressed to fit into the texture. 

The third option is to scale each ray individually so that it fits into the texture. That way only rays longer than 1280 pixels would be compressed, while those shorter would lose no information. This is the option I went for, as the first would likely end up taking up too much texture memory (particularly for a higher resolution), and the second seemed to unnecessarily penalise shorter rays just because there's a longer ray in the scene.

So by this point we would have a texture for the current light with all of the rays unwrapped, with each pixel holding a distance from the light, scaled down to the range 0 - 1 by dividing by the diagonal length of the screen. 

Step 3) Find the nearest shadow caster

The next step is to find the distance along each ray to the first shadow casting pixel. As discussed above, because of the way we've encoded our values in the texture all we need to do is find the minimum value in each column. @CatalinZima uses a folding technique and a special shader to find the minimum, which he states is probably the bottleneck of his technique. 

We will take a different approach. Ideally what we want to end up with is a texture 1 pixel high with the same width as our unwrapped texture, with each pixel holding the minimum value from the corresponding columns in the unwrapped texture, illustrated below:





We will call this our occlusion map, as it holds, for each ray, the distance at which pixels start to become occluded.

In order to do this, we must look at each 'row' of the unwrapped texture at least once and compare each value to the value already in our occlusion map. If the value is less than the one we already have stored in our occlusion map, then we want to write it to the occlusion map, otherwise we discard it. Unfortunately there isn't (to my knowledge) an easy way of doing this with shaders in a single pass.
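To be completely clear about what we're after, here's what the operation would look like if we did it naively on the CPU (illustration only; reading render targets back to the CPU every frame would be far too slow, which is why we need the trick described below):

// Illustration only: the per-column minimum we want the GPU to compute for us.
// unwrapped[x, y] stands in for the encoded value of ray x at distance step y.
static float[] ColumnMinima(float[,] unwrapped)
{
    int width = unwrapped.GetLength(0);
    int height = unwrapped.GetLength(1);
    float[] occlusion = new float[width];
    for (int x = 0; x < width; x++)
    {
        occlusion[x] = 1f; // 1 means 'no shadow caster found on this ray'
        for (int y = 0; y < height; y++)
            occlusion[x] = Math.Min(occlusion[x], unwrapped[x, y]);
    }
    return occlusion;
}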

However, there is a way we can compare the value we are trying to write to a surface to the value already there, and have the final value depend on some function of the two: Blending.

Aside: Blending and BlendStates 


You've probably already come across blending in XNA. The three types most people come across are Opaque Blending, Alpha Blending and Additive Blending. 


Opaque Blending is probably what you used when you started with XNA. Alpha Blending is what you use when you want to display partially transparent sprites, whereas Additive Blending is used for effects like fire, lightning, and neon glows. I'll take a moment to use these to explain exactly what blending is, before I go on to explain how we use it to find the minimum distance in each column of our texture. 


I'll start with Opaque Blending. 


Opaque Blending is very simple; in fact it's not really even blending per se. When you draw to a surface (a texture, the screen etc), after the shader has determined the color of the pixel it's working on (which we call the Source), it goes to output it to the surface at the correct position, but there is already a color there (the Destination).


Possibly we've already drawn a sprite that covers that pixel, or maybe it's just the color from the GraphicsDevice.Clear() call that we always put at the beginning of Draw() in XNA. 


The Graphics Processor has a choice: does it simply overwrite the Destination color with the Source color, or does it use it somehow to determine the final color that gets stored on the surface?


In Opaque Blending, it takes the first option, and just overwrites the Destination color with the Source color. This means that sprites drawn later that overlap with sprites drawn earlier will appear to be 'in front' of the earlier sprites.


Additive Blending is only slightly more complex. With Additive Blending the Graphics Processor takes the Source color and adds it to the Destination Color. This makes the image in this area seem brighter, as well as blending the colors together. 


Alpha Blending is slightly more complex, and I won't go into great detail here, you can read about it on this MSDN blog: http://blogs.msdn.com/b/etayrien/archive/2006/12/07/alpha-blending-part-1.aspx


In general though, it uses the alpha value of the Source Color to determine how much of the Destination Color to add to it.  We can generalise the blending process to the following equation:

Final Color = BlendFunction(Source * SourceBlendFactor, Destination * DestinationBlendFactor)


Where BlendFunction, SourceBlendFactor, and DestinationBlendFactor can each be one of a set number of options.


E.g. for Additive blending we have:


FinalColor = Add(Source * 1, Destination * 1)


i.e. BlendFunction = Add, SourceBlendFactor = 1 & DestinationBlendFactor = 1.


And for Opaque blending we have:


Final Color = Add(Source * 1, Destination * 0)




Using the Min BlendFunction


For our purposes, we're interested in a different Blend function: Min.


As the name suggests, Min finds the minimum of its two parameters, so the blend function we want is


Final Color = Min(Source * 1, Destination * 1)


This way, if we can draw all of the 'rows' of the unwrapped texture onto our occlusion map texture, we should be left with the minimum value of each column in the corresponding pixel of our occlusion map. What does this look like in code? Like this:

graphics.GraphicsDevice.SetRenderTarget(occlusionMap);

graphics.GraphicsDevice.Clear(Color.White);

spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, sampleState, null, null);

for (int j = 0; j < fullScreen.Width; j++)
{
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, j, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);
}
spriteBatch.End();

During initialization we create the BlendState we want to use:

collapseBlendState = new BlendState();

collapseBlendState.ColorBlendFunction = BlendFunction.Min;

collapseBlendState.AlphaBlendFunction = BlendFunction.Min;

collapseBlendState.ColorSourceBlend = Blend.One;

collapseBlendState.ColorDestinationBlend = Blend.One;

collapseBlendState.AlphaSourceBlend = Blend.One;

collapseBlendState.AlphaDestinationBlend = Blend.One;

Then we begin a SpriteBatch using this BlendState:

spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, sampleState, null, null);

Then, assuming that we've set the graphics device to draw to our occlusion map (more on this later), we simply use a for loop to draw a sprite containing a single row of the unwrapped texture at a time onto the occlusion map:

for (int j = 0; j < fullScreen.Width; j++)
{
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, j, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);
}

And then we simply call End() on the spritebatch:

spriteBatch.End();

Because this is only using a single texture as a source, SpriteBatch should batch this up into a single draw call, meaning that we effectively find the first shadow casting pixel for each ray in one go! Huzzah!




Step 4) Creating the light map


In this step we need to work out how much light each pixel receives from our light and generate a light map, using our occlusion map to determine whether a pixel is in shadow or not. 


We do this using a special shader, which, for each pixel:


1) Determines which ray that pixel lies on


2) Determines the pixel's distance from the light


3) Looks up the distance from the light to the first shadow casting pixel


4) Compares the two distances to determine if the pixel is in shadow: if it is, color it black, if not continue


5) Based on how far it is from the light (and the angle it makes with the light, for spotlights) determine how much light it should get.


6) Multiply the light's color by this value and color the pixel with the result. In reality we use a method which avoids the branching logic of an if statement in step 4, but this is the basic idea; a rough sketch of the per-pixel logic is shown below.
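Here's that sketch, written as plain C# purely to illustrate the idea. The real version is a pixel shader we'll write in a later part; the occlusion lookup helper and the linear falloff used here are hypothetical stand-ins.

// Illustration only: rough per-pixel light map logic. occlusionDistanceForRay is a
// hypothetical stand-in for looking up the occlusion map; the falloff is just one option.
static Color LightMapPixel(Vector2 pixel, Vector2 lightPos, Color lightColor,
                           float lightPower, float diagonal,
                           Func<Vector2, float> occlusionDistanceForRay)
{
    float dist = Vector2.Distance(pixel, lightPos) / diagonal;   // steps 1-2, scaled to 0 - 1
    float occluderDist = occlusionDistanceForRay(pixel);         // step 3
    if (dist > occluderDist)                                     // step 4: pixel is in shadow
        return Color.Black;
    float attenuation = Math.Max(0f, 1f - dist) * lightPower;    // step 5: simple linear falloff
    return lightColor * attenuation;                             // step 6
}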




Step 5) Blurring the light map


As you may know (or can observe), shadows that are close to a light have very crisp edges, whereas shadows further from a light source are more indistinct. 


In reality there are some fairly complex equations that govern this, but I again took a leaf from @CatalinZima's book and used a radial blur to approximate this effect. If we had a lot of processing headroom in our game and wanted our shadows to be a bit more realistic we could do some research and attempt a more complex method to create these soft shadows.


In our case however we simply blur the light map for our light using two shaders, one that blurs the image horizontally, and the other vertically. The shader works by taking a number of samples from the light map around the current pixel, and averaging them to give a final color. 


The more spread out these samples are, the more blurred the image. In our case we use the distance from the light of the current pixel to decide how spread out these samples should be, so that the light map gets more blurred the further from the light it is. 
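As a rough illustration of that idea (plain C# with hypothetical names, and an unweighted average rather than whatever weighting the real shader ends up using):

// Illustration only: a 7-tap horizontal average whose spread grows with distance
// from the light. sampleLightMap is a hypothetical stand-in for a texture fetch.
static Vector4 BlurHorizontally(Func<float, float, Vector4> sampleLightMap,
                                float x, float y, float distanceFromLight, float baseSpread)
{
    float spread = baseSpread * distanceFromLight;   // further from the light = more blurred
    Vector4 sum = Vector4.Zero;
    for (int i = -3; i <= 3; i++)
        sum += sampleLightMap(x + i * spread, y);
    return sum / 7f;                                 // simple unweighted average of the 7 samples
}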




Step 6) Adding the light maps together


Once we've completed steps 1 - 4 with our light we can add the result to a texture representing our final light map. Because in the real world lights work additively (if you have two lights lighting the same point the resulting light reflected is the sum of the two lights) we can simply additively blend the light maps together into the final light map texture.
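In code, that accumulation boils down to something like the following (a sketch only; lightMapTemp is a hypothetical render target holding the current light's blurred map, and the real method will be fleshed out later in the series):

// Hypothetical sketch: additively blend one light's map onto the accumulated light map.
graphics.GraphicsDevice.SetRenderTarget(lightMap);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);

spriteBatch.Draw(lightMapTemp, Vector2.Zero, Color.White);

spriteBatch.End();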




Step 7) Render the final scene


Finally we draw our background lit by the light map using the light blend shader we wrote in Part 2, followed by the shadow casters in the scene, and any foreground sprites that are not affected by the light system (e.g. a Heads Up Display). 


We will expand on each of these stages as we get to them in the series, but that should give you an outline of the algorithm our light system uses. 


For the rest of the post we will talk about the main classes that we will be writing that the developers using our lighting system will need to interact with, and how their game code will use the lighting system.




The LightRenderer class


Fire up Visual Studio and create a new Windows Game project, which we shall call "2DLightingSystem". Visual Studio will create the default files with a namespace of '_2DLightingSystem'.


Once it's been created we need to create 4 new classes. 


The first 3 will just be stubs that we'll flesh out later on in the series, so create 3 empty code files called "Light.cs", "PointLight.cs", and "SpotLight.cs", and add the following code to each respectively:

using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public abstract class Light
    {
        public Vector2 Position;
        public float Power = 0f;
        public Color color;
        public float radius = 0f;

        public Light(Vector2 pos)
           : this(pos, 0f, Color.White)
        {
        }

        public Light(Vector2 pos, float power, Color color)
        {
            Position = pos;
            Power = power;
            this.color = color;
        }
    }
}


using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public class PointLight : Light
    {
        public PointLight(Vector2 pos)
            : base(pos, 0f, Color.White)
        {
        }

        public PointLight(Vector2 pos, float power, float radius)
            : base(pos, power, Color.White)
        {
            this.radius = radius;
        }
  
        public PointLight(Vector2 pos, Color color)
            : base(pos, 0f, color)
        {
        }

        public PointLight(Vector2 pos, float radius)
           : base(pos, 0f, Color.White)
        {
            this.radius = radius;
        }

        public PointLight(Vector2 pos, float power, float radius, Color color)
            : base(pos, power, color)
        {
            this.radius = radius;
        }
    }
}


using System;

using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public class SpotLight : Light
    {
        public Vector2 direction;
        public float innerAngle;
        public float outerAngle;

        public SpotLight(Vector2 pos, Vector2 dir, float inner, float outer, float power, float _radius, Color col)
            : base(pos, power, col)
        {
            direction = dir;
            innerAngle = inner;
            outerAngle = outer;
            radius = _radius;
        }

        public SpotLight(Vector2 pos, Vector2 dir, float angle, float power, Color col)
            : base(pos, power, col)
        {
            direction = dir;
            innerAngle = angle;
            outerAngle = angle;
        }


        public float GetAngleBias()
        {
            float diffAngle = (float)Math.Acos(Vector2.Dot(direction, Vector2.UnitY));
            if (float.IsNaN(diffAngle))
                 diffAngle = (float)(((Math.Sign(-direction.Y) + 1) / 2f) * Math.PI);
            if (diffAngle - (outerAngle / 2f) < 0)
                return 0;
            return MathHelper.Pi * 2f;
        }
    }
}

Next up create an empty file called "LightRenderer.cs", and add the following stub to it: 

using System;

using System.Collections.Generic;

using Microsoft.Xna.Framework;

using Microsoft.Xna.Framework.Graphics;

using Microsoft.Xna.Framework.Content;

namespace _2DLightingSystem
{
    public class LightRenderer
    {

    }
}

We'll need to keep hold of a reference to the GraphicsDeviceManager, so add the following to the top of the class:

public GraphicsDeviceManager graphics;

And create the following constructor:

public LightRenderer(GraphicsDeviceManager _graphics)
{
    graphics = _graphics;
}

The first thing we need to do is decide how our developer will interface with the system. Since this system is built on the assumption that the developer is familiar with XNA, they will likely also be familiar with SpriteBatch's Begin(), Draw(), End() pattern, so we shall borrow from that.


As mentioned in Step 1 above, we will need the developer to draw all of their background sprites onto a single texture, and all of their shadow casting sprites onto another. In order to signal this to the developer, we will require them to use the following pattern:

BeginDrawBackground(); 

spriteBatch.Begin();

//Developer draws sprites with spritebatch

spriteBatch.End(); 

EndDrawBackground();

BeginDrawShadowCasters();

spriteBatch.Begin();

//Developer draws sprites with spritebatch

spriteBatch.End();

EndDrawShadowCasters();



So let's create stubs for these methods in our LightRenderer class:

public void BeginDrawBackground()
{
}

public void EndDrawBackground()
{
}

public void BeginDrawShadowCasters()
{
}

public void EndDrawShadowCasters()
{
}

We'll come back to what we need to do in these methods shortly.


After the developer has done this, we have (almost) everything we need to draw the scene, with the exception of the foreground, so we will let the developer just use a simple method call to make that happen:

DrawLitScene();

Then the developer can continue to draw any sprites, such as a foreground, HUD etc as usual. 


The final missing element for the developer is a way to add lights to the scene. To keep this simple we will just keep two public lists of the lights (one for point lights and one for spot lights), which the developer can manipulate as they wish. All we need to add is the following at the top of our LightRenderer class:

public List<SpotLight> spotLights;

public List<PointLight> pointLights;



And that's it for the public interface to the LightRenderer class. 
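To see how that public interface hangs together, here's roughly what the developer's code might end up looking like once the system is finished (a sketch only: it assumes the lists and the renderer have been constructed and initialised, and backgroundTexture, crateTexture and cratePosition are hypothetical stand-ins for the developer's own content):

// Somewhere in setup: add a light to the (already initialised) list.
lightRenderer.pointLights.Add(new PointLight(new Vector2(400, 300), 1f, 600f, Color.White));

// In Draw():
lightRenderer.BeginDrawBackground();
spriteBatch.Begin();
spriteBatch.Draw(backgroundTexture, Vector2.Zero, Color.White);
spriteBatch.End();
lightRenderer.EndDrawBackground();

lightRenderer.BeginDrawShadowCasters();
spriteBatch.Begin();
spriteBatch.Draw(crateTexture, cratePosition, Color.White);
spriteBatch.End();
lightRenderer.EndDrawShadowCasters();

lightRenderer.DrawLitScene();

// HUD, foreground etc. are drawn as normal after this point.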


Next we'll flesh out some of these methods a bit, leaving space for us to come back later in the series to add code that we're not quite ready for yet. 


We'll start with BeginDrawBackground(). Before the developer can start drawing their background sprites, we need to make sure they're drawing to our texture, and not the screen. Before we write the code for this, we'll need a texture. More specifically, we need a special kind of texture that we can draw to, called a render target or, in XNA, a RenderTarget2D.


So add the following to the top of the class:

public RenderTarget2D backBufferCache;

And create an Initialize() method containing the following code:

public void Initialize()
{
    backBufferCache = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);
}

So now we have a render target, we need to make sure that when the developer starts drawing their background sprites the Graphics Processor draws them to our render target and not the screen. 


We do this by adding the following at the start of BeginDrawBackground():

graphics.GraphicsDevice.SetRenderTarget(backBufferCache);

This code does exactly as you'd expect, it tells the Graphics Processor that from now on it should draw to our render target, and not the screen. 


And we're done with BeginDrawBackground(). 


For EndDrawBackground(), there's actually nothing we need to do. You could in fact omit it entirely. The reason I've left it in is that it demarcates the area in which the developer should draw their background sprites, and matches the API pattern they're used to with XNA and SpriteBatch.


Next up is BeginDrawShadowCasters(). Once again we'll need a RenderTarget2D for the developer to draw to, so add the following at the top of the class:

public RenderTarget2D midGroundTarget;

And the following to Initialize():

midGroundTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Then in the method we tell the GraphicsDevice to use our new RenderTarget2D. 


Note: by setting another render target on the GraphicsDevice using SetRenderTarget() we are implicitly un-setting the current render target. We also need to clear the new render target. The developer shouldn't clear it themselves, as they are effectively drawing the mid-ground of their scene, and clearing the target at this point would normally mean erasing their background (it would have a similar effect here).


However the render target does need to be cleared, as uninitialized bits of render target default to a lovely shade of purple. In our case we want to clear it to Color.Transparent, as we want the alpha values of all non-shadow casting pixels to be 0. So we add the following to our BeginDrawShadowCasters() method:

graphics.GraphicsDevice.SetRenderTarget(midGroundTarget);
graphics.GraphicsDevice.Clear(Color.Transparent);

And we're done with BeginDrawShadowCasters(). 


Once again, EndDrawShadowCasters() is empty, but again, we want to use the familiar Begin...End pattern.


Now we move on to the heavy lifter of our system - DrawLitScene(). For now most of this method will just refer to method stubs, or else use place-holder comments, and we will flesh them out later in the series. For now though, the structure of the method is going to look like this:

public void DrawLitScene()
{
    //Error checking
    //.....
    //

    PrepareResources();

    for (int i = 0; i < spotLights.Count; i++)
    {
        //Spotlight specific calculations
        //.....
        //

        UnwrapShadowCasters(spotLights[i] /*,other params*/);

        CreateOcclusionMap();

        CreateLightMap(spotLights[i] /*,other params*/);

        BlurLightMaps(spotLights[i]);

        AccumulateLightMaps(spotLights[i]);
    }

    for (int i = 0; i < pointLights.Count; i++)
    {
        UnwrapShadowCasters(pointLights[i]);
        
        CreateOcclusionMap();

        CreateLightMap(pointLights[i]);

        BlurLightMaps(pointLights[i]);

        AccumulateLightMaps(pointLights[i]);
    }

    RenderFinalScene();
}




Let's have a quick look at each part of this. First up is some error checking in case our developer has set some nonsense values for some of the parameters. In this series I haven't bothered with throwing exceptions or returning error codes, but obviously you'd want to follow your normal error checking strategy in a game to be released.


Next up is PrepareResources(). This is mostly concerned with setting the parameters in our shaders that hold true for all lights in the scene, such as scene dimensions, shadow bias (we will discuss what that is in a later part) etc. 


This could actually all be done outside of the rendering loop, using properties to update the Effect objects when the developer changes them, but we're being slightly lazy setting them each frame. 


Also in PrepareResources() we'll need to set the first of a number of RenderTarget2D's that we'll be using throughout the method, so add the following to the beginning of the class:

public RenderTarget2D lightMap;

Then add this to Initialize(): 

lightMap = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height, false, SurfaceFormat.Color, DepthFormat.None, 1, RenderTargetUsage.PreserveContents);

That's a slightly different RenderTarget2D constructor than we're used to. In particular the last parameter sets the RenderTargetUsage for our render target. In our case we're setting it to RenderTargetUsage.PreserveContents. This tells the Graphics Processor not to throw away the current contents of the render target when we set it as the active target (which it normally does by default). This is important because, as discussed above, we want to add the light maps of each of our lights in turn to the final light map.


Then create the PrepareResources() method like so:

private void PrepareResources()
{
    //Set effect parameters
}



At the beginning of the method we will eventually be setting some parameters in our shaders, however for now we'll just leave the place-holder as above.


Next, add this code to the end of the method to set and clear our render target:

graphics.GraphicsDevice.SetRenderTarget(lightMap);
graphics.GraphicsDevice.Clear(Color.Black);

And that's it for PrepareResources(). 


The next parts of our DrawLitScene() method are those actions that we need to do per light. 


First of all we loop through our list of spot lights and calculate some values that we'll need to pass to our shaders. We'll cover these in the part of the series on spot lights.


The first call in our loop through the spot lights list is to UnwrapShadowCasters(), which takes the current light as a parameter along with the values that we will eventually be calculating at the start of the loop. 


There are actually 2 different versions of UnwrapShadowCasters() that take different parameters. For now we can distinguish between them by the first parameter, as one takes a spot light and the other a point light. Also for now they will both hold the same contents, so you can create 2 stubs with the same code:

private void UnwrapShadowCasters(SpotLight sLight /*,other params*/)
{
    graphics.GraphicsDevice.SetRenderTarget(unwrapTarget);
    graphics.GraphicsDevice.Clear(Color.Transparent);

    //More setting of effect parameters

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, unwrapSpotlight);
    spriteBatch.Draw(midGroundTarget, new Rectangle(0, 0, fullScreen.Width, fullScreen.Width), Color.White);
    spriteBatch.End();
}

private void UnwrapShadowCasters(PointLight pLight)
{
    graphics.GraphicsDevice.SetRenderTarget(unwrapTarget);
    graphics.GraphicsDevice.Clear(Color.Transparent);

    //Set effect parameters

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, unwrap);
    spriteBatch.Draw(midGroundTarget, new Rectangle(0, 0, fullScreen.Width, fullScreen.Width), Color.White);
    spriteBatch.End();
}

Visual Studio will now complain about several of the objects we've referenced in that snippet, as most of them don't exist yet! Let's fix that. 


First, add the following declarations at the top of the class:

public RenderTarget2D unwrapTarget;

public Effect unwrapSpotlight;

public Effect unwrap;

Rectangle fullScreen;

SpriteBatch spriteBatch;

followed by the following in Initialize():

unwrapTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Width, false, SurfaceFormat.HdrBlendable, DepthFormat.None);

fullScreen = new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

and then add a LoadContent() method with the following content:

public void LoadContent(ContentManager Content)
{
    spriteBatch = new SpriteBatch(graphics.GraphicsDevice);

    unwrap = Content.Load<Effect>(@"Effects\Unwrap");

    unwrapSpotlight = Content.Load<Effect>(@"Effects\UnwrapSpotlight");
}

First let's look at the spotLight version of UnwrapShadowCasters(). The unwrapSpotlight Effect is exactly what you'd expect, the shader that handles the 'unwrapping' of the rays that we discussed earlier. The full screen rectangle just caches the screen dimensions, as we use them a large number of times throughout the class. 


The unwrapTarget RenderTarget2D is where we will be storing the unwrapped rays so that we can use them in the next stage of the algorithm. This target is slightly different to the others so far. Rather than go into the reasons for this here, I will discuss them at the end of the post.


The same goes for the SamplerState parameter in spriteBatch.Begin(). For now, all you need to know is that because we are using a different type of RenderTarget2D, we need to sample it (i.e. pick out individual pixels) in a slightly different way. 


Now let's look at what will be in the point light version of the method. As you can see, the only new object is the unwrap Effect.  As you can probably guess, this is the equivalent unwrap shader for point lights that we shall be looking at later in the series.


Returning to DrawLitScene(), the next method that we indicated that we'd be calling is CreateOcclusionMap(). This method is the same for both spot lights and point lights, and doesn't take any parameters. 


As we discussed earlier, once we've unwrapped our rays into a texture so that each ray is represented by a column of the texture, we use a special blend state to find the minimum value in each column. So first up, let's create the method stub:

private void CreateOcclusionMap()
{
    
}

Next we need our RenderTarget2D, which needs to be the same size as a single row of the unwrap texture, i.e. the same width but only 1 pixel high. Let's add a declaration for it at the top of the class:

public RenderTarget2D occlusionMap;

with the following in Initialize():

occlusionMap = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, 1, false, SurfaceFormat.HdrBlendable, DepthFormat.None);

Back in CreateOcclusionMap(), the first thing we need to do is set our occlusionMap texture as the active render target:

graphics.GraphicsDevice.SetRenderTarget(occlusionMap);
graphics.GraphicsDevice.Clear(Color.White);


Next, let's create our special BlendState. First let's declare our BlendState at the top of the class:

public BlendState collapseBlendState;

I've named this the collapseBlendState because each column 'collapses' down to the minimum value of any pixel in that column. To create the blend state we need the following in Initialize():

collapseBlendState = new BlendState();
collapseBlendState.ColorBlendFunction = BlendFunction.Min;
collapseBlendState.AlphaBlendFunction = BlendFunction.Min;
collapseBlendState.ColorSourceBlend = Blend.One;
collapseBlendState.ColorDestinationBlend = Blend.One;
collapseBlendState.AlphaSourceBlend = Blend.One;
collapseBlendState.AlphaDestinationBlend = Blend.One;

As you can see, we have various fields in collapseBlendState. Above we discussed the following equation:


Final Color = BlendFunction(Source * SourceBlendFactor, Destination * DestinationBlendFactor)

Now, you might be slightly confused by the fact that instead of a single BlendFunction, we have ColorBlendFunction and AlphaBlendFunction. The reason for this is that we can specify different functions for the rgb and the alpha components of the colors that we're blending. In other words, the actual equations look something like this:


FinalColor.rgb = ColorBlendFunction(Source.rgb * ColorSourceBlend, Destination.rgb * ColorDestinationBlend);


FinalColor.a = AlphaBlendFunction(Source.a * AlphaSourceBlend, Destination.a * AlphaDestinationBlend);


In our case, for simplicity, we will set alpha to behave in the same way as rgb. Also notice that our SourceBlendFactor has been split into ColorSourceBlend and AlphaSourceBlend, and that DestinationBlendFactor has been split into ColorDestinationBlend and AlphaDestinationBlend.


For us we want our equations to be:


FinalColor.rgb = Min(Source.rgb * 1, Destination.rgb * 1);


FinalColor.a = Min(Source.a * 1, Destination.a * 1);


So we set the values in our blend state accordingly. 
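To make the effect of this blend state concrete, here is a quick CPU-side illustration of what happens to a single channel of a single pixel when a row of the unwrap texture is drawn onto the occlusion map (the values are made up purely for the example):

//Illustration only - this is what the GPU does for us per channel with collapseBlendState
float source = 0.35f;        //value being drawn from the current row of the unwrap texture
float destination = 0.6f;    //value already sitting in the occlusion map
float result = Math.Min(source * 1f, destination * 1f);    //0.35f - the smaller value wins

After every row has been drawn, each pixel of the occlusionMap therefore holds the smallest value that appeared anywhere in its column, i.e. the scaled distance to the nearest shadow caster along that ray (or 1 if nothing in the column casts a shadow).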


Returning to CreateOcclusionMap(), we now need to use our blend state with SpriteBatch to draw each row of the unwrap texture one after the other onto our render target:

spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, SamplerState.PointClamp, null, null);
for (int i = 0; i < fullScreen.Width; i++)
{
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, i, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);
}
spriteBatch.End();

Note that our unwrap texture was fullScreen.Width high as well as wide, which is why we are looping up to fullScreen.Width. And that's it for CreateOcclusionMap(). 


Next up is CreateLightMap(). Much like UnwrapShadowCasters(), there are two versions of this method, one for spot lights and one for point lights. Once again, the spot light version takes as parameters some of the values that we will be calculating at the beginning of the spot light loop in DrawLitScene(). Let's create the stubs for both versions of the method:

private void CreateLightMap(SpotLight sLight /*,other params*/)
{
    
}

private void CreateLightMap(PointLight pLight)
{

}

In fact, for the moment there will only be one difference between our two methods. The contents of the spot light version of CreateLightMap() look like this:

graphics.GraphicsDevice.SetRenderTarget(postProcessTarget);
graphics.GraphicsDevice.Clear(Color.Black);

//Set params

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, spotLight);
spriteBatch.Draw(occlusionMap, fullScreen, Color.White);
spriteBatch.End();

This should all be fairly self-explanatory by now, but as always, we need to create and initialize a few variables so that we don't run into problems later. Add the following to the top of the class:

public RenderTarget2D postProcessTarget;

public Effect spotLight;

And then this to Initialize():

postProcessTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Followed by this to LoadContent():

spotLight = Content.Load<Effect>(@"Effects\SpotLight");

The render target is called postProcessTarget because it's going to act as the source for the various processes that we will be performing on the light map after it's been rendered, i.e. post-processing. 


The spotLight effect will be using our occlusionMap to create the light map for this light. Similarly, the contents of the point light version will be the following:

graphics.GraphicsDevice.SetRenderTarget(postProcessTarget);
graphics.GraphicsDevice.Clear(Color.Black);

//Set params

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, pointLight);
spriteBatch.Draw(occlusionMap, fullScreen, Color.White);
spriteBatch.End();

As you can see, the only difference is that we're using the pointLight Effect instead of the spotLight Effect, as the two create different light maps (as you'd expect!). Before we move on we need to declare and initialize our pointLight Effect. Add the following to your other declarations:

public Effect pointLight;

And this line to LoadContent():

pointLight = Content.Load<Effect>(@"Effects\PointLight");

And we're done for this part with CreateLightMap(). Obviously we'll be coming back to these methods later in the series.


In DrawLitScene() once more, the next method we'll call is BlurLightMaps(). There is actually only one version of this, and it takes a Light object as a parameter (recall both SpotLight and PointLight inherit from Light). 


Our BlurLightMaps() method is going to look something like this:

private void BlurLightMaps(Light light)
{
    graphics.GraphicsDevice.SetRenderTarget(horizontalBlurTarget);
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    //Set some params

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, horizontalBlur);
    spriteBatch.Draw(postProcessTarget, fullScreen, Color.White);
    spriteBatch.End();

    graphics.GraphicsDevice.SetRenderTarget(verticalBlurTarget);
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    //Set some more params

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, verticalBlur);
    spriteBatch.Draw(horizontalBlurTarget, fullScreen, Color.White);
    spriteBatch.End();
}

As you can see we have two more render targets and two more effects to add declarations for:

public RenderTarget2D horizontalBlurTarget;

public RenderTarget2D verticalBlurTarget;

public Effect horizontalBlur;

public Effect verticalBlur;

along with code in Initialize():

horizontalBlurTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

verticalBlurTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

and code in LoadContent():

verticalBlur = Content.Load<Effect>(@"Effects\VerticalBlur");

horizontalBlur = Content.Load<Effect>(@"Effects\HorizontalBlur");

This should all be pretty much self-explanatory by now. As discussed above, we blur the lightMap first in one direction, and then the other. 


One point to note is that we need two different render targets. The reason for this is that we need the results of the horizontal blur in order to do the vertical blur, and a texture can't be set as the render target and be sampled as a texture at the same time, so we use two separate targets.


Back in DrawLitScene() and we're onto the last method that we'll be calling from within our two loops - AccumulateLightMaps(). 


Again there is only one version of this method. The idea of this method is to 'add' our light map onto a single, final light map which we will eventually use to light the scene. It will look something like this:

private void AccumulateLightMaps(Light light)
{
    graphics.GraphicsDevice.SetRenderTarget(lightMap);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, null, null, null);
    spriteBatch.Draw(verticalBlurTarget, fullScreen, light.color);
    spriteBatch.End();
}

As you can see, it's remarkably simple. We already declared and initialized the render target when we cleared it in PrepareResources. The only part we need to explain is the use of light.color in spriteBatch.Draw(). As you can probably work out, by passing the color of the light to spriteBatch we are tinting the lightmap in the color of the light.


Since the light at each point needs to be added together to get the correct value, we can just use additive blending as described above, and blend the light maps for each light onto the final lightMap target. 
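If it helps, here is a CPU-side sketch of what that Draw call works out to for a single pixel (the values are invented for the example, and I've assumed the light's colour is available as a Vector3 here):

//Tint the blurred light map by the light's color, then add it onto the accumulated light map
Vector3 blurredLight = new Vector3(0.5f, 0.5f, 0.5f);    //sampled from verticalBlurTarget
Vector3 lightColor = new Vector3(1.0f, 0.8f, 0.6f);      //the light's color
Vector3 tinted = blurredLight * lightColor;              //component-wise multiply = tinting
Vector3 existing = new Vector3(0.2f, 0.1f, 0.0f);        //already in lightMap
Vector3 accumulated = existing + tinted;                 //what BlendState.Additive produces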


We should note at this point that if we're tight on memory we can easily reuse some of these render targets for different purposes at different points in the algorithm. For our purposes we'll stick with one target per usage, just to keep things clear. 


We return to DrawLitScene() one final time. After we have closed our second loop we have just one more method to call - RenderFinalScene(). By this point we have our final lightMap, and so all that remains is to use our lightBlend shader from part 2 to render the final scene. If you read part 2 this should all look familiar, so I won't dwell on it for long. The contents of RenderFinalScene() look like this:

private void RenderFinalScene()
{
    graphics.GraphicsDevice.SetRenderTarget(null);
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    graphics.GraphicsDevice.Textures[1] = lightMap;
    lightBlend.Parameters["MinLight"].SetValue(minLight);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlend);
    spriteBatch.Draw(backBufferCache, fullScreen, Color.White);
    spriteBatch.End();
    graphics.GraphicsDevice.Textures[1] = null;

    spriteBatch.Begin();
    spriteBatch.Draw(midGroundTarget, fullScreen, Color.White);
    spriteBatch.End();
}


Obviously we'll need to declare all of the resources we used in part 2:

float minLight = -1f;

public Effect lightBlend;

We will set the value of minLight from within our game code, which we'll write later in the series. Next we need to load the effect in LoadContent():

lightBlend = Content.Load<Effect>(@"Effects\LightBlend");

We'll add the game code to load and draw the textures later in the series, but for now you can add the textures and effect file to the content project. You should create one sub-folder entitled 'Textures', and a second called 'Effects'. You can find the textures here:

Midground texture


Background Texture


And for those of you who didn't follow along in Part 2, the code for the LightBlend shader is here:


LightBlend shader


This shader just linearly interpolates the values in the lightMap so that they lie between MinLight and 1, where MinLight is a value specified by the developer, and is the color that the parts of the scene in shadow get multiplied by. It then multiplies the background texture by this new value.
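As a rough sketch of that remapping for a single pixel (not the actual shader code - just the same arithmetic on the CPU, with made-up values):

//Remap the light map value from the range [0, 1] to [MinLight, 1]
float minLight = 0.2f;           //the developer-specified MinLight value
float lightMapValue = 0.5f;      //sampled from the light map
float brightness = minLight + lightMapValue * (1f - minLight);    //0.6f
//finalColor = backgroundColor * brightness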


And that's it for LightRenderer! 


We will revisit bits of the code as we go through the series to add in the necessary parameters after we've written each of the shaders. 


Unfortunately you won't be able to build and run your code until the end of the series, but save the project as I will assume that you've followed along when we come to adding code to the class later in the series.  If you want to double check your code I've uploaded the full source of the project up to this point here:


2DLightingSystem (Part 3)



Before we stop, there's one more thing I promised I'd talk about - Different types of render target.




Render target formats and precision


When we store colors in a texture, the range of colors we can store is limited by how many bits (a bit being a data item that can be either 1 or 0) we use to store the color of each pixel. 


For example, if we only had 1 bit for each of r, g, and b, then we could only have the following colors: Black (0, 0, 0), Blue (0, 0, 1), Green (0, 1, 0), Cyan (0, 1, 1), Red (1, 0, 0), Magenta (1, 0, 1), Yellow (1, 1, 0), White (1, 1, 1). A whole 8 colors! By contrast, if you had 8 bits (also known as a byte) for each of r, g, and b, then you could show 16,777,216 colors! 


This is the default amount of storage each pixel gets in XNA. In general, the more bits you have to store the color, the more precisely you can represent a given color, i.e. the higher the precision.
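If you want to check that arithmetic, the number of values a channel can hold doubles with every extra bit:

//256 values per 8-bit channel, and the three channels multiply together
int valuesPerChannel = 1 << 8;                                               //2^8 = 256
int totalColors = valuesPerChannel * valuesPerChannel * valuesPerChannel;    //16,777,216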


The different ways of storing data in a texture are called texture formats, or in the case of render targets, render target formats. The standard XNA format is called 'Color'.


We have the same concern when storing the distance of the first shadow-casting pixel from the light. If we use a render target format with too low a precision, then we'll be limited as to how precisely we can measure how far a pixel is from the light. This could lead to very jagged-looking shadows, which would be particularly noticeable if the shadow-casting object is very thin. The effect would look something like this (blurring turned off for clarity):






Note the jagged edges along curved shadow casters that are close to the light. 


In fact, the normal Color format isn't quite enough for our purposes. Since we are only storing our values in one or two channels, we only have 256 different values to represent our distance, stored at even intervals between 0 and 1. 


In the worst case, when our light is in the corner of the screen, a ray could need to cover the diagonal distance across the screen, which at 720p is ~1469 pixels. This means that we could only be accurate to the nearest ~6 pixels. However, if your shadow starts 6 pixels too close to or 6 pixels too far away from the light, in some circumstances this could be quite noticeable. 


To solve this problem we have a couple of options. 


The first that I considered was to somehow use the extra channels in the texture to encode higher precision into the standard texture format. We could do this in many ways, some of which are simple but not very efficient, and some of which are more accurate but cost us extra complexity in our shaders, as well as making the system harder to understand. 


In a production environment, if we were short on memory budget for textures then I would use one of these methods.


A simpler way is to choose a render target format which offers more precision. If we want to guarantee that our system will work on PC and Xbox there is only one choice - the HdrBlendable format. HDR stands for 'high dynamic range', and is used to create certain effects in 3D games. 


To do this it requires higher precision, which is exactly what we need! Looking at the XNA docs, HdrBlendable gives 16 bits to each of r, g, b, and alpha channels on PC, and 10 for each of r, g, b, and 2 for alpha on the Xbox. 


The Xbox version may not seem much better than the 8 bits we had in the Color format, but it actually allows us 4 times the precision (for every extra bit in a channel, you double the number of values you can represent in that channel). 


This means that on the Xbox we can represent 1024 different distances on our ray, giving us worst case accuracy of ~1.5 pixels on our rays. This is probably acceptable, as the worst case will only occur rarely in most scenes. On the PC we can easily be precise enough to represent pixel-perfect shadows.
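Here is the back-of-the-envelope arithmetic behind those accuracy figures, assuming a 1280x720 screen as above:

//Worst case: the light sits in a corner, so a ray spans the full screen diagonal
double diagonal = Math.Sqrt(1280.0 * 1280.0 + 720.0 * 720.0);    //~1469 pixels
double colorFormatError = diagonal / 256.0;       //8 bits per channel: ~5.7 pixels
double hdrXboxError = diagonal / 1024.0;          //10 bits per channel: ~1.4 pixels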


For this reason, any texture in which we store the distance of a pixel from the light needs to use the HdrBlendable format. However, this raises another issue - sampling.


Without going into too much detail, sampling is the term that describes how, given a set of texture coordinates, the Graphics Processor decides what the color of the texture is at that point. Now imagine we have a texture that is precisely half Red and half Blue, divided left and right, like so:






Now imagine we're sampling texture coordinates (0.5, 0.5). We're exactly in between the two halves of the texture, and so half-way between the two colors. The Graphics Processor needs to decide which color to display, or some mixture of the two. 


One option is to choose to round down or round up, and pick one of the colors. In general this method chooses the nearest pixel to our coordinates to give the final sampled color. This is called Point sampling. 


Our other option is to use some method of mixing the values of the pixels we're between to get our final color. Choosing the nearest pixel to the texture coordinates can give us slightly jagged edges around high-contrast areas of our textures. For example, if we have a diagonal black line running across a white background, the edge of the line should run through individual pixels, but each of those pixels needs to be either white or black, not half and half as it would be in reality. 


In general XNA uses one of a number of methods to take a kind of average of the nearby pixels to determine the final sampled color. However, this only works on textures that have the Color format. For HdrBlendable textures we can only use point sampling, hence the need for a special sampler state when we use this format for our render targets.




Onwards


That's it for this part. We've talked over the outline of the algorithm that our system will be using, described the way that a developer using our system will interact with it, and coded a skeleton of the main class that our system relies on. 


Over the next few parts we'll be focusing on writing the shaders for the various stages of the algorithm, adding the necessary code to our LightRenderer class as we go along.


'til next time!