Edge detection is an algorithm that comes in handy from time to time in every graphics programmer's life. There are various approaches to the problem and various tasks that we need edge detection for. For instance, we might use edge detection to perform anti-aliasing (as in http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html) or to draw some stylish outlines. So how can we detect the edges? Some algorithms compare color differences between the pixel we are currently processing and its neighbours (like the anti-aliasing algorithms FXAA and MLAA). Others, like the one I linked a few words before, use the rendered scene's geometry information, such as the depth buffer and normals. In this post I present a variant of the latter kind of algorithm.
The algorithm I use needs three pixels: the one we're processing and two of its neighbours lying along different axes. You can take the one to the left along with the one to the top, for instance. For each of those pixels we read the normal vector (either in view or world space) and the depth. We also recover view- or world-space positions from the sampled depths (I showed how to do this here https://wojtsterna.blogspot.com/2013/11/recovering-camera-position-from-depth.html).
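For completeness, the depth-to-position step can be sketched like this. This is a minimal CPU-side illustration assuming the ray-based reconstruction technique (scale a view-space ray through the pixel by the linear depth); the `float3` struct and the `ViewPositionFromDepth` name are my own, not from the linked post:

```cpp
#include <cstdio>

struct float3 { float x, y, z; };

// viewRay: view-space direction through the pixel, scaled so that its z
// component equals 1; linearDepth: linear (view-space) depth of the pixel.
float3 ViewPositionFromDepth(float3 viewRay, float linearDepth)
{
    // scaling the ray by the linear depth lands exactly on the surface point
    return { viewRay.x * linearDepth,
             viewRay.y * linearDepth,
             viewRay.z * linearDepth };
}
```

For example, a ray of (0.5, -0.25, 1) with a linear depth of 10 reconstructs the view-space position (5, -2.5, 10).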
So, to sum up at this point, we have three (I assume view space) normals ($N$, $N_{left}$ and $N_{top}$) and positions ($P$, $P_{left}$ and $P_{top}$). Here comes the shader code:
bool IsEdge(
    float3 centerNormal, float3 leftNormal, float3 topNormal,
    float3 centerPosition, float3 leftPosition, float3 topPosition,
    float normalsDotThreshold, float distanceThreshold)
{
    // normals dot criterion
    float centerLeftNormalsDot = dot(centerNormal, leftNormal);
    float centerTopNormalsDot = dot(centerNormal, topNormal);
    if (centerLeftNormalsDot < normalsDotThreshold ||
        centerTopNormalsDot < normalsDotThreshold)
    {
        return true;
    }

    // distances difference criterion
    float3 v1 = leftPosition - centerPosition;
    float3 v2 = topPosition - centerPosition;
    if (abs(dot(centerNormal, v1)) > distanceThreshold ||
        abs(dot(centerNormal, v2)) > distanceThreshold)
    {
        return true;
    }

    return false;
}

A good starting value for normalsDotThreshold is $0.98$ and for distanceThreshold it's $0.01$.
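If you want to play with the logic outside a shader, the same function ports directly to the CPU. Here is a sketch where the `float3` struct and the `dot`/`operator-` helpers are my own scaffolding standing in for the HLSL intrinsics:

```cpp
#include <cmath>

// CPU-side port of the IsEdge shader function for quick experimentation.
struct float3 { float x, y, z; };

static float3 operator-(float3 a, float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool IsEdge(
    float3 centerNormal, float3 leftNormal, float3 topNormal,
    float3 centerPosition, float3 leftPosition, float3 topPosition,
    float normalsDotThreshold, float distanceThreshold)
{
    // normals dot criterion
    if (dot(centerNormal, leftNormal) < normalsDotThreshold ||
        dot(centerNormal, topNormal) < normalsDotThreshold)
        return true;

    // distances difference criterion
    float3 v1 = leftPosition - centerPosition;
    float3 v2 = topPosition - centerPosition;
    if (fabsf(dot(centerNormal, v1)) > distanceThreshold ||
        fabsf(dot(centerNormal, v2)) > distanceThreshold)
        return true;

    return false;
}
```

For instance, three pixels on a flat floor (all normals (0, 1, 0), all positions in the plane y = 0) report no edge, while a floor pixel next to a box-side pixel with normal (1, 0, 0) trips the normals dot criterion.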
The code uses two criteria to determine whether we are on an edge. The first checks the dot products of the normals. Its purpose is to detect edges that appear on a continuous surface with varying normals, where, for instance, two perpendicular planes meet (think of the side of a box standing on a floor).
The second criterion checks distances. Imagine two planes parallel to each other but at different heights. When viewed in such a way that an edge of the top plane appears in front of the bottom plane, we clearly have, well, an edge to detect. The normals dot product won't catch it because the normals of all pixels here are the same (the planes are parallel). So in this case we track the vectors from the center pixel to the neighbouring pixels: if a neighbour is too far away, we have an edge. Note that we don't measure the raw length of those vectors but dot them with the center pixel's normal, which gives the distance from the neighbour to the center pixel's plane. This avoids false positives for pixels that lie on the same plane but are far away from each other. In that case, obviously, we don't care that they are far apart; they lie on the same plane, so there is no edge.
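A quick numeric check of that last point. The `ProjectedDistance` helper below is my own illustrative name for the `abs(dot(...))` expression from the shader:

```cpp
#include <cmath>

// Distance from a neighbour to the center pixel's plane: the offset vector v
// dotted with the center normal n, rather than the raw length of v.
float ProjectedDistance(float nx, float ny, float nz, float vx, float vy, float vz)
{
    return fabsf(nx * vx + ny * vy + nz * vz);
}
```

With the floor normal (0, 1, 0), a neighbour 100 units away on the same plane gives a projected distance of 0 (no edge, despite the large raw distance), while a neighbour on a parallel plane just 0.5 units higher gives 0.5, well above the 0.01 threshold.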
I know that a picture is worth a thousand words. That's why I preferred to describe the code with words instead of putting images here. Wait... what? That's right. Your best bet is to just take this simple piece of code, try it in practice and see exactly how it works. I highly recommend making the following modifications and seeing what happens:
1. Leave just the normals dot criterion.
2. Leave just the distances difference criterion.
3. Normalize vectors $v_1$ and $v_2$ in the distances difference criterion.
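As a hint for modification 3: normalizing $v_1$ and $v_2$ turns the projected distance into the sine of the angle between the neighbour direction and the center pixel's tangent plane, so the test no longer depends on how far apart the pixels are. A sketch of what that variant computes (the helper name is mine):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

// |dot(n, normalize(v))| equals |sin| of the angle between v and the surface's
// tangent plane; unlike the unnormalized version, it doesn't grow with |v|.
float NormalizedCriterion(float3 n, float3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return fabsf((n.x * v.x + n.y * v.y + n.z * v.z) / len);
}
```

For a vector lying in the plane the result is 0 regardless of its length, so a fixed threshold behaves very differently from the distance-based one.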
Have fun :).