Surface-Based Lighting


GPU Gems


Chapter 19. Image-Based Lighting

Kevin Bjorke
NVIDIA

Cube maps are typically used to create reflections from an environment that is considered to be infinitely far away. But with a small amount of shader math, we can place objects inside a reflection environment of a specific size and location, providing higher quality, image-based lighting (IBL).

19.1 Localizing Image-Based Lighting

Cube-mapped reflections are now a standard part of real-time graphics, and they are key to the appearance of many models. Yet one aspect of such reflections defies realism: the reflection from a cube map always appears as if it’s infinitely far away. This limits the usefulness of cube maps for small, enclosed environments, unless we are willing to accept the expense of regenerating cube maps each time our models move relative to one another. See Figure 19-1.

Figure 19-1 Typical "Infinite" Reflections

When moving models through an interior environment, it would be useful to have a cube map that behaved as if it were only a short distance away—say, as big as the current room. As our model moved within that room, the reflections would scale appropriately bigger or smaller, according to the model’s location in the room. Such an approach could be very powerful, grounding the viewer’s sense of the solidity of our simulated set, especially in environments containing windows, video monitors, and other recognizable light sources. See Figure 19-2.

Fortunately, such a localized reflection can be achieved with only a small amount of additional shader math. Developers of some recent games, in fact, have managed to replace a lot of their localized lighting with such an approach.

Let’s look at Figure 19-3. We see a reflective object (a large gold mask) in a fairly typical reflection-mapped environment.

Figure 19-3 Reflective Object with Localized Reflection

Now let’s consider Figure 19-4, a different frame from the same short animation. The maps have not changed, but look at the differences in the reflection! The reflection of the window, which was previously small, is now large—and it lines up with the object. In fact, the mask slightly protrudes through the surface of the window, and the reflections of the texture-mapped window blinds line up precisely. Likewise, look for the reflected picture frame, now strongly evident in the new image.

Figure 19-4 Localized Reflection in a Different Location

At the same time, the green ceiling panels (this photographic cube map shows the lobby of an NVIDIA building), which were evident in the first frame, have now receded in the distance and cover only a small part of the reflection.

This reflection can also be bump mapped, as shown in Figure 19-5 (only bump mapping has been added). See the close-up of this same frame in Figure 19-6.

Figure 19-5 Bump Applied to Localized Reflection

Figure 19-6 Close-Up of Figure 19-5, Showing Reflection Alignment

Unshaded, the minimalism of the geometry is readily apparent in Figure 19-7.

Figure 19-7 Flat-Shaded Geometry from the Sample Scene

The illustration in Figure 19-8 shows the complete simple scene. The large cube is our model of the room (the shading will be described later). The 3D transform of the room volume is passed to the shader on the reflective object, allowing us to create the correct distortions in the reflection directly in the pixel shader.

Figure 19-8 Top, Side, and Front Views Showing Camera, Reflective Object, and Simple "Room" Object

19.2 The Vertex Shader

To create a localized frame of reference for lighting, we need to create a new coordinate system. In addition to the standard coordinate spaces such as eye space and object space, we need to create lighting space—locations relative to the cube map itself. This new coordinate space will allow us to evaluate object locations relative to the finite dimensions of the cube map.

To simplify the math, we'll assume a fixed "radius" of 1.0 for our cube map—that is, a cube ranging from –1.0 to 1.0 in each dimension (the cube shape is really a convenience for the texturing hardware; we will project its angles against the sphere of all 3D direction vectors). This size makes it relatively easy for animators and lighting/level designers to pose the location and size of the cube map using 3ds max nulls, Maya place3DTexture nodes, or similar "dummy" objects.

In our example, we’ll pass two float4x4 transforms to the vertex shader: the matrix of the lighting space (relative to world coordinates) and its inverse transpose. Combined with the world and view transforms, we can express the surface coordinates in lighting space.

We’ll pass per-vertex normal, tangent, and binormal data from the CPU application, so that we can also bump map the localized reflection.

The data we’ll send to the pixel shader will contain values in both world and lighting coordinate systems.

Listing 19-1 shows the vertex shader.

Example 19-1. Vertex Shader to Generate World-Space and Lighting-Space Coordinates
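The listing itself did not survive in this copy. As a stand-in, here is a minimal Cg sketch of such a vertex shader; every identifier (WorldXf, WorldITXf, LightingXf, LightingITXf, WvpXf, ViewPos) is an assumption for illustration, not necessarily the chapter's actual code.

struct appdata {
    float4 Position : POSITION;
    float4 Normal   : NORMAL;
    float4 Tangent  : TANGENT0;
    float4 Binormal : BINORMAL0;
};

struct vertexOutput {
    float4 HPosition : POSITION;   // clip-space position
    float3 WorldView : TEXCOORD0;  // eye-to-surface vector, world space
    float3 LPosition : TEXCOORD1;  // surface position, lighting space
    float3 LNormal   : TEXCOORD2;  // shading vectors, lighting space
    float3 LTangent  : TEXCOORD3;
    float3 LBinormal : TEXCOORD4;
    float3 LView     : TEXCOORD5;  // surface-to-eye vector, lighting space
};

vertexOutput localRefVS(appdata IN,
                        uniform float4x4 WvpXf,        // object -> clip
                        uniform float4x4 WorldXf,      // object -> world
                        uniform float4x4 WorldITXf,    // its inverse transpose
                        uniform float4x4 LightingXf,   // world -> lighting space
                        uniform float4x4 LightingITXf, // its inverse transpose
                        uniform float3   ViewPos)      // eye point, world space
{
    vertexOutput OUT;
    OUT.HPosition = mul(WvpXf, IN.Position);
    // First transform: object space into world space.
    float4 Pw = mul(WorldXf, IN.Position);
    float3 Nw = mul(WorldITXf, float4(IN.Normal.xyz, 0)).xyz;
    float3 Tw = mul(WorldITXf, float4(IN.Tangent.xyz, 0)).xyz;
    float3 Bw = mul(WorldITXf, float4(IN.Binormal.xyz, 0)).xyz;
    OUT.WorldView = Pw.xyz - ViewPos;
    // Second transform: world space into lighting (cube-map) space.
    OUT.LPosition = mul(LightingXf, Pw).xyz;
    OUT.LNormal   = mul(LightingITXf, float4(Nw, 0)).xyz;
    OUT.LTangent  = mul(LightingITXf, float4(Tw, 0)).xyz;
    OUT.LBinormal = mul(LightingITXf, float4(Bw, 0)).xyz;
    float3 eyeL   = mul(LightingXf, float4(ViewPos, 1)).xyz;
    OUT.LView     = eyeL - OUT.LPosition;
    return OUT;
}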

In this example, the point and vector values are transformed twice: once into world space, and then from world space into lighting space. If your CPU application is willing to do a bit more work, you can also preconcatenate these matrices, and transform the position, normal, tangent, and binormal vectors with only one multiplication operation each. The method shown is used in CgFX, where the "World" and "WorldIT" transforms are automatically tracked and supplied by the CgFX parser, while the lighting-space transforms are supplied by user-defined values (say, from a DCC application).

19.3 The Fragment Shader

Given the locations of the shaded points and their shading vectors relative to lighting space, the pixel portion is relatively straightforward. We take the reflection vector expressed in lighting space and, starting from the surface location in lighting space, intersect it with a sphere of radius 1.0 centered at the origin of lighting space, by solving the quadratic equation of that sphere.

As a "safety precaution," we assign a default color of red (float4(1, 0, 0, 0)): if a point is shaded outside the sphere (so there can be no reflection), that point will appear red, making any error obvious during development. The fragment shader is shown in Listing 19-2.

Example 19-2. Localized-Reflection Pixel Shader
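Again, the original listing is missing from this copy; the sketch below (Cg, with assumed names matching the vertex-shader sketch above) shows the core of the technique: the ray-sphere intersection and the cube-map lookup.

float4 localRefPS(vertexOutput IN,
                  uniform samplerCUBE EnvSampler, // lighting-space cube map
                  uniform float3 SurfColor,  // metal tint; white for dielectrics
                  uniform float  Kr,         // base reflectivity
                  uniform float  FresExp) : COLOR  // Schlick exponent, typ. 5.0
{
    float4 result = float4(1, 0, 0, 0);  // "safety" red: outside the sphere
    float3 P  = IN.LPosition;            // surface point, lighting space
    float3 Nu = normalize(IN.LNormal);   // unbumped normal (use Nb if bumped)
    float3 V  = normalize(IN.LView);     // surface-to-eye, lighting space
    float3 R  = reflect(-V, Nu);         // reflection direction, lighting space
    // Intersect the ray P + t*R with the unit sphere |X| = 1:
    //   t^2 + 2(P.R)t + (P.P - 1) = 0
    float b = dot(P, R);
    float c = dot(P, P) - 1.0;
    if (c < 0.0) {                       // point lies inside the sphere
        float t = -b + sqrt(b * b - c);  // positive root: the ray's exit point
        float3 hitDir = P + t * R;       // sphere is centered at the origin, so
                                         // the hit point is the lookup direction
        float vdn  = abs(dot(V, Nu));
        float fres = Kr + (1.0 - Kr) * pow(1.0 - vdn, FresExp); // Schlick-style
        result = fres * float4(SurfColor, 1.0) * texCUBE(EnvSampler, hitDir);
    }
    return result;
}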

19.3.1 Additional Shader Details

We supply a few additional optional terms, to enhance the shader’s realism.

The first enhancement is for surface color: this is supplied for metal surfaces, because the reflections from metals will pick up the color of that metal. For dielectric materials such as plastic or water, you can eliminate this term or assign it as white.

The second set of terms provides Fresnel-style attenuation of the reflection. These terms can be eliminated for purely metallic surfaces, but they are crucial for realism on plastics and other dielectrics. The math here uses a power function: if user control over the Fresnel approximation isn’t needed, the falloff can be encoded as a 1D texture and indexed against abs(vdn).

For some models, you may find it looks better to attenuate the Fresnel against the unbumped normal: this can help suppress high-frequency "sparklies" along object edges. In that case, use Nu instead of Nb when calculating vdn.

For pure, smooth metals, the Fresnel attenuation is zero: just drop the calculation of fres and use Kr instead. But in the real world, few materials are truly pure; a slight drop in reflectivity is usually seen even on fairly clean metal surfaces, and the drop is pronounced on dirty surfaces. Likewise, dirty metal reflections will often tend toward less-saturated color than the "pure" metal. Use your best judgment, balancing your performance and complexity needs.

Try experimenting with the value of the FresExp exponent. See Figure 19-9. While Christophe Schlick (1994), the originator of this approximation, specified an exponent of 5.0, using lower values can create a more layered, or lacquered, appearance. An exponent of 4.0 can also be quickly calculated by two multiplies, rather than the potentially expensive pow() function.
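For example, with an exponent of 4.0 the falloff term from the sketch above can be rewritten without pow() (vdn and Kr as before; a hedged fragment, not the chapter's code):

float f = 1.0 - vdn;  // Schlick base term
f = f * f;            // (1 - vdn)^2
f = f * f;            // (1 - vdn)^4, via two multiplies
float fres = Kr + (1.0 - Kr) * f;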

Figure 19-9 Effects of the Fresnel-Attenuation Terms


The shader in Listing 19-2 can optionally flip the y portion of the reflection vector. This optional step was added to accommodate heterogeneous development environments where cube maps created for DirectX and OpenGL may be intermixed (the cube map specifications for these APIs differ in their handling of "up"). For example, a scene may be developed in Maya (OpenGL) for a game engine developed in DirectX.

19.4 Diffuse IBL

Cube maps can also be used to determine diffuse lighting. Programs such as Debevec’s HDRShop can integrate the full Lambertian contributions from a cube-mapped lighting environment, so that the diffuse contribution can be looked up simply by passing the surface normal to this preconvolved cube map (as opposed to reflective lighting, where we would pass a reflection vector based on both the surface normal and the eye location).

Localizing the diffuse vector, unfortunately, provides a less satisfying result than localizing the reflections, because the diffuse-lighting map has encoded its notion of the point's "visible hemisphere." These integrations will be incorrect for values away from the center of the sphere. Depending on your application, these errors may be acceptable or not. For some cases, linearly interpolating between multiple diffuse maps may also provide a degree of localization. Such maps tend to have very low frequencies, which is a boon for simple lighting, because errors must be large before they are noticeable (if noticeable at all). Some applications, therefore, will be able to perform all lighting calculations simply by using diffuse and specular cube maps.

By combining diffuse and specular lighting into cube maps, you may find that some applications have no need of any additional lighting information.

19.5 Shadows

Using shadows with IBL complicates matters but does not preclude their use. Stencil shadow volume techniques can be applied here, as can shadow maps. In both cases, it may be wise to provide a small ambient-lighting term (applied in an additional pass when using stencil shadow volumes) to avoid objects disappearing entirely into darkness (unless that’s what you want).

With image-based lighting, it's natural to ask: Where does the shadow come from? Shadows can function as powerful visual cues even if they are not perfectly "motivated." That is, the actual source of the shadow may not exactly correspond to the light source. In the case of IBL, this is almost certainly true: shadows from IBL would need to match a large number of potential light directions, often resulting in a very soft shadow. Yet techniques such as shadow mapping and stencil shadowing typically result in shadows with hard edges or only slight softening.

Fortunately, this is often not a problem if the directions of the shadow sources are chosen wisely. Viewers will often accept highly artificial shadows, because the spatial and graphical aspects of shadows are usually more important than their role in "justifying" the lighting (in fact, most television shows and movies tend to have very "unjustified" lighting). The best bet, when adding shadows to an arbitrary IBL scene, is to pick the direction in your cube map with the brightest area. Barring that, aim the shadow where you think it will provide the most graphic "snap" to the dimensionality of your models.

Shadows in animation are most crucial for connecting characters and models to their surroundings. The shadow of a character on the ground tells you if he is standing, running, or leaping in relationship to the ground surface. If his feet touch their shadow, he’s on the ground (we call shadows drawn for this purpose contact shadows). If not, he’s in the air.

This characteristic of shadowing, exploited for many years by cel animators, suggests that it may often be advantageous to worry only about the contact shadows in an IBL scene. If all we care about is the shadow of the character on the ground, then we can make the simplifying assumption when rendering that the shadow doesn’t need to be evaluated for depth, only for color. This means we can just create a projected black-and-white or full-color shadow, potentially with blur, and just assume that it always hits objects that access that shadow map. This avoids depth comparisons and gives us a gain in effective texture bandwidth (because simple eight-bit textures can be used).

In such a scenario, characters’ surfaces don’t access their own shadow maps; that is, they don’t self-shadow. Their lighting instead comes potentially exclusively from IBL. Game players will still see the character shadows on the environment, providing them with the primary benefit of shadows: a solid connection between the character and the 3D game environment.

19.6 Using Localized Cube Maps As Backgrounds

In the illustrations in this chapter, we can see the reflective object interacting with the background. Without the presence of the background, the effect might be nearly unnoticeable.

In many cases, we can make cube maps from 3D geometry and just apply the map(s) to the objects within that environment—while rendering the environment normally. Alternatively, as we’ve done in Figure 19-10, we can use the map as the environment, and project it onto simpler geometry.

Figure 19-10 Lines Showing the Edges of the Room Cube Object

For the background cube, we also pass the same transform for the unit-cube room. In fact, for the demo scene, we simply pass the room shader its own transform. The simple geometry is just that—geometry—and doesn’t need to have UV mapping coordinates or even surface normals.

As we can also see from Figure 19-10, using a simple cube in place of full scene geometry has definite limits! Note the "bent" ceiling on the left. Using proxy geometry in this way usually works best when the camera is near the center of the cube. Synthetic environments (as opposed to photographs, such as this one) can also benefit by lining up flat surfaces such as walls and ceilings exactly with the boundaries of the lighting space.

The vertex shader will pass a view vector and the usual required clip-space position.

Listing 19-3 shows the vertex shader itself.

Example 19-3. Vertex Shader for Background Cube Object
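The listing itself did not survive in this copy; a minimal Cg sketch (identifier names are assumptions) might look like this:

struct bgVertexOutput {
    float4 HPosition : POSITION;   // clip-space position
    float3 WorldView : TEXCOORD0;  // eye-to-vertex vector, world space
};

bgVertexOutput backgroundVS(float4 Position : POSITION,
                            uniform float4x4 WvpXf,    // object -> clip
                            uniform float4x4 WorldXf,  // object -> world
                            uniform float3   ViewPos)  // eye point, world space
{
    bgVertexOutput OUT;
    OUT.HPosition = mul(WvpXf, Position);
    float3 Pw = mul(WorldXf, Position).xyz;
    OUT.WorldView = Pw - ViewPos;  // interpolated per pixel
    return OUT;
}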

The pixel shader just uses this interpolated view vector to perform a direct texture lookup into the cube map, applying an optional tint color. See Listing 19-4.

Example 19-4. Pixel Shader for Background Cube Object
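As with the other listings, the code is missing from this copy; a hedged Cg sketch with assumed names:

float4 backgroundPS(bgVertexOutput IN,
                    uniform samplerCUBE EnvSampler,
                    uniform float3 TintColor) : COLOR
{
    float3 dir = normalize(IN.WorldView);  // interpolated view direction
    return float4(TintColor, 1.0) * texCUBE(EnvSampler, dir);
}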

This shader is designed specifically to work well when projected onto a (potentially distorted) cube. Using variations with other simple geometries, such as a sphere, a cylinder, or a flat backplane, is also straightforward.

19.7 Conclusion

Image-based lighting provides a complex yet inexpensive alternative to numerically intensive lighting calculations. Adding a little math to this texturing method can give us a much wider range of effects than "simple" IBL, providing a stronger sense of place to our 3D images.


Standard Surface

A conversion script (Convert Deprecated), which converts legacy Standard shaders to Standard Surface shaders, is available in the Arnold Utility menu of MtoA. This simple script also converts unsupported materials (Phong, Blinn, mia_material, etc.).

The Standard Surface shader is a physically-based shader capable of producing many types of materials. It includes a diffuse layer, a specular layer with complex Fresnel for metals, specular transmission for glass, subsurface scattering for skin, thin scattering for water and ice, a secondary specular coat, and light emission.


Material Types

By default, the parameters are appropriate for materials such as plastic, wood or stone. By setting a few key parameters to 1, different types of materials can be quickly created:

  • Metalness: gold, silver, iron, car paint.
  • Transmission: glass, water, honey, soap bubble.
  • Subsurface: skin, marble, wax, paper, leaves.
  • Thin Walled: paper, leaves, soap bubble.

Parameter values between 0 and 1 may be used to create more complex materials that are a mix of basic material types.

Energy Conservation

Standard Surface is energy conserving by default. All its layers are balanced so that the amount of light leaving the surface does not exceed the amount of incoming light. For example, as a surface is made more metallic and the specular layer contribution is increased, the diffuse layer contribution is reduced accordingly to ensure energy conservation.

Diffuse and rough (left) to metallic specular (right).

When using layer weights or colors with values higher than 1, energy conservation is broken. Creating such materials is discouraged, as they will not behave predictably under different lighting and may lead to increased noise and poor rendering performance.

Due to the large number of controls, the Standard Surface shader is split into several groups. The individual settings for each group are described in more detail in the pages below.

The MtoA material library can be found here.

Further information about physically based rendering in Arnold can be found here.

Surface Normal Direction

When rendering diffuse surfaces, it is very important that the normals of the geometry face in the right direction. In the example below, you can see the difference between normals facing inwards, in the wrong direction (left side), and normals facing outwards, correctly (right side).

Surface Shader lighting examples

This page provides examples of custom lighting models in Surface Shaders. For more general Surface Shader guidance, see Surface Shader Examples.

Because Deferred Lighting does not play well with some custom per-material lighting models, most of the examples below make the shaders compile to Forward Rendering only.

Diffuse

The following is an example of a shader that uses the built-in Lambert lighting model:
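The code block was lost in this copy; it is essentially Unity's stock textured-diffuse surface shader, reconstructed here along the lines of the Unity documentation:

Shader "Example/Diffuse Texture" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // Built-in Lambert lighting model; surf only fills in the albedo
        #pragma surface surf Lambert
        struct Input {
            float2 uv_MainTex;
        };
        sampler2D _MainTex;
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}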

Here’s how it looks with a Texture and without a Texture, with one directional Light in the Scene:

The following example shows how to achieve the same result by writing a custom lighting model instead of using the built-in Lambert model.

To do this, you need to use a number of Surface Shader lighting model functions. Here’s a simple Lambert one. Note that only the CGPROGRAM section changes; the surrounding Shader code is exactly the same:
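The listing is missing from this copy; reconstructed along the lines of the Unity documentation example (only the CGPROGRAM section, as noted above):

CGPROGRAM
#pragma surface surf SimpleLambert

// Custom lighting functions must be named Lighting<Name>, where <Name>
// matches the identifier given in the #pragma surface line.
half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
    half NdotL = dot (s.Normal, lightDir);  // Lambert cosine term
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (NdotL * atten);
    c.a = s.Alpha;
    return c;
}

struct Input {
    float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG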

This simple Diffuse lighting model uses the LightingSimpleLambert function. It computes lighting by calculating a dot product between surface normal and light direction, and then applying light attenuation and color.

Diffuse Wrap

The following example shows Wrapped Diffuse, a modification of Diffuse lighting where illumination “wraps around” the edges of objects. It’s useful for simulating subsurface scattering effects. Only the CGPROGRAM section changes, so once again, the surrounding Shader code is omitted:
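Reconstructed along the lines of the Unity documentation example; the half-diffuse wrap factor is the only change from the Lambert version above:

CGPROGRAM
#pragma surface surf WrapLambert

half4 LightingWrapLambert (SurfaceOutput s, half3 lightDir, half atten) {
    half NdotL = dot (s.Normal, lightDir);
    half diff = NdotL * 0.5 + 0.5;   // wrap: remap [-1, 1] to [0, 1]
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten);
    c.a = s.Alpha;
    return c;
}

struct Input {
    float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG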

Here’s how it looks with a Texture and without a Texture, with one directional Light in the Scene:

Toon Ramp

The following example shows a “Ramp” lighting model that uses a Texture ramp to define how surfaces respond to the angles between the light and the normal. This can be used for a variety of effects, and is especially effective when used with Toon lighting.
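The listing is missing here; reconstructed along the lines of the Unity documentation (the _Ramp texture property declared in the surrounding Shader block is assumed):

CGPROGRAM
#pragma surface surf Ramp

sampler2D _Ramp;   // ramp texture indexed by the wrapped N.L term

half4 LightingRamp (SurfaceOutput s, half3 lightDir, half atten) {
    half NdotL = dot (s.Normal, lightDir);
    half diff = NdotL * 0.5 + 0.5;                     // remap [-1, 1] to [0, 1]
    half3 ramp = tex2D (_Ramp, float2(diff, diff)).rgb;
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * ramp * atten;
    c.a = s.Alpha;
    return c;
}

struct Input {
    float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG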


Here’s how it looks with a Texture and without a Texture, with one directional Light in the Scene:

Simple Specular

The following example shows a simple specular lighting model, similar to the built-in BlinnPhong lighting model.
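The missing listing, reconstructed along the lines of the Unity documentation example (the hardcoded exponent 48.0 stands in for BlinnPhong's usual material-driven shininess):

CGPROGRAM
#pragma surface surf SimpleSpecular

half4 LightingSimpleSpecular (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten) {
    half3 h = normalize (lightDir + viewDir);       // half vector

    half diff = max (0, dot (s.Normal, lightDir));  // diffuse term

    float nh = max (0, dot (s.Normal, h));
    float spec = pow (nh, 48.0);                    // Blinn-Phong specular lobe

    half4 c;
    c.rgb = (s.Albedo * _LightColor0.rgb * diff + _LightColor0.rgb * spec) * atten;
    c.a = s.Alpha;
    return c;
}

struct Input {
    float2 uv_MainTex;
};
sampler2D _MainTex;
void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG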

Here’s how it looks with a Texture and without a Texture, with one directional Light in the Scene:

Custom GI

We’ll start with a Shader that mimics Unity’s built-in GI:
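Reconstructed along the lines of the Unity documentation: the custom lighting function and its _GI companion simply forward to the built-in Standard lighting and GI functions.

Shader "Example/CustomGI_Default" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf StandardDefaultGI

        #include "UnityPBSLighting.cginc"

        inline half4 LightingStandardDefaultGI(SurfaceOutputStandard s, half3 viewDir, UnityGI gi)
        {
            return LightingStandard(s, viewDir, gi);   // built-in Standard BRDF
        }

        inline void LightingStandardDefaultGI_GI(
            SurfaceOutputStandard s,
            UnityGIInput data,
            inout UnityGI gi)
        {
            LightingStandard_GI(s, data, gi);          // built-in GI gathering
        }

        struct Input {
            float2 uv_MainTex;
        };
        sampler2D _MainTex;
        void surf (Input IN, inout SurfaceOutputStandard o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}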

Now, let’s add some tone mapping on top of the GI:
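The original listing is missing; as a hedged stand-in, the sketch below modifies only the lighting functions from the shader above (with the pragma renamed to StandardToneMappedGI) and applies a simple Reinhard curve to the indirect diffuse GI. The choice of operator is an assumption; the exact tone mapping from the original example is not preserved in this copy.

#pragma surface surf StandardToneMappedGI

inline half4 LightingStandardToneMappedGI(SurfaceOutputStandard s, half3 viewDir, UnityGI gi)
{
    return LightingStandard(s, viewDir, gi);
}

inline void LightingStandardToneMappedGI_GI(
    SurfaceOutputStandard s,
    UnityGIInput data,
    inout UnityGI gi)
{
    LightingStandard_GI(s, data, gi);
    // Illustrative Reinhard tone map on the indirect diffuse GI term
    gi.indirect.diffuse = gi.indirect.diffuse / (1.0 + gi.indirect.diffuse);
}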


PlayCanvas User Manual

To get the best results with Physically Based Rendering in PlayCanvas, you can use a technique called Image Based Lighting (IBL), which uses pre-rendered image data as the source of ambient and reflected light.

This technique relies on a CubeMap: an environment map made of 6 textures (faces) forming a cube, providing full surround texture coverage.

Image data can be stored in LDR or HDR (High Dynamic Range) color space. LDR stores only the 0.0 to 1.0 range (256 gradations per channel), while HDR can store values above 1.0 (what is considered "white"). In combination with environment factors such as gamma correction, tone mapping, and exposure, HDR preserves more lighting detail and gives artists much better control over light quality and the desired results.

Notice how the bright parts of the texture are clamped when using LDR

Energy Conservation

The concept derives from the fact that both diffuse and reflected light come from the light hitting the material, so the sum of diffuse and reflected light cannot exceed the total incoming light. In practice, this means that if a surface is highly reflective, it will show very little diffuse color; and conversely, if a material has a bright diffuse color, it cannot reflect much.


In nature, smoother surfaces have sharper reflections and rougher surfaces have blurrier ones. The reason is basically that rougher surfaces have larger, more prominent microfacets, reflecting light in many directions, while smooth surfaces tend to reflect it mostly in one direction. When light coming from different directions is averaged inside a tiny visible point, the result looks blurry to us, and also less bright, thanks to energy conservation. PlayCanvas simulates this behaviour with the glossiness parameter, which works automatically for lights; for IBL, however, we must precalculate the correct blurred response in advance. This is what the Prefilter button does.

The Prefilter button is available on the CubeMap asset in the Inspector; prefiltering is mandatory to enable IBL on physical materials that use a CubeMap.

Authoring Environment Maps

Environment maps come in different projections: equirectangular, CubeMap (face list), azimuthal, and many others. WebGL and the GPU work with a face list — a set of 6 textures representing the sides of a cube, i.e., a CubeMap — so an environment map should be converted into 6 textures if it comes in any other projection.

To convert between projections, you can use various tools; one of them is the cross-platform, open-source CubeMap filtering tool cmftStudio.

CubeMaps can be CGI-rendered or assembled from photography, and there are websites where you can download or buy HDR environment maps. Some good sources for experimenting are the sIBL Archive, No Emotion HDR’s, Open Footage, and Paul Debevec. Environment maps often come in equirectangular projection and can be converted with cmftStudio, mentioned above.

Rendering CubeMap

A CubeMap is made of 6 faces, each representing a square side of a cube. Simply put, it can be rendered using a square-viewport camera with a 90-degree field of view, rotated in 90-degree steps to face each direction.

You can use popular 3D modelling tools, or photography and 360 imagery software. The faces should be rendered in linear gamma space and without the color corrections described in the Lightmapping Gamma Correction section.

One of the plugins for 3D Studio Max such as this can be used to render VRay CubeMap Faces, ready to be uploaded into PlayCanvas Editor.

Applying IBL

This can be done using two methods:

  1. Use CubeMap as Skybox in Scene Settings.
  2. Use CubeMap as environment map on the Material directly.

Box Projection Mapping

This technique changes the projection of the environment map, allowing you to specify a box within the space so that the CubeMap corresponds to its bounds. The most common use is to simulate reflections on surfaces within a room-scale environment.
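The underlying math is a ray-box intersection, much like the sphere intersection in the GPU Gems chapter above. A minimal sketch in HLSL-style syntax (not PlayCanvas source; all names are illustrative):

// Correct a reflection direction so the cube map behaves as if it were
// projected onto the walls of a box (boxMin..boxMax, map shot at boxCenter).
float3 boxProject(float3 worldPos, float3 reflDir,
                  float3 boxMin, float3 boxMax, float3 boxCenter)
{
    // Per-axis distances along reflDir to the two slab planes; max() picks
    // the far plane on each axis for a point inside the box.
    float3 tMax = (boxMax - worldPos) / reflDir;
    float3 tMin = (boxMin - worldPos) / reflDir;
    float3 tFar = max(tMax, tMin);
    float t = min(min(tFar.x, tFar.y), tFar.z);  // nearest exit face
    float3 hit = worldPos + t * reflDir;         // point on the box walls
    return hit - boxCenter;                      // corrected lookup direction
}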

Example

Here is an example and project of a scene using CubeMap Box Projection. Notice the reflection of the windows on the wooden floor and the subtle reflection on the ceiling, as well as the reflection of the room on the metal PlayCanvas logo on the wall on the right. This is a dynamic effect; it can provide very realistic reflections and gives the artist control over how surfaces reflect the room environment.

The lighting in this scene is implemented using Lightmap and AO textures and Box Projected IBL (reflections)

Physically Based Shading and Image Based Lighting

Light is complicated, and we really don’t have a full equation that accurately models light in the real world. This might sound confusing because of all the recent strides in CG and visual technology. Well, it’s all approximate – it’s just that we select functions which approximate really well. The unfortunate truth is: There is no one equation for light – only approximations.

Blinn-Phong is an approximation. If you’ve been following my recent blog posts, you might have identified that the lighting model I have been using since the start is Blinn-Phong. But let’s face it, Blinn-Phong hasn’t really improved with age – the paper on this algorithm was originally published in ’77. At the time of writing, it’s an almost 40-year-old approximation for lighting. Can’t we do better?

Well, yes actually!

Physically Based Shading

Important: The following article has a ton of before-and-after pictures. When something is paired with a direction, e.g. (Left), that refers to the image that takes up the left side of the view.

Modern shading models are often referred to as Physically Based. They feature a more complicated lighting model, which separates into multiple equations with three special interchangeable factors – the whole equation forms a Bidirectional Reflectance Distribution Function (BRDF) known as the Cook-Torrance Model. A BRDF is essentially a function which models the amount of reflected light across the surface of an object. Bidirectional means that if the light and the view were to switch places, the equation would produce the same results. Reflectance is just what it sounds like: some factor representing the amount of light reflected. Distribution describes how that light is spread over the object; much like cumulative distribution functions in probability, we expect the sum of all its parts to equal 1 (conservation of energy, in our case). And Function – it is a function.

The Cook-Torrance Model can be expressed as follows:
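The equation itself did not survive in this copy; the standard form of the Cook-Torrance specular BRDF is

f_{spec}(l, v) = \frac{F(l, h)\, G(l, v, h)\, D(h)}{4\,(n \cdot l)\,(n \cdot v)}, \qquad h = \frac{l + v}{\lVert l + v \rVert}

usually paired with a separate diffuse term for the full material.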

This model represents the amount of light reflected from an object (similar to Blinn-Phong) but with an approximation that takes into account the microscopic levels of detail on the surface of the object. The three functions F, G, and D are the specular factors which represent (respectively) Fresnel, Geometric Occlusion, and Normal (of Microfacet) Distribution. The power of this kind of BRDF is that different specular functions can be swapped out with whatever approximation you see fit (so long as they correspond to the same geometric meaning). What I mean by this is that there are several approximations to each of these functions, you only need to choose one, but you have the freedom to select whichever you want.

Let’s discuss the factors in more detail.

Fresnel Factor

Fresnel is the amount of light that reflects based on the angle of incidence between the light and the normal. As the angle of incidence becomes increasingly large, the amount of light that reflects into our eyes becomes greater. At a 90° angle of incidence (AOI), the amount of light that reflects is 100%. An interesting fact about the Fresnel factor is that every type of known material has reflection – yes, even the ones you wouldn’t expect. If you look towards a light such that you and the light have a large angle of incidence, you can force out this specular factor. It would make sense that no object completely consumes light; that wouldn’t physically make sense.

However, not everything reflects the same amount of light at all angles – in fact, the base reflectance at a 0° angle of incidence is known as F0. Different types of materials have different values of F0, ranging between 0.01 and 0.95; absolutely nothing falls outside of that range. (Silver is the most reflective metal, with a base F0 of 0.95; to my knowledge, ice is the lowest, at 0.018.)

Sc0tt Games has a pretty good table of non-metal reflective indices.

Geometric Occlusion Factor

The next factor represents the amount of the surface – at a microscopic level – that is self-occluding. This parameter should ideally only affect rough objects. As an object becomes more rough, the amount of microfacet self-occlusion increases, so the amount of specular light observed decreases.

If we try to imagine a perfectly smooth surface, we can identify that there are still impurities in it at a microscopic level. Because of this, we can say that there is some amount of shading going on, even if it’s small. Smith-Schlick-Beckmann, Smith-GGX, and Cook-Torrance all seem to have pretty good equations for Geometric Occlusion.

Normal Distribution Factor

This factor is very unfortunately named. The reason is that it’s often confused with the regular, mathematical Normal Distribution (like what we used to blur Exponential Shadow Maps in the previous blog post). However, the name is appropriate.

The Normal Distribution is a function which determines the probability that the faces of a microfacet surface are oriented towards the normal (in practice, evaluated at the half vector). This tends to control the spread and falloff of the specular term. You often see GGX used because it has a much wider tail to the specular reflection – which is pretty pleasing to the eye.

Microfacet BRDF Equations

So this wouldn’t be an experiment of all the different shading equations without a laundry list of equations to try. I’m just going to list the functions I found, and at the end of each section talk a little bit about my favorite combinations.

Definitions
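The definitions were stripped during extraction; judging from later references (the half vector, and the roughness re-parameterization mentioned in the Material Structure section), they likely included at least

h = \frac{l + v}{\lVert l + v \rVert}, \qquad \alpha = \text{roughness}^2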


Fresnel Equations
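The equations here were lost; Schlick's approximation, which the Comparison section says was the one used, is

F_{Schlick}(v, h) = F_0 + (1 - F_0)\,\bigl(1 - (v \cdot h)\bigr)^5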

Geometry Equations

Note: The general form of the Smith equations is to take the product of the function called twice – once with arguments (l, h) and once with arguments (v, h). As such:
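G_{Smith}(l, v, h) = G_1(l, h)\; G_1(v, h)

(The equation image was lost in this copy; this is the standard Smith product form described in the note above.)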

In the following equations, the variable i is the placeholder for whichever variable is plugged in first (l or v).

Geometry Equations (Smith)

Distribution Equations
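The individual distribution equations were lost in this copy; as one representative example, the GGX distribution favored in the Comparison section is

D_{GGX}(h) = \frac{\alpha^2}{\pi \bigl( (n \cdot h)^2 (\alpha^2 - 1) + 1 \bigr)^2}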

Cumulative Distribution (Sample Skewing)

Phong:
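\theta_h = \arccos\bigl( \xi^{1/(n + 2)} \bigr), \qquad \phi = 2\pi\,\xi_2

(Reconstructed from the standard inversion in Walter et al. 2007, an assumption; \xi and \xi_2 are uniform random samples in [0, 1) and n is the Phong exponent.)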

Beckmann:
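\theta_h = \arctan\bigl( \sqrt{ -\alpha^2 \ln(1 - \xi) } \bigr), \qquad \phi = 2\pi\,\xi_2

(Again reconstructed from Walter et al. 2007; the original equation was lost.)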

Comparison

Generally GGX, or some mixture of Smith/GGX, is very popular. I tend to like different ones depending on the scene and light composition. I stick with Schlick’s Approximation for Fresnel. For Geometric Occlusion I prefer either Smith/GGX, Smith-Schlick-Beckmann, or Cook-Torrance. And for Normal Distribution, GGX has a longer specular tail – I tend to prefer that. For importance sampling, I tend to mix and match (even though mathematically this is incorrect) by using Beckmann sampling with the GGX Normal Distribution. But you can see how the terms work together to produce pretty impressive results.

A sample showing interpolations between different Metallic and Roughness values.

What’s most impressive about the above picture is that every object here is white. The only changing parameters are Metallic and Roughness.

By these two variables alone we can represent a wide spectrum of materials. Towards the top we can see metals ranging from brushed and rough, to smooth and reflective. Moving down the metal spectrum, we hit a wall where objects seem to maintain some of their own diffuse color – these are called dielectrics. These objects range from glossy, crystalline materials, to smooth plastics. At the rough end of the spectrum you can spot matte surfaces and rubber materials.

In order to compare differences in specular factors, I have implemented all of the functions above as shader subroutines (OpenGL 4.0+), which allows me to dynamically switch factors of the BRDF without recompiling shaders. It’s definitely not as efficient as writing a compact implementation of the entire BRDF, but it allows us to see all of the possibilities with great ease. One interesting anomaly is that Smith-Beckmann didn’t seem to play nicely with any factor other than the Beckmann distribution. You’ll notice white speckles where the reflection is over-pronounced when Smith-Beckmann is paired improperly.

Material Structure

The material structure I’ve settled on is a simplified version of Unreal’s material system (Base Color, Metallic, and Roughness).

The Base Color is the color which we use for the diffuse portion of our lighting equation. It also doubles as the specular tint for metallic objects. So if we have an object that falls in the range of dielectrics, this color is used for the diffuse term; if it falls in the range of metals, it’s multiplied in as the specular tint. Metallic is simply the F0 value for the material, and it is clamped to the range [0.02, 0.99] (Cook-Torrance’s Fresnel equation didn’t play nice with an F0 of 1, and everything should have at least some specular). Roughness is a term which is used in several of the microfacet BRDF functions above; in order to make the distribution of rough/smooth more linear, we have re-parametrized roughness by squaring it (as outlined above in the Definitions section), and there is a minimum roughness of 0.01 (materials with infinitely smooth surfaces can exist in a vacuum, but due to cold welding this is a short-lived experience).

As I pointed out, when travelling along the spectrum of metals we hit a wall where diffuse is no longer applied. This “wall” is what separates the dielectrics (non-conductive materials) from the metals (conductive materials). This section of F0 is more commonly referred to as semiconductors (somewhat conductive materials). An interesting fact about measurements of different metals is that they tend to have absolutely no diffuse term. So what I do for convenience’s sake is split the materials into two separate calculations – dielectrics and metals – and interpolate between those calculations through the lesser-seen semiconductor range.

Very few materials fall within the semiconductor range of F0 values [0.2, 0.45]. But for ease of implementation, and to allow some form of physical blending, I do allow these ranges. The semiconductor range is where I interpolate between the two blend models: starting from the base F0 of the semiconductors up to the top-most value, we interpolate between the results of the two blend modes. Here is some shader code showing this interpolation:
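The code block is missing from this copy; below is a hedged HLSL-style sketch of the described blend (function and constant names are illustrative; the original post used GLSL subroutines). Dielectrics get a diffuse term plus untinted specular, metals get tinted specular only, and the semiconductor F0 range [0.2, 0.45] interpolates between the two.

static const float F0_SEMI_MIN = 0.2;   // base of the semiconductor range
static const float F0_SEMI_MAX = 0.45;  // top of the semiconductor range

float3 blendMaterial(float3 baseColor, float metallic,
                     float3 diffuseTerm, float3 specularTerm)
{
    // The two blend models described in the text.
    float3 dielectric = diffuseTerm * baseColor + specularTerm;
    float3 metal      = specularTerm * baseColor;  // no diffuse, tinted specular
    // Interpolate across the semiconductor range of F0 values.
    float t = saturate((metallic - F0_SEMI_MIN) / (F0_SEMI_MAX - F0_SEMI_MIN));
    return lerp(dielectric, metal, t);
}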

GPU Gems

GPU Gems is now available, right here, online. You can purchase a beautifully printed version of this book, and others in the series, at a 30% discount courtesy of InformIT and Addison-Wesley.

Please visit our Recent Documents page to see all the latest whitepapers and conference presentations that can help you with your projects.

Chapter 19. Image-Based Lighting

Kevin Bjorke
NVIDIA

Cube maps are typically used to create reflections from an environment that is considered to be infinitely far away. But with a small amount of shader math, we can place objects inside a reflection environment of a specific size and location, providing higher quality, image-based lighting (IBL).

19.1 Localizing Image-Based Lighting

Cube-mapped reflections are now a standard part of real-time graphics, and they are key to the appearance of many models. Yet one aspect of such reflections defies realism: the reflection from a cube map always appears as if it’s infinitely far away. This limits the usefulness of cube maps for small, enclosed environments, unless we are willing to accept the expense of regenerating cube maps each time our models move relative to one another. See Figure 19-1.

Figure 19-1 Typical «Infinite» Reflections

When moving models through an interior environment, it would be useful to have a cube map that behaved as if it were only a short distance away—say, as big as the current room. As our model moved within that room, the reflections would scale appropriately bigger or smaller, according to the model’s location in the room. Such an approach could be very powerful, grounding the viewer’s sense of the solidity of our simulated set, especially in environments containing windows, video monitors, and other recognizable light sources. See Figure 19-2.

Fortunately, such a localized reflection can be achieved with only a small amount of additional shader math. Developers of some recent games, in fact, have managed to replace a lot of their localized lighting with such an approach.

Let’s look at Figure 19-3. We see a reflective object (a large gold mask) in a fairly typical reflection-mapped environment.

Figure 19-3 Reflective Object with Localized Reflection

Now let’s consider Figure 19-4, a different frame from the same short animation. The maps have not changed, but look at the differences in the reflection! The reflection of the window, which was previously small, is now large—and it lines up with the object. In fact, the mask slightly protrudes through the surface of the window, and the reflections of the texture-mapped window blinds line up precisely. Likewise, look for the reflected picture frame, now strongly evident in the new image.

Figure 19-4 Localized Reflection in a Different Location

At the same time, the green ceiling panels (this photographic cube map shows the lobby of an NVIDIA building), which were evident in the first frame, have now receded in the distance and cover only a small part of the reflection.

This reflection can also be bump mapped, as shown in Figure 19-5 (only bump has been added). See the close-up of this same frame in Figure 19-6.

Figure 19-5 Bump Applied to Localized Reflection

Figure 19-6 Close-Up of , Showing Reflection Alignment

Unshaded, the minimalism of the geometry is readily apparent in Figure 19-7.

Figure 19-7 Flat-Shaded Geometry from the Sample Scene

The illustration in Figure 19-8 shows the complete simple scene. The large cube is our model of the room (the shading will be described later). The 3D transform of the room volume is passed to the shader on the reflective object, allowing us to create the correct distortions in the reflection directly in the pixel shader.

Figure 19-8 Top, Side, and Front Views Showing Camera, Reflective Object, and Simple «Room» Object

19.2 The Vertex Shader


To create a localized frame of reference for lighting, we need to create a new coordinate system. In addition to the standard coordinate spaces such as eye space and object space, we need to create lighting space— locations relative to the cube map itself. This new coordinate space will allow us to evaluate object locations relative to the finite dimensions of the cube map.

To simplify the math, we’ll assume a fixed «radius» of 1.0 for our cube map—that is, a cube ranging from –1.0 to 1.0 in each dimension (the cube shape is really a convenience for the texturing hardware; we will project its angles against the sphere of all 3D direction vectors). This size makes it relatively easy for animators and lighting/level designers to pose the location and size of the cube map using 3ds max nulls, Maya place3DTexture nodes, or similar «dummy» objects.

In our example, we’ll pass two float4x4 transforms to the vertex shader: the matrix of the lighting space (relative to world coordinates) and its inverse transpose. Combined with the world and view transforms, we can express the surface coordinates in lighting space.

We’ll pass per-vertex normal, tangent, and binormal data from the CPU application, so that we can also bump map the localized reflection.

The data we’ll send to the pixel shader will contain values in both world and lighting coordinate systems.

Listing 19-1 shows the vertex shader.

Example 19-1. Vertex Shader to Generate World-Space and Lighting-Space Coordinates

In this example, the point and vector values are transformed twice: once into world space, and then from world space into lighting space. If your CPU application is willing to do a bit more work, you can also preconcatenate these matrices, and transform the position, normal, tangent, and binormal vectors with only one multiplication operator. The method shown is used in CgFX, where the «World» and «WorldIT» transforms are automatically tracked and supplied by the CgFX parser, while the lighting-space transforms are supplied by user-defined values (say, from a DCC application).

19.3 The Fragment Shader

Given the location of the shaded points and their shading vectors, relative to lighting space, the pixel portion is relatively straightforward. We look at the reflection vector expressed in lighting space, and starting from the surface location in lighting space, we intersect it with a sphere of radius = 1.0, centered at the origin of light space, by solving the quadratic equation of that sphere.

As a «safety precaution,» we assign a default color of red (float4(1, 0, 0, 0)): if a point is shaded outside the sphere (so there can be no reflection), that point will appear red, making any error obvious during development. The fragment shader is shown in Listing 19-2.

Example 19-2. Localized-Reflection Pixel Shader

19.3.1 Additional Shader Details

We supply a few additional optional terms, to enhance the shader’s realism.

The first enhancement is for surface color: this is supplied for metal surfaces, because the reflections from metals will pick up the color of that metal. For dielectric materials such as plastic or water, you can eliminate this term or assign it as white.

The second set of terms provides Fresnel-style attenuation of the reflection. These terms can be eliminated for purely metallic surfaces, but they are crucial for realism on plastics and other dielectrics. The math here uses a power function: if user control over the Fresnel approximation isn’t needed, the falloff can be encoded as a 1D texture and indexed against abs(vdn).

For some models, you may find it looks better to attenuate the Fresnel against the unbumped normal: this can help suppress high-frequency «sparklies» along object edges. In that case, use Nu instead of Nb when calculating vdn.

Илон Маск рекомендует:  Что такое код iis администрирование

For pure, smooth metals, the Fresnel attenuation is zero: just drop the calculation of fres and use Kr instead. But in the real world, few materials are truly pure; a slight drop in reflectivity is usually seen even on fairly clean metal surfaces, and the drop is pronounced on dirty surfaces. Likewise, dirty metal reflections will often tend toward less-saturated color than the «pure» metal. Use your best judgment, balancing your performance and complexity needs.

Try experimenting with the value of the FresExp exponent. See Figure 19-9. While Christophe Schlick (1994), the originator of this approximation, specified an exponent of 5.0, using lower values can create a more layered, or lacquered, appearance. An exponent of 4.0 can also be quickly calculated by two multiplies, rather than the potentially expensive pow() function.

Figure 19-9 Effects of the Fresnel-Attenuation Terms

The shader in Listing 19-2 can optionally flip the y portion of the reflection vector. This optional step was added to accommodate heterogeneous development environments where cube maps created for DirectX and OpenGL may be intermixed (the cube map specifications for these APIs differ in their handling of «up»). For example, a scene may be developed in Maya (OpenGL) for a game engine developed in DirectX.

19.4 Diffuse IBL

Cube maps can also be used to determine diffuse lighting. Programs such as Debevec’s HDRShop can integrate the full Lambertian contributions from a cube-mapped lighting environment, so that the diffuse contribution can be looked up simply by passing the surface normal to this preconvolved cube map (as opposed to reflective lighting, where we would pass a reflection vector based on both the surface normal and the eye location).

Localizing the diffuse vector, unfortunately, provides a less satisfying result than localizing the reflections, because the diffuse-lighting map has encoded its notion of the point’s «visible hemisphere.» These integrations will be incorrect for values away from the center of the sphere. Depending on your application, these errors may be acceptable or not. For some cases, linearly interpolating between multiple diffuse maps may also provide a degree of localization. Such maps tend to have very low frequencies. This is a boon to use for simple lighting, because errors must be large before they are noticeable (if noticeable at all). Some applications, therefore, will be able to perform all lighting calculations simply by using diffuse and specular cube maps.

By combining diffuse and specular lighting into cube maps, you may find that some applications have no need of any additional lighting information.

19.5 Shadows

Using shadows with IBL complicates matters but does not preclude their use. Stencil shadow volume techniques can be applied here, as can shadow maps. In both cases, it may be wise to provide a small ambient-lighting term (applied in an additional pass when using stencil shadow volumes) to avoid objects disappearing entirely into darkness (unless that’s what you want).

With image-based lighting, it’s natural to ask: Where does the shadow come from? Shadows can function as powerful visual cues even if they are not perfectly «motivated.» That is, the actual source of the shadow may not exactly correspond to the light source. In the case of IBL, this is almost certainly true: shadows from IBL would need to match a large number of potential light directions, often resulting in a very soft shadow. Yet techniques such as shadow mapping and stencil shadowing typically result in shadows with hard edges or only slight softening.

Fortunately, this is often not a problem if the directions of the shadow sources are chosen wisely. Viewers will often accept highly artificial shadows, because the spatial and graphical aspects of shadows are usually more important than as a means to «justify» the lighting (in fact, most television shows and movies tend to have very «unjustified» lighting). The best bet, when adding shadows to an arbitrary IBL scene, is to pick the direction in your cube map with the brightest area. Barring that, aim the shadow where you think it will provide the most graphic «snap» to the dimensionality of your models.

Shadows in animation are most crucial for connecting characters and models to their surroundings. The shadow of a character on the ground tells you if he is standing, running, or leaping in relationship to the ground surface. If his feet touch their shadow, he’s on the ground (we call shadows drawn for this purpose contact shadows). If not, he’s in the air.

This characteristic of shadowing, exploited for many years by cel animators, suggests that it may often be advantageous to worry only about the contact shadows in an IBL scene. If all we care about is the shadow of the character on the ground, then we can make the simplifying assumption when rendering that the shadow doesn’t need to be evaluated for depth, only for color. This means we can just create a projected black-and-white or full-color shadow, potentially with blur, and just assume that it always hits objects that access that shadow map. This avoids depth comparisons and gives us a gain in effective texture bandwidth (because simple eight-bit textures can be used).

In such a scenario, characters’ surfaces don’t access their own shadow maps; that is, they don’t self-shadow. Their lighting instead comes potentially exclusively from IBL. Game players will still see the character shadows on the environment, providing them with the primary benefit of shadows: a solid connection between the character and the 3D game environment.

19.6 Using Localized Cube Maps As Backgrounds

In the illustrations in this chapter, we can see the reflective object interacting with the background. Without the presence of the background, the effect might be nearly unnoticeable.

In many cases, we can make cube maps from 3D geometry and just apply the map(s) to the objects within that environment—while rendering the environment normally. Alternatively, as we’ve done in Figure 19-10, we can use the map as the environment, and project it onto simpler geometry.

Figure 19-10 Lines Showing the Edges of the Room Cube Object

For the background cube, we also pass the same transform for the unit-cube room. In fact, for the demo scene, we simply pass the room shader its own transform. The simple geometry is just that—geometry—and doesn’t need to have UV mapping coordinates or even surface normals.

As we can also see from Figure 19-10, using a simple cube in place of full scene geometry has definite limits! Note the «bent» ceiling on the left. Using proxy geometry in this way usually works best when the camera is near the center of the cube. Synthetic environments (as opposed to photographs, such as this one) can also benefit by lining up flat surfaces such as walls and ceilings exactly with the boundaries of the lighting space.

The vertex shader will pass a view vector and the usual required clip-space position.

Listing 19-3 shows the vertex shader itself.

Example 19-3. Vertex Shader for Background Cube Object

The pixel shader just uses the fragment shader to derive a direct texture lookup into the cube map, along with an optional tint color. See Listing 19-4.

Example 19-4. Pixel Shader for Background Cube Object


This shader is designed specifically to work well when projected onto a (potentially distorted) cube. Using variations with other simple geometries, such as a sphere, a cylinder, or a flat backplane, is also straightforward.

19.7 Conclusion

Image-based lighting provides a complex yet inexpensive alternative to numerically intensive lighting calculations. Adding a little math to this texturing method can give us a much wider range of effects than «simple» IBL, providing a stronger sense of place to our 3D images.

19.8 Further Reading

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers discounts on this book when ordered in quantity for bulk purchases and special sales. For more information, please contact:

For sales outside of the U.S., please contact:

Visit Addison-Wesley on the Web: www.awprofessional.com

Library of Congress Control Number: 2004100582

GeForce™ and NVIDIA Quadro ® are trademarks or registered trademarks of NVIDIA Corporation.
RenderMan ® is a registered trademark of Pixar Animation Studios.
«Shadow Map Antialiasing» © 2003 NVIDIA Corporation and Pixar Animation Studios.
«Cinematic Lighting» © 2003 Pixar Animation Studios.
Dawn images © 2002 NVIDIA Corporation. Vulcan images © 2003 NVIDIA Corporation.

Copyright © 2004 by NVIDIA Corporation.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior consent of the publisher. Printed in the United States of America. Published simultaneously in Canada.

For information on obtaining permission for use of material from this work, please submit a written request to:

Pearson Education, Inc.
Rights and Contracts Department
One Lake Street
Upper Saddle River, NJ 07458

Text printed on recycled and acid-free paper.

 User Manual

To get best results with Physically Based Rendering in PlayCanvas you can use the technique called Image Based Lighting or IBL, it allows to use pre-rendered image data as source information for ambient and reflection light.

This technique relies on CubeMap — the environment map that is made of 6 texture (faces) forming a cube to have full surround texture coverage.

Image data can be stored in LDR or HDR (High Dynamic Range) color space, which allows to store more than 0.0 to 1.0 (256 gradations) in single channel. HDR allows to store values above 1.0 (what is considered «white»), with combination of many factors of environment such as gamma correction, tonemapping and exposure it allows to contain more light details and provide much better control over light quality and desirable results to artists.

Notice how bright parts in texture are clamped using LDR

Energy Conservation

The concept is derived from the fact that the diffuse light and the reflected light all come from the light hitting the material, the sum of diffuse and reflected light can not be more than the total light hitting the material. In practise this means that if a surface is highly reflective it will show very little diffuse color. And the opposite, if a material has a bright diffuse color, it can not reflect much.

In nature, smoother surfaces have sharper reflections and rougher surfaces have blurrier. The reason for that is basically that rougher surfaces have larger, more prominent microfacets, reflecting light in many directions, while smooth surfaces tend to reflect it mostly in one direction. When light coming from different directions is averaged inside a tiny visible point, the result looks blurry to us, and also less bright, thanks to energy conservation. PlayCanvas simulates this behaviour with the glossiness parameter, which works automatically for lights, however, for IBL we must precalculate the correct blurred response in advance. This is what the Prefilter button does.

Prefilter button is available on CubeMap asset in the Inspector, it is mandatory to enable IBL on physical materials using a CubeMap.

Authoring Environment Maps

Environment maps come in different projections: equirectangular, CubeMap (face list), azimuthal, and many others. WebGL and GPUs work with a face list, a set of 6 textures representing the sides of a cube (a CubeMap), so an environment map that comes in any other projection should be converted into 6 textures.

To convert between projections you can use various tools; one of them is cmftStudio, a cross-platform, open-source CubeMap filtering tool.

CubeMaps can be CGI-rendered or assembled from photography, and there are websites where you can download or buy HDR environment maps. Good sources for experimenting include the sIBL Archive, No Emotion HDR's, Open Footage, and Paul Debevec. These environment maps typically come in equirectangular projection and can be converted with cmftStudio, mentioned above.

Rendering a CubeMap

A CubeMap is made of 6 faces, each representing a square side of a cube. Simply put, it can be rendered with a square-viewport camera with a 90-degree field of view, rotated in 90-degree steps to face each of the six directions.

You can use popular 3D modelling tools, or photography and 360-imagery software. The faces should be rendered in linear gamma space and without the color corrections described in the Lightmapping Gamma Correction section.

One of the plugins for 3D Studio Max, such as this one, can be used to render V-Ray CubeMap faces ready to be uploaded into the PlayCanvas Editor.

Applying IBL

This can be done using two methods:

  1. Use CubeMap as Skybox in Scene Settings.
  2. Use CubeMap as environment map on the Material directly.

Box Projection Mapping

This technique changes the projection of the environment map, letting you specify a box in space so that the CubeMap corresponds to its bounds. The most common use is to simulate reflections on surfaces within a room-scale environment.
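The correction itself is a small amount of shader math: intersect the reflection ray with the box and look up the cube map in the direction of the hit point. The GLSL below is a minimal sketch of box projection, not PlayCanvas's internal implementation; boxMin, boxMax, and cubePos are assumed uniforms for the box bounds and the point where the CubeMap was captured:

    uniform vec3 boxMin;   // world-space minimum corner of the projection box
    uniform vec3 boxMax;   // world-space maximum corner of the projection box
    uniform vec3 cubePos;  // world-space position where the cube map was captured

    // Assumes worldPos lies inside the box and reflDir is non-degenerate.
    vec3 boxProject(vec3 worldPos, vec3 reflDir)
    {
        // Distances along reflDir to each axis-aligned slab of the box.
        vec3 tA = (boxMax - worldPos) / reflDir;
        vec3 tB = (boxMin - worldPos) / reflDir;
        // Forward intersection per axis, then the nearest wall overall.
        vec3 tFar = max(tA, tB);
        float t = min(min(tFar.x, tFar.y), tFar.z);
        // Re-express the hit point relative to the capture position.
        return (worldPos + reflDir * t) - cubePos;
    }

The returned direction is then used in place of the raw reflection vector, e.g. textureCube(environmentMap, boxProject(worldPos, reflect(-viewDir, normal))).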

Example

Here is an example (with project) of a scene using CubeMap box projection. Notice the reflection of the windows on the wooden floor, the subtle reflection on the ceiling, and the reflection of the room on the metal PlayCanvas logo on the wall to the right. This is a dynamic effect, and it can provide very realistic reflections and give artists control over how surfaces reflect the room environment.

The lighting in this scene is implemented using lightmap and AO textures together with box-projected IBL for reflections.


GPGPU-based surface inspection from structured white light — ee.oulu.fi


GPGPU-based surface inspection from structured white light

Miguel Bordallo López (a), Karri Niemelä (b), and Olli Silvén (a)
(a) Machine Vision Group, University of Oulu, Oulu, Finland
(b) VTT Research Center, Oulu, Finland

ABSTRACT

Automatic surface inspection has been used in industry to reliably detect all kinds of surface defects and to measure the overall quality of a produced piece. Structured light systems (SLS) are based on reconstructing the 3D information of a selected area by projecting several phase-shifted sinusoidal patterns onto a surface. Due to the high speed of production lines, surface inspection systems require extremely fast imaging methods and a great deal of computational power, and the cost of such systems can easily become considerable. Using standard PCs and Graphics Processing Units (GPUs) for the data-processing tasks facilitates the construction of cost-effective systems. We present a parallel implementation of the required algorithms written in C with CUDA extensions. In our contribution, we describe the challenges of the design on a GPU compared with a traditional CPU implementation. We provide a qualitative evaluation of the results and a comparison of the algorithm's speed on several platforms. The system is able to compute two-megapixel height maps with 100-micrometer spatial resolution in less than 200 ms on a mid-budget laptop. Our GPU implementation runs about ten times faster than our previous C code implementation.

Keywords: topography measurement, GPGPU, structured light, fringe projection

1. INTRODUCTION

Automatic surface inspection has been used in industry to reliably detect all kinds of surface defects and to measure the overall quality of a produced piece. For many applications, the most convenient inspection method is a measurement technique capable of providing exact 3D information. Structured light systems (SLS) are based on reconstructing the 3D information of a selected area by projecting several phase-shifted sinusoidal patterns onto a surface. In an SLS, an imaging device captures high-quality images of the surface and recovers the 3D information with image processing techniques. As an area-measuring technique, fringe projection produces a topography map that is, in principle, faster to obtain and has better lateral resolution than traditional laser-beam distance measurement. Converting the measured raw data into height information is, however, significantly more complicated and computationally heavier than traditional laser triangulation.

Due to the high speed of production lines, surface inspection systems require extremely fast imaging methods and a great deal of computational power, and the cost of such systems can easily become considerable. Using standard PCs and Graphics Processing Units (GPUs) for the data-processing tasks facilitates the construction of cost-effective systems. Using GPUs to perform computationally intensive tasks has become popular in many industrial applications. As GPU computing is well suited to parallel processing, it is also a very interesting option for accelerating image processing. Traditionally, the GPU has mainly been used to accelerate certain parts of the graphics pipeline, such as geometric transformations. General-purpose computing on graphics processing units (GPGPU) is the technique of using a GPU to perform computations that are usually handled by the CPU. The inclusion of programmable stages and high-precision arithmetic in GPUs allows developers to use stream processing on general data.

In this context, we evaluate the use of structured light and sine projections for computing the surface topography of a moving object, measuring its roughness and dimensions without touching it and thereby allowing continuous measurement of a moving target. Our work describes a method for obtaining accurate 3D measurements in real time using an SLS based on an iterative phase measurement algorithm and phase unwrapping. We present a moving-surface topography measurement system based on the projection of white structured light and the use of an 8-bpp grayscale camera. Our contribution includes the description and evaluation of the measurement method. To demonstrate the functionality of our methods, our reconstruction software has been integrated into a prototype constructed at the VTT Oulu Research Center. Figure 1 depicts the measuring prototype attached to the measurement subsystem that integrates our software. Implementing the most expensive parts of the algorithms on a CUDA-capable GPU platform gives the system the desired real-time performance.

(Further author information: send correspondence to Miguel Bordallo López, e-mail: [email protected], telephone: +358 449170541.)

Figure 1. Measuring and processing prototype.

The article is organized as follows. Section 2 describes the principles of 3D surface topography based on structured light systems and the different techniques found in the literature. Section 3 discusses the characteristics of the measurement prototype and the image acquisition system. Section 4 explains the selection and offline implementation of the algorithms used for reconstructing the surface topography and how they fit into the imaging framework. Section 5 presents the tests and qualitative results of our methods on different surfaces. Section 6 presents the implementation of the critical parts of the algorithms, with attention to scalability, quantitative examination, and performance analysis of GPU computing for topography. Finally, Section 7 summarizes the article and outlines future directions.

2. TOPOGRAPHY MEASUREMENT

Phase-shifting methods based on fringe-pattern projections or structured light have been used extensively in topography measurement. They can provide high-resolution height measurements at each pixel. These methods can be used to measure complex surfaces, since they are robust against ambient light and surface reflectivity variations [1]. Various phase-shifting algorithms can be found in the literature [2], including three-step [3, 4], four-step, and least-squares algorithms. In an SLS, the main part is the illuminator, which projects a sine pattern on the moving target in a synchronized manner, allowing the camera system to obtain the input pictures for our reconstruction software. In our design, the illuminator is synchronized to sequentially project several sine patterns onto the target in a non-perpendicular manner. The shape of the sine patterns has been corrected according to the designed optics and geometry. The illuminated area contains a non-patterned section on one side, which is used to compute the displacement of the moving target between two frames.

This scheme is used with pulse-like illumination and a synchronized camera subsystem that takes pictures at a certain known rate. In the input pictures, the target surface carries a projected sine pattern with a different phase in each picture. The equispaced case provides the best performance, e.g. phase steps of 120° for a three-picture scheme. When measuring a moving object with this scheme, an increment in the wave period Δt is proportional to an increment in the longitudinal displacement Δl. Figure 2 shows the measurement geometry of the sine projection method. The sine-pattern illuminator projects a sine onto the top of the measurement surface.

Figure 2. Geometry of the sine projection method.

The camera subsystem obtains the input images that are used to compute the height map. In the simplest case, the camera takes at least three pictures of the moving target, of the form

$$I_k(i) = I_0(i)\left[1 + m(i)\cos(\varphi(i) + \delta_k)\right], \quad k = 1, 2, 3,$$

where $I_1$, $I_2$, $I_3$ are the picture intensity values, $i$ is the camera pixel index, $I_0$ is the original intensity, $m$ is the modulation amplitude, $\varphi$ is the phase to be determined, and $\delta_1$, $\delta_2$, $\delta_3$ are the phase displacements of the pictures. If $\delta_1$, $\delta_2$, $\delta_3$ are known, then $\varphi$ can be expressed as

$$\tan(\varphi) = \frac{(I_3 - I_2)\cos(\delta_1) + (I_1 - I_3)\cos(\delta_2) + (I_2 - I_1)\cos(\delta_3)}{(I_3 - I_2)\sin(\delta_1) + (I_1 - I_3)\sin(\delta_2) + (I_2 - I_1)\sin(\delta_3)}.$$

When $\varphi$ is known, the height of the profile $\Delta h$ can be obtained according to

$$\Delta h = \frac{\Lambda\,\varphi}{2\pi\,\tan(\alpha)},$$

where $\Lambda$ is the period of the sine pattern and $\alpha$ is the arrival angle of the sine with respect to the surface normal.

In practice, $\delta_1$, $\delta_2$, $\delta_3$ are not known beforehand; they have to be determined from the pictures by registering the unprojected part of the images, for example with a correlation method. To determine the height map robustly, the phase average can be computed iteratively, for example with a least-squares minimum-error method.
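The per-pixel arithmetic above maps naturally onto a GPU. The fragment shader below is our own GLSL sketch of the equations, not the authors' CUDA implementation; all sampler and uniform names are assumptions:

    uniform sampler2D img1, img2, img3; // the three phase-shifted pictures
    uniform vec3  delta;                // phase displacements (d1, d2, d3)
    uniform float period;               // sine period Lambda
    uniform float alpha;                // arrival angle w.r.t. the surface normal
    varying vec2  uv;

    void main()
    {
        float I1 = texture2D(img1, uv).r;
        float I2 = texture2D(img2, uv).r;
        float I3 = texture2D(img3, uv).r;
        // tan(phi) = num / den, from the three-step formula above.
        float num = (I3 - I2) * cos(delta.x) + (I1 - I3) * cos(delta.y)
                  + (I2 - I1) * cos(delta.z);
        float den = (I3 - I2) * sin(delta.x) + (I1 - I3) * sin(delta.y)
                  + (I2 - I1) * sin(delta.z);
        float phi = atan(num, den); // wrapped phase in (-pi, pi]
        // Height from phase; phase unwrapping is still required for
        // absolute heights across fringe boundaries.
        float h = period * phi / (2.0 * 3.14159265 * tan(alpha));
        gl_FragColor = vec4(h);
    }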

3. MEASURING PROTOTYPE

To demonstrate the functionality of our measurement methods, they have been incorporated into a prototype developed at the VTT Oulu Research Center. The prototype comprises a sine projector that integrates an LED illuminator with the projecting optics, a fast image-acquisition subsystem, and a motor that moves the target surface. In the prototype, an ATMEL ATmega128 microcontroller synchronously activates the projection LEDs and the camera shutter and regulates the motor that moves the surface sample. Figure 3(a) presents the mechanical structure of the prototype, and Figure 3(b) presents the block diagram of the test environment.


(a) Mechanical structure

(b) Block schematic

Figure 3. Designed and constructed prototype schematics.

The function of the projecting optics is to form the sine pattern of the grid on the surface being measured. The required drawing performance of the projection optics depends on the height resolution being pursued: when aiming for a height resolution below one micrometer, the sine pattern's period must be around 100 micrometers or less, so the projection optics must be able to resolve at least 10 line pairs per millimeter. Table 1 gives the requirement specification of the projection optics in terms of image circle diameter, numerical aperture, depth of field, output illumination, illumination angle, modulation transfer function, geometric distortion, and wavelength band.

Surface Laptop 2

Just a year and a half later, Microsoft released the Surface Laptop 2. As with the Surface Pro 6, there is fairly little that is new, but the updated processor and the black color option arguably make this laptop even more attractive. It is not perfect, though; this version might better be called the Surface Laptop 1.5 than the 2. Here is why.

About this review

This review is based on the black version with 8 GB of RAM, 256 GB of storage, and an Intel Core i5-8250U processor; the business version with Windows 10 Pro costs 114,990 rubles in our store. The cheapest configuration is 95,000 rubles (platinum only), with a Core i5 processor, 8 GB of RAM, and 128 GB of storage. A higher-end model (Core i7, 16 GB of RAM, and 1 TB of storage) will also go on sale soon. Available colors: platinum, black, burgundy, and cobalt.

As with the Surface Pro 6, only two things have changed in the Surface Laptop 2:

  • New 8th-generation Intel processors, moving from dual-core to quad-core for more performance. The new Surface Laptop 2 is much faster than its predecessor (by as much as 85%, according to the manufacturer) thanks to the new Intel Core i5-8250U and Core i7-8650U processors. The cooling system has also been updated to avoid overheating, and it runs fairly quietly.
  • A black color option: the Surface Laptop 2 can now be bought in black, in addition to the three colors introduced in 2017.

Other minor changes include instant wake and sign-in with Windows Hello face recognition, as well as improved SSD performance.

The $799 Surface Laptop model with an Intel Core m3, 4 GB of RAM, and 128 GB of storage is gone. Instead, there is a model with a quad-core Core i5 and the same memory for $999; that is a fairly significant price difference, but performance is noticeably higher.

The model with Intel Iris Plus Graphics 640 is also no longer available, since Intel discontinued that GPU for its 15-watt 8th-generation processors. Instead, the Surface Laptop 2 now comes with Intel UHD Graphics 620. Because of the higher output of the new quad-core processors, the manufacturer has also updated the cooling system, both the heat pipes and the fan. It now runs considerably quieter.

On top of everything else, Microsoft has moved the laptop from Windows 10 S to Windows 10 Home. Previously, users could switch from S mode to Pro for free, but now the upgrade costs $99, or you can choose the business version of the machine, which ships with a Windows 10 Pro license. Note that many of the manufacturer's applications, including ones needed for study, work only with the Pro version.

Surface Laptop 2 specifications

Little has changed since last year, and yet:

Display: 13.5-inch PixelSense
10-point multi-touch
Resolution: 2256 x 1504 (201 ppi)
3:2 aspect ratio
Operating system: Windows 10 Home
Processor: 8th-generation Intel Core i5-8250U or i7-8650U
Storage: 128 GB, 256 GB, 512 GB, or 1 TB SSD
RAM: 8 GB or 16 GB LPDDR3
GPU: Intel UHD Graphics 620
Rear camera: none
Front camera: 720p
with Windows Hello face recognition
Speakers: Omnisonic speakers with Dolby Audio Premium
Ports: full-size USB 3.0, Mini DisplayPort, headphone jack, Surface Connect
Sensors: ambient light sensor
Keyboard: full-size, soft-touch, backlit
1.5 mm key travel
Security: TPM 2.0
Battery life: up to 14.5 hours of use
Stylus: Surface Pen (not included)
Weight: 1.25 kg (i5), 1.28 kg (i7)
Dimensions: 308.02 mm x 223.27 mm x 14.47 mm (0.57 in)

The dual-core processors have given way to quad-core i5 and i7 chips, a shift that began in 2017 when Intel released the new Core i5-8250U and Core i7-8650U. These belong to the Kaby Lake Refresh family of 8th-generation parts, as opposed to the even newer Whiskey Lake chips that are only now starting to appear in laptops. A big advantage of these processors is their drivers, which provide more consistent operation.

Although on paper the move to quad-core processors may sound like a minor detail, in practice the difference in power is enormous. Add to that the excellent battery and the faster SSD, and the Surface Laptop 2 becomes a story of refinement rather than reinvention. It is a pity that the Iris Plus 640 GPU is no longer available, but that is not Microsoft's fault: for the new quad-core processors, Intel offers nothing better than the UHD 620, most likely because of changed thermal constraints. Still, the UHD 620 is a fine option for most users.

Surface Laptop 2 design


Nothing in the design has changed since the first-generation Surface Laptop.

It is still a superb aluminum body: smooth, with no visible joints, screws, or seams. Everything, from the fan grille to the display, looks perfectly symmetrical and flawless. Just like last year.

The only change is the new black color option. The matte body of the Surface Laptop 2, combined with the Alcantara and the black USB and other ports, looks stylish. It is a striking combination, and choosing between black and the classic burgundy (which we also adore) turns out to be very hard. In China, a fifth, pink option called Blush is also available.

Fingerprints are much more visible on the black version than on the lighter models, so you will have to wipe the body down from time to time. Still, since the deck is fabric, most moisture is effectively absorbed.

The Alcantara holds up quite well, although many worried it would start looking greasy with prolonged use. Stains do sometimes show, though, mostly on the platinum models.

We did not run any tests, but most likely, as with the previous model, the aluminum body is quite prone to physical damage such as scratches from metal objects.

Alas, the port selection has not changed either. There are still USB-A and Mini DisplayPort. USB Type-C with 3.1 support for charging, display output, and data transfer has, unfortunately, not arrived, despite being present on the budget Surface Go, which is rather strange for late 2018.

Surface Laptop 2 display

The display, though barely changed, is still one of the best in its class. The Surface Laptop 2 has a PixelSense touch display with a resolution of 2256 x 1504 (201 ppi). The resolution is somewhat lower than on the 12.3-inch Surface Pro 6 (2736 x 1824, 267 ppi) and the 13.5-inch Surface Book 2 (3000 x 2000, 267 ppi), but it is hard to tell with the naked eye.

The bezel around the screen is thin. You can also use the Surface Pen stylus with the Surface Laptop 2, though it is better suited to quickly signing documents or sketching than to serious drawing. The display is excellent but perhaps too glossy. It is grease-resistant, so it should not collect fingerprints. According to Microsoft, it is the thinnest, lowest-parallax touch LCD in any laptop to date.

Gorilla Glass 3 protects the screen from scratches, and every display is calibrated to 100% of the sRGB profile. Our test measured 99% sRGB and 81% AdobeRGB coverage; for comparison, the Surface Pro 6 scored 79% on the latter.

Screen brightness reaches 330 nits in the center, which is above average but below the Surface Pro 6's 470 nits.

Unfortunately, the screen has not gained any contrast since the previous model. The Surface Pro 6 has an enhanced color mode in addition to the sRGB profile; this one does not.

Surface Laptop 2 keyboard and trackpad

The Surface Laptop 2 has a full-size keyboard with 1.5 mm of key travel (versus 1.3 mm on the Surface Pro 6) and three backlight levels. It is a terrific keyboard, especially given the non-metallic keys, the pleasant actuation, and the soft Alcantara deck. Overall, typing on it feels luxurious, which is a rarity among laptops.

Some say the keyboard has also become quieter than last year's, but the original Surface Laptop's keyboard was already incredibly quiet, so there is no noticeable improvement here.

The huge 105 mm x 70 mm glass-covered trackpad is also unchanged from last year. That is fine, because the Surface Laptop's trackpad is one of the best.

Surface Laptop 2 sound

No speaker or microphone grilles are visible anywhere, so the laptop looks remarkably seamless compared with everything else on the market. Microsoft has hidden the speakers under the keyboard, which may seem an odd decision, but it works superbly. The sound is much better than on the Surface Pro and Surface Book thanks to the acoustics. Because the speakers sit in the lower part of the laptop, the sound can resonate more strongly; at certain volumes you can even feel it physically. The sound is clean, loud, and distinct, and with Dolby Audio Premium support and the various adjustments (volume, bass) you can tune it to taste.

In short, these are excellent speakers, and combined with the large 3:2 screen they make the device very comfortable for watching videos and more.

Surface Laptop 2 performance, heat, and noise

The original Surface Laptop drew frequent complaints about its SSD speed, which simply did not match the laptop's class or price.

Microsoft has dealt with that problem. In the new Surface Laptop 2, whose 256 GB storage is a Toshiba KBG30ZPZ256G, read speed has nearly tripled, from 486 MB/s to 1,500 MB/s, and write speed has risen from 244 MB/s to a much more acceptable 811 MB/s.

These figures are still lower than those of most high-end laptops, but they are quite decent. Higher speeds can be expected from the 512 GB and 1 TB models, owing to their NAND configuration.

Benchmark results

CPU

Judging by the benchmark results, the Core i5-8250U performs excellently. In the single-core test the Core i5 trails last year's i7, but not by much. Its multi-core score of 13,233 beats last year's i7 result of 9,535.

Here and below: higher numbers mean better results.

Device  CPU  Single-core  Multi-core
Surface Laptop 2 i5-8250U 4,203 13,233
Surface Laptop i5-7200U 3,725 7,523
Surface Laptop i7-7660U 4,714 9,535
Surface Pro 6 i5-8250U 4,207 13,851
Surface Pro 5 i5-7300U 4,302 8,482
Surface Pro 5 i7-7660U 4,513 9,346
Surface Pro 4 i5-6300U 3,319 6,950
Dell XPS 13 i7-8550U 4,681 14,816

Unfortunately, we did not have a chance to test the new Surface Laptop 2 with the Core i7 processor, but the Core i5-8250U performs excellently and will suit most users.

GPU

Although the Intel UHD 620 is not as powerful as the Iris Plus 640, which has twice the graphics execution units (48 versus 24), it did well in the OpenCL test, scoring higher than its predecessor.

Device  GPU  Score
Surface Laptop 2 UHD 620 35,473
Surface Laptop HD 620 19,256
Surface Laptop Iris 640 31,010
Surface Pro 6 UHD 620 36,283
Surface Pro 5 HD 620 20,688
Surface Pro 5 Iris 640 30,678
Surface Pro 4 HD 520 17,395
Surface Book HD 520 18,197
Surface Book GTX 965M 64,108
Surface Book 2 GTX 1060 138,758

SSD speed has never been a strong suit of the Surface line. The results are usually above average but never top-tier. In 2018 the situation has not changed; compared with last year's model, however, the Surface Laptop 2 has changed for the better. Speeds have tripled, which matters a great deal in everyday use.

Device  Read  Write
Surface Laptop 2 1,509 MB/s 811 MB/s
Surface Laptop 486 MB/s 244 MB/s
Surface Pro 6 1,632 MB/s 814 MB/s
Surface Pro 5 847 MB/s 801 MB/s
Surface Book 1,018 MB/s 967 MB/s

Note that the Surface Laptop 2 speeds were recorded on the 256 GB SSD (Toshiba KBG30ZPZ256G). The 512 GB and 1 TB models will most likely be faster, owing to a larger number of NAND channels.

Overall score
Device  Score
Surface Laptop 2 (i5) 3,451
Surface Laptop (i5) 2,720
Surface Pro 6 (i5) 2,522
Surface Pro 5 (i5) 2,351
Surface Pro 6 (i7) 3,451
Surface Pro 5 (i7) 3,746

Thanks to the significantly improved processor and SSD speed, the laptop turns on instantly. The time it takes to boot and recognize your face is much shorter than on last year's model. Microsoft has clearly worked through a whole set of small details, along with the operating system and drivers, to achieve this effect, and it has paid off.

Although the Surface Laptop 2 is no gaming laptop and does not top the performance charts, all these improvements make it incredibly pleasant to use.

As for heat and noise: the Core i5 model runs quietly and does not overheat, doing even better than the first Surface Laptop. The fan almost never kicked in, and when it did, it was barely audible.

Thermals are comparable to last year's: a maximum of 40 °C on the bottom of the laptop and 38 °C on top after 10 minutes of intensive work. Warm, but not hot.

In light games, the frame rate jumped from 32.4 to 40.1 frames per second.


Surface Laptop 2 battery

Device  Time
Surface Laptop 2 (i5) 6 hours 20 minutes
Surface Laptop (i5) 5 hours 7 minutes
Surface Pro 6 (i5) 5 hours 25 minutes
Surface Pro 5 (i5) 4 hours 30 minutes

As with the Surface Pro 6, battery life has improved on the Surface Laptop 2. Our battery test, which heavily loads the CPU and GPU with a variety of repeating tasks (web browsing, gaming, photo processing, video chat), showed the new model outlasting the original Surface Laptop by an hour.

These results fall short of the 14.5 hours promised by the manufacturer, but in ordinary use the Surface Laptop 2 should last 8 hours or more, perhaps even 10. That is an hour longer than the previous model.

Longer battery life, improved performance, and a near-silent fan all make the Surface Laptop 2 an excellent laptop for both work and play.

Verdict

This is one of the most beautiful, pleasant, and expensive laptops on the market, but it operates in a fiercely competitive field.

Take the Surface Pro: there Microsoft has little real competition; everyone has similar devices, but all of them are much weaker than the highly polished Surface Pro.

With conventional laptops, things are different. Dell, HP, Lenovo, and even Huawei have shown that they can build outstanding devices in terms of hardware, with high performance, that also look great, and all of this at a much lower price. So recommending the Surface Laptop 2 becomes difficult when competitors offer more functional and cheaper machines.

Perhaps the biggest problem is the lack of a USB-C port. It seems foolish to pay $2,700 for a laptop that does not support Thunderbolt 3. That makes buying it unreasonable: it is expensive, has fewer features than other laptops, and demands very careful handling.

And yet, we like it.

The hardest thing is always explaining just how pleasant any device in the Surface line is to use. The keyboard, trackpad, display, weight, build quality, colors, fast boot, polished OS, and classy design: all of it is nearly perfect. Even simply opening and closing this laptop is a joy, to say nothing of the warm, soft Alcantara. Although other laptops can do more for less money, using them does not bring the same delight.

A good move is to pick the Core i5 model with 8 GB of RAM and 256 GB of storage. There is no point in the i7 model unless its extra power is truly essential to you. As for colors, the best choices are black or burgundy, but cobalt and platinum look beautiful too.

4.5 out of 5

Microsoft has built a laptop that is more about artistry than functionality. That balance is not to everyone's taste, but why not have such an option on the market? If something brings you pleasure and delights the eye, that alone is valuable. That is why we are delighted with the Surface Laptop 2: an amazing combination of art and technology for those who want a laptop in the classic sense of the word. Perhaps it is not rational, but love is always like that, even when it seems things could be better.

GLSL Programming/Blender/Lighting of Bumpy Surfaces


This tutorial covers normal mapping.

It’s the first of two tutorials about texturing techniques that go beyond two-dimensional surfaces (or layers of surfaces). In this tutorial, we start with normal mapping, which is a very well established technique to fake the lighting of small bumps and dents — even on coarse polygon meshes. The code of this tutorial is based on the tutorial on smooth specular highlights and the tutorial on textured spheres.

Perceiving Shapes Based on Lighting

The painting by Caravaggio that is depicted to the left is about the incredulity of Saint Thomas, who did not believe in Christ’s resurrection until he put his finger in Christ’s side. The furrowed brows of the apostles not only symbolize this incredulity but clearly convey it by means of a common facial expression. However, why do we know that their foreheads are actually furrowed instead of being painted with some light and dark lines? After all, this is just a flat painting. In fact, viewers intuitively make the assumption that these are furrowed instead of painted brows — even though the painting itself allows for both interpretations. The lesson is: bumps on smooth surfaces can often be convincingly conveyed by the lighting alone without any other cues (shadows, occlusions, parallax effects, stereo, etc.).

Normal Mapping

Normal mapping tries to convey bumps on smooth surfaces (i.e. coarse triangle meshes with interpolated normals) by changing the surface normal vectors according to some virtual bumps. When the lighting is computed with these modified normal vectors, viewers will often perceive the virtual bumps — even though a perfectly flat triangle has been rendered. The illusion can certainly break down (in particular at silhouettes) but in many cases it is very convincing.

More specifically, the normal vectors that represent the virtual bumps are first encoded in a texture image (i.e. a normal map). A fragment shader then looks these vectors up in the texture image and computes the lighting based on them. That's about it. The problem, of course, is the encoding of the normal vectors in a texture image. There are different possibilities, and the fragment shader has to be adapted to the specific encoding that was used to generate the normal map.

Normal Mapping in Blender

Normal maps are supported by Blender; see the description in the Blender 3D: Noob to Pro wikibook. Here, however, we will use the normal map to the left and write a GLSL shader to use it.

For this tutorial, you should use a cube mesh instead of the UV sphere that was used in the tutorial on textured spheres. Apart from that you can follow the same steps to assign a material and the texture image to the object. Note that you should specify a default UV Map in the Properties window > Object Data tab. Furthermore, you should specify Coordinates > UV in the Properties window > Textures tab > Mapping.

When decoding the normal information, it would be best to know how the data was encoded. However, there are not so many choices; thus, even if you don't know how the normal map was encoded, a bit of experimentation can often lead to sufficiently good results. First of all, the RGB components are numbers between 0 and 1; however, they usually represent coordinates between -1 and 1 in a local surface coordinate system (since the vector is normalized, none of the coordinates can be greater than +1 or less than -1). Thus, the mapping from RGB components to the coordinates of the normal vector n = (n_x, n_y, n_z) could be:
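Two commonly used conventions, as a sketch (the exact choice depends on the tool that baked the map), are a direct remapping of all three channels,

    n = (2r - 1, 2g - 1, 2b - 1),

or a remapping of only the first two channels, with the third coordinate reconstructed from them:

    n = (2r - 1, 2g - 1, sqrt(1 - (2r - 1)^2 - (2g - 1)^2)).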

If in doubt, the latter decoding should be chosen because it will never generate surface normals that point inwards. Furthermore, it is often necessary to normalize the resulting vector.

An implementation in a fragment shader that computes the normalized vector n = (n_x, n_y, n_z) in the variable localCoords could be:
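A minimal sketch, assuming the latter decoding from above and illustrative names for the sampler and the interpolated texture coordinates:

    // Decode the normal map: remap r and g from [0, 1] to [-1, 1] and
    // reconstruct z so that the vector has unit length and points outwards.
    vec4 encodedNormal = texture2D(normalMap, texCoords.xy);
    vec3 localCoords = vec3(2.0 * encodedNormal.rg - vec2(1.0), 0.0);
    localCoords.z = sqrt(max(0.0, 1.0 - dot(localCoords, localCoords)));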

Note that the normal vector N is transformed with the transpose of the inverse model-view matrix from object space to view space (because it is orthogonal to a surface; see “Applying Matrix Transformations”) while the tangent vector T specifies a direction between points on a surface and is therefore transformed with the model-view matrix. The binormal vector B represents a third class of vectors which are transformed differently. (If you really want to know: the skew-symmetric matrix B corresponding to “B×” is transformed like a quadratic form.) Thus, the best choice is to first transform N and T to view space, and then to compute B in view space using the cross product of the transformed vectors.

Also note that the configuration of these axes depends on the tangent data that is provided, the encoding of the normal map, and the texture coordinates. However, the axes are practically always orthogonal and a bluish tint of the normal map indicates that the blue component is in the direction of the interpolated normal vector.

With the normalized directions T, B, and N in view space, we can easily form a matrix that maps any normal vector n of the normal map from the local surface coordinate system to view space because the columns of such a matrix are just the vectors of the axes; thus, the 3×3 matrix for the mapping of n to view space is:
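    | T_x  B_x  N_x |
    | T_y  B_y  N_y |
    | T_z  B_z  N_z |

with the view-space vectors T, B, and N as its columns.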

These calculations are performed by the vertex shader, for example this way:
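The following vertex shader is a minimal sketch, assuming the fixed-function matrix built-ins available to Blender's GLSL materials and a per-vertex tangent attribute; the variable names are illustrative:

    attribute vec4 tangent;          // per-vertex tangent, requested from Blender
    varying vec4 texCoords;          // UV coordinates for the normal map lookup
    varying mat3 localSurface2View;  // columns are T, B, N in view space

    void main()
    {
        // N transforms with the transpose of the inverse model-view matrix
        // (gl_NormalMatrix); T transforms with the model-view matrix itself.
        vec3 N = normalize(gl_NormalMatrix * gl_Normal);
        vec3 T = normalize((gl_ModelViewMatrix * vec4(tangent.xyz, 0.0)).xyz);
        // B is rebuilt in view space; the sign may need flipping depending on
        // the conventions of the normal map and the tangent data.
        vec3 B = normalize(cross(N, T));
        localSurface2View = mat3(T, B, N);
        texCoords = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }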

In the fragment shader, we multiply this matrix with n (i.e. localCoords). For example, with this line:
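As a sketch, using the names introduced above:

    // Map the decoded normal from the local surface coordinate system
    // to view space and renormalize it for the lighting computation.
    vec3 normalDirection = normalize(localSurface2View * localCoords);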

With the new normal vector in view space, we can compute the lighting as in the tutorial on smooth specular highlights.

Complete Shader Code

The complete fragment shader simply integrates all the snippets and the per-pixel lighting from the tutorial on smooth specular highlights. Also, we have to request tangent attributes and set the texture sampler (make sure that the normal map is in the first position of the list of textures, or adapt the second argument of the call to setSampler). The accompanying Python script follows the same structure as in the tutorial on textured spheres.

Summary

Congratulations! You finished this tutorial! We have looked at:

  • How human perception of shapes often relies on lighting.
  • What normal mapping is.
  • How to decode common normal maps.
  • How a fragment shader can decode a normal map and use it for per-pixel lighting.

Further Reading

If you still want to know more

  • about texture mapping (including tiling and offsetting), you should read the tutorial on textured spheres.
  • about per-pixel lighting with the Phong reflection model, you should read the tutorial on smooth specular highlights.
  • about transforming normal vectors, you should read “Applying Matrix Transformations”.
  • about normal mapping, you could read Mark J. Kilgard: “A Practical and Robust Bump-mapping Technique for Today’s GPUs”, GDC 2000: Advanced OpenGL Game Development, which is available online.