HTU - Unity for NET Professionals. Graphics Pipeline. - Duration: 5:54.

Hi everyone, and welcome to the first Unity lesson for .NET professionals.
This lesson is dedicated to computer graphics, and here I will try to explain how geometry gets rendered.
So, here we have Autodesk Maya, and let's create a simple cube.
It consists of vertices, edges and triangles.
We cannot see the triangles right now because Maya simplifies the geometry in order to make it easier to work with.
So let's make Maya display them.
The basic building block of any geometry is the vertex.
A vertex is a simple structure that has several attributes.
The first is the vertex position, which defines the position of the vertex in the local space of this model.
The second attribute is color, and you can define in your shaders how to use it.
The third vertex attribute is the UV coordinate.
UVs are used to project a two-dimensional texture onto three-dimensional geometry.
Each vertex has its own coordinate in UV space, which is bounded by a square with a side of one unit.
The fourth is the normal, which defines the orientation of the vertex in space and is used, for example, to determine the visibility of triangles (so-called back-face culling).
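Taken together, the four attributes above can be sketched as a plain vertex structure. This is a minimal illustration, not any engine's actual vertex layout; the field names are my own.

```python
from dataclasses import dataclass

# A sketch of the four vertex attributes described above.
# The names and types are illustrative, not Unity's or Maya's real layout.

@dataclass
class Vertex:
    position: tuple  # (x, y, z) in the model's local space
    color: tuple     # (r, g, b), used however the shader decides
    uv: tuple        # (u, v) inside the unit square of texture space
    normal: tuple    # (nx, ny, nz) orientation, used e.g. for back-face culling

v = Vertex(position=(0, 0, 0), color=(1, 0, 0), uv=(0.0, 0.0), normal=(0, 0, 1))
print(v.uv)  # (0.0, 0.0)
```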
So, the geometry has to pass through several graphics pipeline stages in order to appear on the user's screen.
For a deeper understanding of each step, please refer to the Render Hell book by Simon Trumpler, linked below.
Within this lesson I will cover the most important parts of the rendering process.
The first step is Input Assembly.
At this step the CPU combines vertices into vertex buffers in a specific order, so the graphics card can understand how they are connected to each other to form triangles.
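The idea of input assembly can be sketched with two plain lists: a vertex buffer of unique positions and an index buffer that says which three vertices form each triangle. The data here is a made-up quad, just to show the mechanism.

```python
# A minimal sketch of input assembly: the vertex buffer stores each unique
# vertex once, and the index buffer tells the GPU which three vertices
# form each triangle, so shared vertices are not duplicated.

vertex_buffer = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# Two triangles sharing vertices 0 and 2 along the diagonal form a quad.
index_buffer = [0, 1, 2,  0, 2, 3]

triangles = [
    tuple(vertex_buffer[i] for i in index_buffer[t:t + 3])
    for t in range(0, len(index_buffer), 3)
]
print(len(triangles))  # 2
```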
Then comes the second stage: vertex shading.
At this stage vertex positions are transformed from local space into so-called clip space, the space of the camera frustum, as can be seen in the picture.
The Model-View-Projection, or MVP, matrix is used to transform the vertices.
For example, Unity does this in the vertex shader by multiplying the vertex position by the built-in UNITY_MATRIX_MVP variable.
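The transform itself is just a 4x4 matrix multiplied by the vertex position written as a homogeneous vec4. Here is a minimal sketch; a pure translation matrix stands in for a real MVP matrix, which would also fold in the view and projection transforms.

```python
# A sketch of the vertex-shading transform: multiplying a vertex position
# (as a homogeneous vec4, w = 1) by a 4x4 matrix, the way a vertex shader
# multiplies by UNITY_MATRIX_MVP. Matrices are row-major lists of rows.

def mat_vec_mul(m, v):
    """Multiply a 4x4 matrix by a 4-component vector."""
    return [sum(m[row][col] * v[col] for col in range(4)) for row in range(4)]

# A pure translation by (2, 3, 4) stands in for a real MVP matrix here.
mvp = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]

local_pos = [1.0, 1.0, 1.0, 1.0]  # vertex position in local space
clip_pos = mat_vec_mul(mvp, local_pos)
print(clip_pos)  # [3.0, 4.0, 5.0, 1.0]
```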
Then, vertex attributes have to be interpolated.
This is done so the graphics card can understand what's going on between the vertices, because before interpolation all the GPU has is a set of separate vertex structures.
So let's look at how interpolation works, using colors and normals as examples.
First, let's paint some vertices in different colors.
This one will be red, this one is blue, and another one is yellow.
This is how the cube looks after color interpolation.
Anyway, in order to see the vertex colors of your mesh, the shader has to take them into account.
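The blending of vertex colors can be sketched with linear interpolation along one edge, which is the one-dimensional case of what the rasterizer does across a whole triangle with barycentric weights. The colors here match the red and blue vertices painted above.

```python
# A minimal sketch of attribute interpolation: linearly blending two vertex
# colors along an edge. Across a full triangle the GPU does the same thing
# with barycentric weights; an edge is the 1-D case.

def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

red = (1.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0)

print(lerp_color(red, blue, 0.5))  # (0.5, 0.0, 0.5), halfway along the edge
```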
Now, let's look at normal interpolation.
As you can notice, each vertex here has three normals.
But that is not possible since every vertex can have only one.
What actually happens is that Maya makes three copies of the vertex so they can have different normals, and displays them as one.
See this sharp edge here?
This edge is the reason for such manipulations.
Let me show you what happens if we merge those "virtual" vertices.
Here we have softened all the cube's edges, so Maya merged every three virtual vertices into one and averaged their normals.
You can see how Maya tries to shade the cube according to the new normals, which are now smoothly interpolated along the edges from one vertex to another, so the cube now behaves more like a sphere.
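The averaging step can be sketched directly: take the three face normals that meet at a hard-edged cube corner, sum them, and renormalize to unit length. This is a minimal illustration of the idea, not Maya's actual algorithm.

```python
import math

# A sketch of what softening an edge does: the three face normals meeting
# at a cube corner are averaged into one vertex normal, then renormalized
# so the result is unit length again.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def average_normals(normals):
    summed = tuple(sum(column) for column in zip(*normals))
    return normalize(summed)

# The three axis-aligned face normals meeting at the (+x, +y, +z) corner.
corner = average_normals([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(corner)  # a unit vector pointing diagonally out of the corner
```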
In the Hypershade window we can see that our cube uses the Lambertian lighting model, which shades the geometry according to how its normals are oriented relative to the light direction.
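The Lambertian model reduces to one dot product: the diffuse intensity is the cosine of the angle between the surface normal and the direction toward the light, clamped at zero for surfaces facing away. A minimal sketch, assuming both vectors are unit length:

```python
# A minimal sketch of Lambertian shading: intensity = max(N . L, 0),
# where N is the surface normal and L the direction toward the light,
# both assumed to be unit vectors.

def lambert(normal, light_dir):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0)

print(lambert((0, 1, 0), (0, 1, 0)))   # 1.0: facing the light head-on
print(lambert((0, 1, 0), (1, 0, 0)))   # 0.0: light grazes the surface
print(lambert((0, 1, 0), (0, -1, 0)))  # 0.0: facing away, clamped to zero
```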
Let's get it back to normal.
So you can see how the colors and the normals were interpolated, and now the GPU has a little more information about how to display the geometry.
Next comes the fragment shading stage.
The geometry is rasterized into fragments, and the GPU then runs the fragment shader to shade each one of them.
Here the lighting and textures are combined to calculate the final color of each fragment.
Finally, the rasterized geometry is sent to the frame buffer, and after some final calculations it gets to the screen.
There are also additional, optional stages in the graphics pipeline.
For example, the tessellation stage generates more vertices than were originally sent to the GPU, producing smoother and more detailed geometry without increasing the amount of memory used by the mesh.
So these are the basics of how the graphics pipeline works.
Now I hope you have a deeper understanding of the geometry you're going to work with: what parts it consists of and how it is transformed on its way from the model to the screen.
Thank you for your attention. Don't forget to leave your questions and comments, and of course subscribe to my channel.
See you soon in the next lessons.