“DirectX will only draw polygons with an [X,Y] from [-1,-1] to [1,1] and with a Z from 0 to 1.”, What...











I've been following the D3D11 tutorials on Rastertek to expand my knowledge of rendering, which I'll need since I want to build my own engine as a hobby.



After finally getting a model rendered to the screen in 3D space, and being able to move and rotate both the camera and the model, I wanted to import custom FBX models instead, which isn't covered in the tutorial. I found this tutorial online on exactly that, which I followed alongside the FBX SDK's documentation.



Loading and rendering FBX models now works; however, the vertices are quite messed up on some models, while others render perfectly. Then I read the following in the tutorial, and I quote:



"Finally, if you don’t use the shitty-bell FBX attached up at the top of this post, take note. In its default state, DirectX will only draw polygons with an [X,Y] from [-1,-1] to [1,1] and with a Z from 0 to 1. Look through the vertices in your FBX and make sure everything is in that range!"



As you can guess, that "shitty bell" draws perfectly. Anyway, what does it mean that DirectX will only draw polygons within that range, and how do I work around that when modeling?










Tags: rendering directx11 fbx






asked Dec 4 at 16:09 by larssonmartin

  • I'm not familiar with DirectX, but having used other libraries, did you try to draw a triangle with one of its vertices at something like (2, 0, 0)? The limits [-1, -1] and [1, 1] look like homogeneous coordinates, but those shouldn't be accessible to someone drawing a mesh; they should be handled behind the scenes. Is there a chance you misunderstood something?
    – TomTsagk
    Dec 4 at 16:20










  • @TomTsagk I could definitely be misunderstanding something, although I haven't tried to manually draw something by feeding in a hand-built vertex and index buffer (well, I have before, but that's got nothing to do with this). They're not accessible either; I just didn't understand what they even were. At one point I thought I had to alter the way Blender exports the FBX for it to work with D3D, but I got lost in my own thoughts, and after some googling I gave up and asked my question here instead. That said, read the answer Josh posted; it answers my thoughts very well!
    – larssonmartin
    Dec 4 at 17:30


















2 Answers
While technically true, the statement in that tutorial is phrased somewhat misleadingly and in a bit of an alarmist fashion.



Generally speaking, you do not need to worry about this.




Anyway, what does it mean that it'll draw polygons within that range?




Model vertices go through several different stages as they pass through the graphics pipeline. Each of these stages has its own coordinate system, and a vertex passes between those coordinate systems by way of transformation matrices.



The tutorial is referring to one of the final coordinate spaces vertices occupy just prior to rasterization, called homogeneous screen space (though it also goes by other names). The purpose of this coordinate space is to scale all the vertex data down into a known, normalized range so that it can be mapped to the pixel space of the target window. Typical ranges for the axes of this space in various graphics APIs are -1 to 1 or 0 to 1, because both are simple to map to the 0-to-some-large-integer pixel space of the target window.



Normally you configure the model, view and projection matrices (as well as the viewport transform, if needed) appropriately for your scene. The tutorial you linked omits these steps, perhaps for brevity. If you don't explicitly transform the vertex data in your shaders or the like, the net result is as if all your transformations were identity matrices: everything the pipeline does automatically ends up essentially being a no-op (except the viewport transform), and you effectively have a visible range that matches the ranges of homogeneous screen space.
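For illustration, here is a minimal sketch of what that configuration might look like with DirectXMath (the math library used by newer D3D11 samples; the Rastertek series uses its own equivalent wrappers). The object placement, camera position and clip planes are made-up values, not something from the tutorial:

    // Sketch only: building world, view and projection matrices so that model-space
    // coordinates of any size end up inside the canonical range before rasterization.
    #include <DirectXMath.h>
    using namespace DirectX;

    XMMATRIX BuildWorldViewProjection(float aspectRatio)
    {
        // World: place the object 10 units in front of the world origin and shrink it a bit.
        XMMATRIX world = XMMatrixScaling(0.5f, 0.5f, 0.5f) *
                         XMMatrixTranslation(0.0f, 0.0f, 10.0f);

        // View: a camera at (0, 2, -5) looking at the world origin, with +Y up.
        XMVECTOR eye    = XMVectorSet(0.0f, 2.0f, -5.0f, 1.0f);
        XMVECTOR target = XMVectorSet(0.0f, 0.0f,  0.0f, 1.0f);
        XMVECTOR up     = XMVectorSet(0.0f, 1.0f,  0.0f, 0.0f);
        XMMATRIX view   = XMMatrixLookAtLH(eye, target, up);

        // Projection: maps the field of view into X,Y = [-1, 1] and the near/far
        // range into Z = [0, 1] after the perspective divide.
        XMMATRIX projection = XMMatrixPerspectiveFovLH(XM_PIDIV4, aspectRatio, 0.1f, 100.0f);

        // The vertex shader multiplies each position by this combined matrix
        // (transpose it before uploading if your HLSL expects column-major data).
        return world * view * projection;
    }

With matrices like these in place, vertices that sit far outside [-1, 1] in model space still land inside the canonical range by the time they reach the rasterizer.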




how do I work around that when modeling?




You don't. Model your objects however you need, and then adapt Direct3D to those needs, as above, rather than the other way around. While ultimately it is true that your coordinates will end up in some normalized, -1-to-1 type of form just prior to drawing, you can configure D3D to transform and scale any coordinate range you want into that penultimate range.






answered Dec 4 at 16:39 by Josh (edited Dec 4 at 16:44)

  • This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
    – Chuck Walbourn
    Dec 4 at 17:17












  • @ChuckWalbourn Thanks for the article, I'll for sure give it a read.
    – larssonmartin
    Dec 4 at 17:32






  • TL;DR: when the tutorial author writes "In its default state...", he means "if you don't set the scene/camera transform", which hopefully gets explained in later parts.
    – IMil
    Dec 4 at 23:00






  • Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
    – A C
    Dec 5 at 2:20












  • @IMil Alright, which I do in my vertex shader based on the vertex positions, got it, thanks!
    – larssonmartin
    Dec 5 at 6:35


















The range [-1 ; 1] x [-1 ; 1] x [0 ; 1] mentioned in the tutorial refers to the canonical view volume. It is the final coordinate space that vertex data gets mapped to before everything is rasterized to your screen. To understand exactly what this means, it helps to take a look at what a rendering pipeline typically looks like.



Coordinate spaces



A coordinate space refers to the coordinate system you use to define the positions of vertices within it. As a real-life example, imagine you have a desk with a keyboard on top of it and you want to express the position of the keyboard. You could define the front left corner of the desk to be position (0, 0, 0) (this is called the origin), the X-axis to run along the length of the desk (left to right), the Y-axis to run along the depth of the desk (near to far), and the Z-axis to point vertically upward from the desk. If your keyboard is located 50 centimeters to the right of this corner, and 10 centimeters away from the nearest edge, it is located at position (50, 10, 0).



Alternatively, you could define the corner of your room to be position (0, 0, 0). Let's say your desk is located 200 cm from the left wall and 300 cm from the front wall, and the desk is 70 cm in height. In this case your desk's top is located at position (200, 300, 70), and your keyboard is located at (250, 310, 70).
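As a small sketch of the idea (plain C++, with a made-up Vec3 type just for illustration), switching between these two spaces is nothing more than adding the desk corner's position in the room:

    struct Vec3 { float x, y, z; };

    Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    // The keyboard expressed in "desk space", and the desk corner expressed in "room space".
    const Vec3 keyboardInDeskSpace = { 50.0f, 10.0f, 0.0f };
    const Vec3 deskCornerInRoom    = { 200.0f, 300.0f, 70.0f };

    // Changing coordinate space here is just a translation: (250, 310, 70) in room space.
    const Vec3 keyboardInRoomSpace = Add(keyboardInDeskSpace, deskCornerInRoom);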



These are two examples of different coordinate spaces, and how they affect the position coordinates of the objects within them. Similarly, vertex data in a 3D rendering pipeline is transformed across various coordinate spaces before it ends up on your screen.



Coordinate spaces in a 3D rendering pipeline



Individual objects are modeled in 3D software such as Autodesk Maya or Blender. They are often modeled centered around the origin. This coordinate space is called model space. If you were to render several objects in model space together, they would all be piled up centered around the origin.



Instead, a new coordinate space called world space is defined. Think of this as your game world, with the origin being the center of the world. When transforming model space coordinates to world space coordinates, translations, rotations, scaling and other operations are performed. For example, if you want to render a keyboard at position (250, 310, 70) in your world, you would offset all its vertices by this vector. Mathematically speaking, this is done using a transformation matrix. You can apply a different transformation to each individual object to place objects in your game world.
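A minimal sketch of that model-to-world step, assuming DirectXMath (the function name and the choice to use only a translation are illustrative):

    #include <DirectXMath.h>
    using namespace DirectX;

    // Offsets every model-space vertex by (250, 310, 70); rotation and scaling
    // matrices could be multiplied into 'world' as well.
    XMVECTOR ModelToWorld(XMVECTOR modelSpacePosition)
    {
        XMMATRIX world = XMMatrixTranslation(250.0f, 310.0f, 70.0f);
        return XMVector3Transform(modelSpacePosition, world); // treats the position as (x, y, z, 1)
    }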



You now have a big pile of vertices where every object is placed in the correct position. You now need to define what part of the world you want to look at. This is done by moving all vertex data to camera space. An often-employed convention is to have the camera positioned at the origin of camera space, to have it look towards the positive Z-axis (the eye vector) and to have the positive Y-axis point upward (the up vector). When converting from world space to camera space, we thus want to move and rotate all vertex data so that our objects of focus are near the origin and have positive Z-coordinates.
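A sketch of that world-to-camera step with DirectXMath, using made-up camera values; XMMatrixLookAtLH builds exactly this kind of matrix for a left-handed (D3D-style) setup:

    #include <DirectXMath.h>
    using namespace DirectX;

    XMMATRIX BuildViewMatrix()
    {
        XMVECTOR cameraPosition = XMVectorSet(0.0f, 2.0f, -10.0f, 1.0f); // where the camera sits in world space
        XMVECTOR lookAtTarget   = XMVectorSet(0.0f, 0.0f,   0.0f, 1.0f); // what it looks at
        XMVECTOR upDirection    = XMVectorSet(0.0f, 1.0f,   0.0f, 0.0f);

        // Objects in front of the camera end up near the camera-space origin
        // with positive Z, matching the convention described above.
        return XMMatrixLookAtLH(cameraPosition, lookAtTarget, upDirection);
    }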



When you look at objects in real life, you will notice a phenomenon called foreshortening. This means objects near you appear bigger (i.e. take up more of your view), while objects far away from you appear smaller (i.e. take up less of your view). We simulate this by applying a perspective transformation, which moves our camera space vertex coordinates to projected space.
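A sketch of that perspective transformation with DirectXMath; the field of view and clip plane distances are made-up values:

    #include <DirectXMath.h>
    using namespace DirectX;

    XMMATRIX BuildProjectionMatrix(float viewportWidth, float viewportHeight)
    {
        const float fieldOfView = XM_PIDIV4;                      // 45-degree vertical FOV
        const float aspectRatio = viewportWidth / viewportHeight;
        const float nearZ       = 0.1f;                           // closest visible distance
        const float farZ        = 1000.0f;                        // farthest visible distance

        // After the GPU divides by w, everything between nearZ and farZ that lies
        // inside the field of view ends up in [-1,1] x [-1,1] x [0,1].
        return XMMatrixPerspectiveFovLH(fieldOfView, aspectRatio, nearZ, farZ);
    }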



Finally, note that we have 3D vertex data that needs to be rendered on a 2D screen (e.g. 1920 by 1080 pixels). The projected vertex data is therefore transformed to screen space. Your graphics API takes care of rendering the screen space vertex data to your screen. The process of converting vertex data to pixels on your screen is called rasterization. But which vertex coordinates end up where on your screen? This is where the canonical view volume comes into play.



DirectX specifies that the X-coordinate of the vertex is mapped to the horizontal position on the screen: the range [-1 ; 1] is mapped to [0 ; 1920] (in the case of a 1920 x 1080 screen). The Y-coordinate of the vertex is mapped to the vertical position on the screen: the range [-1 ; 1] is mapped to [0 ; 1080]. The Z-coordinate is used to determine which vertices are rendered in front of or behind each other. Specifically, vertices near 0 are close to the camera and are rendered in front, while vertices near 1 are far away from the camera and are rendered behind. Vertices with a Z-coordinate smaller than 0 are behind the camera and are thus clipped, i.e. not rendered. Vertices with a Z-coordinate larger than 1 are too far away and are clipped as well.
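As a small sketch of that last mapping (this is what the viewport transform does; the 1920 x 1080 size is just the example above):

    struct PixelPosition { float x, y; };

    // Maps a canonical-view-volume X/Y (each in [-1, 1]) to pixel coordinates,
    // with (0, 0) at the top-left of a 1920 x 1080 back buffer.
    PixelPosition CanonicalToPixels(float canonicalX, float canonicalY)
    {
        const float width  = 1920.0f;
        const float height = 1080.0f;

        PixelPosition p;
        p.x = (canonicalX * 0.5f + 0.5f) * width;           // -1 -> 0, +1 -> 1920
        p.y = (1.0f - (canonicalY * 0.5f + 0.5f)) * height; // +1 -> 0 (top), -1 -> 1080 (bottom)
        return p;
    }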



Your perspective transform thus needs to move all vertices you want visible on your screen inside this canonical view volume. In the tutorial you followed, all of these transformations are omitted to keep the tutorial simple. You are thus directly rendering to the canonical view volume. This is why the author says anything outside of range [-1 ; 1] x [-1 ; 1] x [0 ; 1] is not visible.



References



For an article with images to illustrate these various coordinate spaces, see World, View and Projection Transformation Matrices by CodingLabs.






  • I've read a lot of the math and theory behind rendering pipelines, both in school courses and at home out of curiosity, but your simple explanation gave me more than long articles have; it's easier to dive deep into articles once you understand the basic concepts. I had never heard of the canonical view volume before, which explains my confusion about the small range for the vertices. Thanks for taking the time to write this!
    – larssonmartin
    Dec 5 at 6:33











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["\$", "\$"]]);
});
});
}, "mathjax-editing");

StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "53"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fgamedev.stackexchange.com%2fquestions%2f165895%2fdirectx-will-only-draw-polygons-with-an-x-y-from-1-1-to-1-1-and-with-a%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
16
down vote













While technically true, the statement in that tutorial is phrased somewhat misleadingly and in a bit of an alarmist fashion.



Generally speaking you do not need to worry about this.




Anyway, what does it mean that it'll draw polygons within that range?




Model vertices in the graphics pipeline go through several different stages as they pass through the graphics pipeline. Each of these stages is its own coordinate system, and a vertex passes between those coordinate systems by way of transformation matrices.



This tutorial is referring to one of the final stages in the pipeline that vertices exist in just prior to rasterization, called homogeneous screen space (but note that it also has other names). The purpose of this coordinate space is to scale all the vertex data down into a known, normalized range so that it can be mapped to the pixel space of the target window. Typical ranges for axes in this space in various graphics APIs are -1 to 1 or 0 to 1, because both are simple to then map to the 0-to-some-large-integer pixel space of the target window.



Normally you normally configure the model, view and projection matrices (as well as the viewport transform if needed) appropriately for your scene. The tutorial you linked omits these steps, perhaps for brevity. If you don't explicitly do any transformation of vertex data in your shaders or the like, the net result is as if all your transformations were identity matrices, which means all transformations that do end up done by the pipeline automatically also end up essentially be no-ops (except the viewport transform) and you do effectively have a visible range that matches the ranges of homogeneous screen space.




how do I work around that when modeling?




You don't. Model your objects however you need, and then adapt Direct3D to those needs, as above, rather than the other way around. While ultimately it is true that your coordinates will end up in some normalized, -1-to-1 type of form just prior to drawing, you can configure D3D to transform and scale any coordinate range you want into that penultimate range.






share|improve this answer



















  • 11




    This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
    – Chuck Walbourn
    Dec 4 at 17:17












  • @ChuckWalbourn Thanks for the article, I'll for sure give it a read.
    – larssonmartin
    Dec 4 at 17:32






  • 2




    TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
    – IMil
    Dec 4 at 23:00






  • 13




    Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
    – A C
    Dec 5 at 2:20












  • @lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
    – larssonmartin
    Dec 5 at 6:35















up vote
16
down vote













While technically true, the statement in that tutorial is phrased somewhat misleadingly and in a bit of an alarmist fashion.



Generally speaking you do not need to worry about this.




Anyway, what does it mean that it'll draw polygons within that range?




Model vertices in the graphics pipeline go through several different stages as they pass through the graphics pipeline. Each of these stages is its own coordinate system, and a vertex passes between those coordinate systems by way of transformation matrices.



This tutorial is referring to one of the final stages in the pipeline that vertices exist in just prior to rasterization, called homogeneous screen space (but note that it also has other names). The purpose of this coordinate space is to scale all the vertex data down into a known, normalized range so that it can be mapped to the pixel space of the target window. Typical ranges for axes in this space in various graphics APIs are -1 to 1 or 0 to 1, because both are simple to then map to the 0-to-some-large-integer pixel space of the target window.



Normally you normally configure the model, view and projection matrices (as well as the viewport transform if needed) appropriately for your scene. The tutorial you linked omits these steps, perhaps for brevity. If you don't explicitly do any transformation of vertex data in your shaders or the like, the net result is as if all your transformations were identity matrices, which means all transformations that do end up done by the pipeline automatically also end up essentially be no-ops (except the viewport transform) and you do effectively have a visible range that matches the ranges of homogeneous screen space.




how do I work around that when modeling?




You don't. Model your objects however you need, and then adapt Direct3D to those needs, as above, rather than the other way around. While ultimately it is true that your coordinates will end up in some normalized, -1-to-1 type of form just prior to drawing, you can configure D3D to transform and scale any coordinate range you want into that penultimate range.






share|improve this answer



















  • 11




    This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
    – Chuck Walbourn
    Dec 4 at 17:17












  • @ChuckWalbourn Thanks for the article, I'll for sure give it a read.
    – larssonmartin
    Dec 4 at 17:32






  • 2




    TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
    – IMil
    Dec 4 at 23:00






  • 13




    Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
    – A C
    Dec 5 at 2:20












  • @lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
    – larssonmartin
    Dec 5 at 6:35













up vote
16
down vote










up vote
16
down vote









While technically true, the statement in that tutorial is phrased somewhat misleadingly and in a bit of an alarmist fashion.



Generally speaking you do not need to worry about this.




Anyway, what does it mean that it'll draw polygons within that range?




Model vertices in the graphics pipeline go through several different stages as they pass through the graphics pipeline. Each of these stages is its own coordinate system, and a vertex passes between those coordinate systems by way of transformation matrices.



This tutorial is referring to one of the final stages in the pipeline that vertices exist in just prior to rasterization, called homogeneous screen space (but note that it also has other names). The purpose of this coordinate space is to scale all the vertex data down into a known, normalized range so that it can be mapped to the pixel space of the target window. Typical ranges for axes in this space in various graphics APIs are -1 to 1 or 0 to 1, because both are simple to then map to the 0-to-some-large-integer pixel space of the target window.



Normally you normally configure the model, view and projection matrices (as well as the viewport transform if needed) appropriately for your scene. The tutorial you linked omits these steps, perhaps for brevity. If you don't explicitly do any transformation of vertex data in your shaders or the like, the net result is as if all your transformations were identity matrices, which means all transformations that do end up done by the pipeline automatically also end up essentially be no-ops (except the viewport transform) and you do effectively have a visible range that matches the ranges of homogeneous screen space.




how do I work around that when modeling?




You don't. Model your objects however you need, and then adapt Direct3D to those needs, as above, rather than the other way around. While ultimately it is true that your coordinates will end up in some normalized, -1-to-1 type of form just prior to drawing, you can configure D3D to transform and scale any coordinate range you want into that penultimate range.






share|improve this answer














While technically true, the statement in that tutorial is phrased somewhat misleadingly and in a bit of an alarmist fashion.



Generally speaking you do not need to worry about this.




Anyway, what does it mean that it'll draw polygons within that range?




Model vertices in the graphics pipeline go through several different stages as they pass through the graphics pipeline. Each of these stages is its own coordinate system, and a vertex passes between those coordinate systems by way of transformation matrices.



This tutorial is referring to one of the final stages in the pipeline that vertices exist in just prior to rasterization, called homogeneous screen space (but note that it also has other names). The purpose of this coordinate space is to scale all the vertex data down into a known, normalized range so that it can be mapped to the pixel space of the target window. Typical ranges for axes in this space in various graphics APIs are -1 to 1 or 0 to 1, because both are simple to then map to the 0-to-some-large-integer pixel space of the target window.



Normally you normally configure the model, view and projection matrices (as well as the viewport transform if needed) appropriately for your scene. The tutorial you linked omits these steps, perhaps for brevity. If you don't explicitly do any transformation of vertex data in your shaders or the like, the net result is as if all your transformations were identity matrices, which means all transformations that do end up done by the pipeline automatically also end up essentially be no-ops (except the viewport transform) and you do effectively have a visible range that matches the ranges of homogeneous screen space.




how do I work around that when modeling?




You don't. Model your objects however you need, and then adapt Direct3D to those needs, as above, rather than the other way around. While ultimately it is true that your coordinates will end up in some normalized, -1-to-1 type of form just prior to drawing, you can configure D3D to transform and scale any coordinate range you want into that penultimate range.







share|improve this answer














share|improve this answer



share|improve this answer








edited Dec 4 at 16:44

























answered Dec 4 at 16:39









Josh

91.6k16205322




91.6k16205322








  • 11




    This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
    – Chuck Walbourn
    Dec 4 at 17:17












  • @ChuckWalbourn Thanks for the article, I'll for sure give it a read.
    – larssonmartin
    Dec 4 at 17:32






  • 2




    TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
    – IMil
    Dec 4 at 23:00






  • 13




    Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
    – A C
    Dec 5 at 2:20












  • @lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
    – larssonmartin
    Dec 5 at 6:35














  • 11




    This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
    – Chuck Walbourn
    Dec 4 at 17:17












  • @ChuckWalbourn Thanks for the article, I'll for sure give it a read.
    – larssonmartin
    Dec 4 at 17:32






  • 2




    TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
    – IMil
    Dec 4 at 23:00






  • 13




    Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
    – A C
    Dec 5 at 2:20












  • @lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
    – larssonmartin
    Dec 5 at 6:35








11




11




This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
– Chuck Walbourn
Dec 4 at 17:17






This is also not at all unique to DirectX. It's true for OpenGL and other 3D graphics APIs. You might find this older article useful. The API instructions are out of date because they are Direct3D 9, but the concepts are all correct.
– Chuck Walbourn
Dec 4 at 17:17














@ChuckWalbourn Thanks for the article, I'll for sure give it a read.
– larssonmartin
Dec 4 at 17:32




@ChuckWalbourn Thanks for the article, I'll for sure give it a read.
– larssonmartin
Dec 4 at 17:32




2




2




TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
– IMil
Dec 4 at 23:00




TL:DR when the tutorial author writes "In its default state...", he means "If you don't set scene/camera transform", what hopefully gets explained in further parts.
– IMil
Dec 4 at 23:00




13




13




Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
– A C
Dec 5 at 2:20






Wow, the statement in OP's tutorial seems about as accurate as telling someone "here's a camera on a tripod - you'll only get a decent photo if you set your object at a location in front of it, at the right distance to be in focus" -- it's not wrong, but leaving out the idea of moving the camera or adjusting its focus, even for the sake of simplicity, kind of leads to exactly OP's confusion...
– A C
Dec 5 at 2:20














@lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
– larssonmartin
Dec 5 at 6:35




@lMil Alright, which I do in my vertex shader based off of the vertices positions, got it, thanks!
– larssonmartin
Dec 5 at 6:35












up vote
7
down vote













The range [-1 ; 1] x [-1 ; 1] x [0 ; 1] mentioned in the tutorial refers to the canonical view volume. It is the final coordinate space vertex data gets mapped to before everything is rasterized to your screen. To understand what exactly this means, it helps to take a look at what a rendering pipeline typically looks like.



Coordinate spaces



A coordinate space refers to the coordinate system you use to define the positions of vertices within it. As a real-life example, imagine you have a desk with a keyboard on top of it and you want to express the position of the keyboard. You could define the front left corner of the desk to be position (0, 0, 0) — this is called the origin —, the X-axis to be along the length of the desk (left to right), the Y-axis to be along the depth of the desk (near to far), and the Z-axis to be vertically upward from the desk. If your keyboard is located 50 centimeters to the right of this corner, and 10 centimeters away from the nearest edge, it is located at position (50, 10, 0).



Alternatively, you could define the corner of your room to be position (0, 0, 0). Lets say your desk is located 200 cm from the left wall, 300 cm from the front wall, and the desk is 70cm in height. In this case your desk's top is located at position (200, 300, 70), and your keyboard is located at (250, 310, 70).



These are two examples of different coordinate spaces, and how they affect the position coordinates of the objects within them. Similarly, vertex data in a 3D rendering pipeline is transformed across various coordinate spaces before it ends up on your screen.



Coordinate spaces in a 3D rendering pipeline



Individual objects are modelled in 3D software such as Autodesk Maya, Blender ... . They are often modeled centered around the origin. This coordinate space is called model space. If you were to render several objects in model space together, they would all be piled up centered around the origin.



Instead a new coordinate space called world space is defined. Think of this as your game world, with the origin being the center of the world. When transforming model space to world space coordinates, translations, rotations, scaling and other operations are performed. For example if you want to render a keyboard at position (250, 310, 70) of your world, you would offset all its vertices by this vector. Mathematically speaking, this is done using a transformation matrix. You can apply a different transformation to each individual object to place objects in your game world.



You now have a big pile of vertices where every objects is placed in the correct position. You now need to define what part of the world you want to look at. This is done by moving all vertex data to camera space. An often-employed convention is have the camera positioned in the origin of camera space, to have it look towards the positive Z-axis (the eye-vector) and to have the positive Y-axis point upward (the up-vector). When converting from world space to model space, we thus want to move and rotate all vertex data so that our objects of focus are near the origin and have positive Z-coordinates.



When you look at objects in real life, you will notice a phenomenon called foreshortening. This means objects near you appear bigger (i.e. take up more of your view), while objects far away from you appear smaller (i.e. take up less of your view). We simulate this by applying a perspective transformation, which moves our camera space vertex coordinates to projected space.



Finally, note that we have 3D vertex data, that needs to be rendered on a 2D screen (e.g. 1920 by 1080 pixels). The vertex data in camera space is therefore transformed to screen space. Your graphics API takes care of rendering the screen space vertex data to your screen. The process of converting vertex data to pixels on your screen is called rasterization. But what vertex coordinates end up where on your screen? This is where the canonical view volume comes into play.



DirectX specifies that the X-coordinate of the vertex is mapped to the horizontal position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1920] (in case of a 1920 x 1080 screen). The Y-coordinate of the vertex is mapped to the vertical position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1080] (in case of a 1920 x 1080 screen). The Z-coordinate is used to determine what vertices need to be rendered in front or behind each other. Specifically, vertices near 0 are near the camera and should be rendered in front. Vertices near 1 are far away from the canera and rendered behind. Vertices with a Z-coordinate smaller than 0 are behind the camera and thus clipped — i.e. not rendered. Vertices with a Z-coordinate larger than 1 are too far away and are clipped as well.



Your perspective transform thus needs to move all vertices you want visible on your screen inside this canonical view volume. In the tutorial you followed, all of these transformations are omitted to keep the tutorial simple. You are thus directly rendering to the canonical view volume. This is why the author says anything outside of range [-1 ; 1] x [-1 ; 1] x [0 ; 1] is not visible.



References



For an article with images to illustrate these various coordinate spaces, see World, View and Projection Transformation Matrices by CodingLabs.






share|improve this answer

















  • 1




    I've read a lot of math and theory behind the rendering pipelines, during school courses and at home out of curiosity, however, your simple explanation gave more than long articles have done, it's easier to dive deep into articles when you understand the basic concepts, I had never before heard of the Canonical view volume before, that explains my confusion around the small range for the vertices. Thanks for taking your time with this!
    – larssonmartin
    Dec 5 at 6:33















up vote
7
down vote













The range [-1 ; 1] x [-1 ; 1] x [0 ; 1] mentioned in the tutorial refers to the canonical view volume. It is the final coordinate space vertex data gets mapped to before everything is rasterized to your screen. To understand what exactly this means, it helps to take a look at what a rendering pipeline typically looks like.



Coordinate spaces



A coordinate space refers to the coordinate system you use to define the positions of vertices within it. As a real-life example, imagine you have a desk with a keyboard on top of it and you want to express the position of the keyboard. You could define the front left corner of the desk to be position (0, 0, 0) — this is called the origin —, the X-axis to be along the length of the desk (left to right), the Y-axis to be along the depth of the desk (near to far), and the Z-axis to be vertically upward from the desk. If your keyboard is located 50 centimeters to the right of this corner, and 10 centimeters away from the nearest edge, it is located at position (50, 10, 0).



Alternatively, you could define the corner of your room to be position (0, 0, 0). Lets say your desk is located 200 cm from the left wall, 300 cm from the front wall, and the desk is 70cm in height. In this case your desk's top is located at position (200, 300, 70), and your keyboard is located at (250, 310, 70).



These are two examples of different coordinate spaces, and how they affect the position coordinates of the objects within them. Similarly, vertex data in a 3D rendering pipeline is transformed across various coordinate spaces before it ends up on your screen.



Coordinate spaces in a 3D rendering pipeline



Individual objects are modelled in 3D software such as Autodesk Maya, Blender ... . They are often modeled centered around the origin. This coordinate space is called model space. If you were to render several objects in model space together, they would all be piled up centered around the origin.



Instead a new coordinate space called world space is defined. Think of this as your game world, with the origin being the center of the world. When transforming model space to world space coordinates, translations, rotations, scaling and other operations are performed. For example if you want to render a keyboard at position (250, 310, 70) of your world, you would offset all its vertices by this vector. Mathematically speaking, this is done using a transformation matrix. You can apply a different transformation to each individual object to place objects in your game world.



You now have a big pile of vertices where every objects is placed in the correct position. You now need to define what part of the world you want to look at. This is done by moving all vertex data to camera space. An often-employed convention is have the camera positioned in the origin of camera space, to have it look towards the positive Z-axis (the eye-vector) and to have the positive Y-axis point upward (the up-vector). When converting from world space to model space, we thus want to move and rotate all vertex data so that our objects of focus are near the origin and have positive Z-coordinates.



When you look at objects in real life, you will notice a phenomenon called foreshortening. This means objects near you appear bigger (i.e. take up more of your view), while objects far away from you appear smaller (i.e. take up less of your view). We simulate this by applying a perspective transformation, which moves our camera space vertex coordinates to projected space.



Finally, note that we have 3D vertex data, that needs to be rendered on a 2D screen (e.g. 1920 by 1080 pixels). The vertex data in camera space is therefore transformed to screen space. Your graphics API takes care of rendering the screen space vertex data to your screen. The process of converting vertex data to pixels on your screen is called rasterization. But what vertex coordinates end up where on your screen? This is where the canonical view volume comes into play.



DirectX specifies that the X-coordinate of the vertex is mapped to the horizontal position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1920] (in case of a 1920 x 1080 screen). The Y-coordinate of the vertex is mapped to the vertical position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1080] (in case of a 1920 x 1080 screen). The Z-coordinate is used to determine what vertices need to be rendered in front or behind each other. Specifically, vertices near 0 are near the camera and should be rendered in front. Vertices near 1 are far away from the canera and rendered behind. Vertices with a Z-coordinate smaller than 0 are behind the camera and thus clipped — i.e. not rendered. Vertices with a Z-coordinate larger than 1 are too far away and are clipped as well.



Your perspective transform thus needs to move all vertices you want visible on your screen inside this canonical view volume. In the tutorial you followed, all of these transformations are omitted to keep the tutorial simple. You are thus directly rendering to the canonical view volume. This is why the author says anything outside of range [-1 ; 1] x [-1 ; 1] x [0 ; 1] is not visible.



References



For an article with images to illustrate these various coordinate spaces, see World, View and Projection Transformation Matrices by CodingLabs.






share|improve this answer

















  • 1




    I've read a lot of math and theory behind the rendering pipelines, during school courses and at home out of curiosity, however, your simple explanation gave more than long articles have done, it's easier to dive deep into articles when you understand the basic concepts, I had never before heard of the Canonical view volume before, that explains my confusion around the small range for the vertices. Thanks for taking your time with this!
    – larssonmartin
    Dec 5 at 6:33













up vote
7
down vote










up vote
7
down vote









The range [-1 ; 1] x [-1 ; 1] x [0 ; 1] mentioned in the tutorial refers to the canonical view volume. It is the final coordinate space vertex data gets mapped to before everything is rasterized to your screen. To understand what exactly this means, it helps to take a look at what a rendering pipeline typically looks like.



Coordinate spaces



A coordinate space refers to the coordinate system you use to define the positions of vertices within it. As a real-life example, imagine you have a desk with a keyboard on top of it and you want to express the position of the keyboard. You could define the front left corner of the desk to be position (0, 0, 0) — this is called the origin —, the X-axis to be along the length of the desk (left to right), the Y-axis to be along the depth of the desk (near to far), and the Z-axis to be vertically upward from the desk. If your keyboard is located 50 centimeters to the right of this corner, and 10 centimeters away from the nearest edge, it is located at position (50, 10, 0).



Alternatively, you could define the corner of your room to be position (0, 0, 0). Lets say your desk is located 200 cm from the left wall, 300 cm from the front wall, and the desk is 70cm in height. In this case your desk's top is located at position (200, 300, 70), and your keyboard is located at (250, 310, 70).



These are two examples of different coordinate spaces, and how they affect the position coordinates of the objects within them. Similarly, vertex data in a 3D rendering pipeline is transformed across various coordinate spaces before it ends up on your screen.



Coordinate spaces in a 3D rendering pipeline



Individual objects are modelled in 3D software such as Autodesk Maya, Blender ... . They are often modeled centered around the origin. This coordinate space is called model space. If you were to render several objects in model space together, they would all be piled up centered around the origin.



Instead a new coordinate space called world space is defined. Think of this as your game world, with the origin being the center of the world. When transforming model space to world space coordinates, translations, rotations, scaling and other operations are performed. For example if you want to render a keyboard at position (250, 310, 70) of your world, you would offset all its vertices by this vector. Mathematically speaking, this is done using a transformation matrix. You can apply a different transformation to each individual object to place objects in your game world.



You now have a big pile of vertices where every objects is placed in the correct position. You now need to define what part of the world you want to look at. This is done by moving all vertex data to camera space. An often-employed convention is have the camera positioned in the origin of camera space, to have it look towards the positive Z-axis (the eye-vector) and to have the positive Y-axis point upward (the up-vector). When converting from world space to model space, we thus want to move and rotate all vertex data so that our objects of focus are near the origin and have positive Z-coordinates.



When you look at objects in real life, you will notice a phenomenon called foreshortening. This means objects near you appear bigger (i.e. take up more of your view), while objects far away from you appear smaller (i.e. take up less of your view). We simulate this by applying a perspective transformation, which moves our camera space vertex coordinates to projected space.



Finally, note that we have 3D vertex data, that needs to be rendered on a 2D screen (e.g. 1920 by 1080 pixels). The vertex data in camera space is therefore transformed to screen space. Your graphics API takes care of rendering the screen space vertex data to your screen. The process of converting vertex data to pixels on your screen is called rasterization. But what vertex coordinates end up where on your screen? This is where the canonical view volume comes into play.



DirectX specifies that the X-coordinate of the vertex is mapped to the horizontal position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1920] (in case of a 1920 x 1080 screen). The Y-coordinate of the vertex is mapped to the vertical position on the screen. Specifically: range [-1 ; 1] is mapped to [0 ; 1080] (in case of a 1920 x 1080 screen). The Z-coordinate is used to determine what vertices need to be rendered in front or behind each other. Specifically, vertices near 0 are near the camera and should be rendered in front. Vertices near 1 are far away from the canera and rendered behind. Vertices with a Z-coordinate smaller than 0 are behind the camera and thus clipped — i.e. not rendered. Vertices with a Z-coordinate larger than 1 are too far away and are clipped as well.



Your perspective transform thus needs to move all vertices you want visible on your screen inside this canonical view volume. In the tutorial you followed, all of these transformations are omitted to keep the tutorial simple. You are thus directly rendering to the canonical view volume. This is why the author says anything outside of range [-1 ; 1] x [-1 ; 1] x [0 ; 1] is not visible.



References



For an article with images to illustrate these various coordinate spaces, see World, View and Projection Transformation Matrices by CodingLabs.






share|improve this answer












The range [-1 ; 1] x [-1 ; 1] x [0 ; 1] mentioned in the tutorial refers to the canonical view volume. It is the final coordinate space that vertex data is mapped to before everything is rasterized to your screen. To understand exactly what this means, it helps to take a look at what a rendering pipeline typically looks like.



Coordinate spaces



A coordinate space refers to the coordinate system you use to define the positions of vertices within it. As a real-life example, imagine you have a desk with a keyboard on top of it and you want to express the position of the keyboard. You could define the front left corner of the desk to be position (0, 0, 0), called the origin, the X-axis to run along the length of the desk (left to right), the Y-axis to run along the depth of the desk (near to far), and the Z-axis to point vertically upward from the desk. If your keyboard sits 50 centimeters to the right of that corner and 10 centimeters away from the nearest edge, it is located at position (50, 10, 0).



Alternatively, you could define the corner of your room to be position (0, 0, 0). Let's say your desk is located 200 cm from the left wall and 300 cm from the front wall, and that the desk is 70 cm tall. In that case the top of the desk's front left corner is located at position (200, 300, 70), and your keyboard is located at (250, 310, 70).
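To make that change of coordinate space concrete, here is a tiny C++ sketch; the names keyboardOnDesk and deskOriginInRoom are invented for illustration and have nothing to do with any graphics API:

    // Converting the keyboard's position from desk space to room space simply
    // means adding the position of the desk's corner in room space.
    struct Vec3 { float x, y, z; };

    Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    int main()
    {
        Vec3 keyboardOnDesk   = { 50.0f, 10.0f, 0.0f };    // desk space
        Vec3 deskOriginInRoom = { 200.0f, 300.0f, 70.0f }; // desk corner in room space
        Vec3 keyboardInRoom   = Add(keyboardOnDesk, deskOriginInRoom); // (250, 310, 70)
    }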



These are two examples of different coordinate spaces, and how they affect the position coordinates of the objects within them. Similarly, vertex data in a 3D rendering pipeline is transformed across various coordinate spaces before it ends up on your screen.



Coordinate spaces in a 3D rendering pipeline



Individual objects are modeled in 3D software such as Autodesk Maya or Blender, and they are usually modeled centered around the origin. This coordinate space is called model space. If you were to render several objects in model space together, they would all end up piled on top of each other around the origin.



Instead, a new coordinate space called world space is defined. Think of this as your game world, with the origin being the center of the world. When transforming model space coordinates to world space coordinates, translations, rotations, scalings and other operations are applied. For example, if you want to render a keyboard at position (250, 310, 70) in your world, you offset all of its vertices by this vector. Mathematically, this is done with a transformation matrix. You can apply a different transformation to each individual object to place it in your game world.
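For D3D11, this world transformation is typically built with DirectXMath. The sketch below is only an illustration under assumed values (a 45 degree rotation around Y and a translation to (250, 310, 70)); it is not taken from the tutorial you followed:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Builds a world matrix for one object. DirectXMath uses row vectors, so the
    // matrices are combined as scale * rotation * translation.
    XMMATRIX BuildWorldMatrix()
    {
        XMMATRIX scale       = XMMatrixScaling(1.0f, 1.0f, 1.0f);
        XMMATRIX rotation    = XMMatrixRotationY(XMConvertToRadians(45.0f));
        XMMATRIX translation = XMMatrixTranslation(250.0f, 310.0f, 70.0f);
        return scale * rotation * translation;
    }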



You now have a big pile of vertices where every object is placed in its correct position. Next, you need to define what part of the world you want to look at. This is done by moving all vertex data into camera space. An often-employed convention is to have the camera positioned at the origin of camera space, looking towards the positive Z-axis (the eye-vector), with the positive Y-axis pointing upward (the up-vector). When converting from world space to camera space, we thus move and rotate all vertex data so that our objects of focus are near the origin and have positive Z-coordinates.
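With DirectXMath, the corresponding view matrix is usually built with XMMatrixLookAtLH. A rough sketch, with a made-up camera position:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Builds a view matrix that moves and rotates the whole world so that the
    // camera ends up at the origin, looking down the positive Z-axis.
    XMMATRIX BuildViewMatrix()
    {
        XMVECTOR eye   = XMVectorSet(0.0f, 2.0f, -10.0f, 1.0f); // assumed camera position
        XMVECTOR focus = XMVectorSet(0.0f, 0.0f,   0.0f, 1.0f); // point the camera looks at
        XMVECTOR up    = XMVectorSet(0.0f, 1.0f,   0.0f, 0.0f); // up-vector
        return XMMatrixLookAtLH(eye, focus, up);
    }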



When you look at objects in real life, you will notice a phenomenon called foreshortening. This means objects near you appear bigger (i.e. take up more of your view), while objects far away from you appear smaller (i.e. take up less of your view). We simulate this by applying a perspective transformation, which moves our camera space vertex coordinates to projected space.
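In DirectXMath this perspective transformation is usually expressed with XMMatrixPerspectiveFovLH; the field of view and near/far planes below are only example values:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Builds a perspective projection matrix that maps camera space into the
    // canonical view volume described below.
    XMMATRIX BuildProjectionMatrix()
    {
        return XMMatrixPerspectiveFovLH(
            XMConvertToRadians(60.0f), // vertical field of view
            1920.0f / 1080.0f,         // aspect ratio (width / height)
            0.1f,                      // near plane, ends up at Z = 0
            1000.0f);                  // far plane, ends up at Z = 1
    }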



Finally, note that we have 3D vertex data that needs to be rendered on a 2D screen (e.g. 1920 by 1080 pixels). The vertex data in projected space is therefore mapped to screen space. Your graphics API takes care of rendering the screen space vertex data to your screen. The process of converting vertex data to pixels on your screen is called rasterization. But which vertex coordinates end up where on your screen? This is where the canonical view volume comes into play.



DirectX specifies that the X-coordinate of a vertex is mapped to the horizontal position on the screen: the range [-1 ; 1] is mapped to [0 ; 1920] (in case of a 1920 x 1080 screen). The Y-coordinate is mapped to the vertical position on the screen: the range [-1 ; 1] is mapped to [0 ; 1080]. The Z-coordinate is used to determine which vertices are rendered in front of or behind each other. Vertices with a Z near 0 are close to the camera and are rendered in front; vertices with a Z near 1 are far away from the camera and are rendered behind. Vertices with a Z-coordinate smaller than 0 are considered to be behind the camera and are clipped, i.e. not rendered. Vertices with a Z-coordinate larger than 1 are too far away and are clipped as well.
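As a rough sketch of the mapping D3D11 performs for you (after the perspective divide), assuming a 1920 x 1080 viewport:

    // Maps canonical-view-volume coordinates to pixel positions. Note that Y is
    // flipped: +1 corresponds to the top of the screen (pixel row 0).
    float NdcXToPixel(float x) { return (x + 1.0f) * 0.5f * 1920.0f; }
    float NdcYToPixel(float y) { return (1.0f - y) * 0.5f * 1080.0f; }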



Your perspective transform thus needs to move all vertices you want visible on your screen inside this canonical view volume. In the tutorial you followed, all of these transformations are omitted to keep the tutorial simple. You are thus directly rendering to the canonical view volume. This is why the author says anything outside of range [-1 ; 1] x [-1 ; 1] x [0 ; 1] is not visible.
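If you do add the full set of transformations, the world, view and projection matrices are typically combined into a single matrix and uploaded to a constant buffer for the vertex shader. A minimal sketch; the function name is made up:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Combines the per-object world matrix with the camera's view matrix and the
    // projection matrix. The result takes model-space vertices all the way into
    // the canonical view volume. HLSL constant buffers expect column-major data
    // by default, so the combined matrix is usually transposed before upload.
    XMMATRIX BuildWorldViewProjection(FXMMATRIX world, CXMMATRIX view, CXMMATRIX projection)
    {
        return XMMatrixTranspose(world * view * projection);
    }

In the vertex shader, each model-space position is then multiplied by this matrix, which is exactly the step the simplified tutorial skips.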



References



For an article with images to illustrate these various coordinate spaces, see World, View and Projection Transformation Matrices by CodingLabs.







answered Dec 4 at 17:31
– Jelle van Campen








  • I've read a lot of math and theory behind rendering pipelines, during school courses and at home out of curiosity, but your simple explanation gave me more than long articles have; it's easier to dive deep into articles when you understand the basic concepts. I had never heard of the canonical view volume before, which explains my confusion about the small range for the vertices. Thanks for taking your time with this!
    – larssonmartin
    Dec 5 at 6:33































