DirectX 12 Ultimate brings Xbox Series X features to PC gaming


  • DirectX Ray Tracing became available in early 2018. DX12 Ultimate offers new features that were not available in DXR1.0.


    Nvidia

  • With VRS (Variable Rate Shading), games can concentrate detail on the parts of a scene that need it and spend less on background or fast-moving areas where the loss goes unnoticed.


    Nvidia

  • Mesh shaders (and the amplification shader) reinvent the DirectX geometry pipeline, increase parallelism, and potentially allow fewer round trips from the GPU to the CPU.


    Nvidia

  • Sampler feedback enables higher quality without longer processing times, because developers can load textures only when they are needed.


    Nvidia

Microsoft today announced a new version of its DirectX gaming and multimedia API platform. The new version, DirectX 12 Ultimate, largely unifies Windows PCs with the upcoming Xbox Series X platform and gives Windows players with supported graphics cards access to the platform's new rendering features.

Many of the new features have more to do with the software side of development than with hardware. The new DirectX 12 Ultimate API calls not only expose new hardware features but also provide deeper, lower-level, and potentially more efficient access to existing hardware features and resources.

For now, the new features are being shown off primarily on Nvidia cards, with "full GeForce RTX support" – the presentation slides you see here come from Nvidia itself, not from Microsoft. AMD, meanwhile, has announced that its upcoming RDNA 2 GPUs will "fully support" the DirectX 12 Ultimate API – but not earlier generations of AMD cards. (AMD takes the opportunity to remind players that the same RDNA 2 architecture powers both Microsoft's Xbox Series X and Sony's PlayStation 5.)

Some of the new calls are reminiscent of work that AMD has done independently in its Radeon drivers. For example, variable rate shading is similar to AMD's Radeon Boost system, which dynamically lowers image resolution during fast panning. Although the two features are certainly not identical, they are similar enough in concept to show that AMD has at least been thinking in the same direction.

This video produced by Nvidia provides visual examples of the new features of DirectX 12 Ultimate.

DirectX Raytracing

Although this image was not created using DirectX real-time ray tracing, it is a good example of what the technology can do. The key concept is following a beam of light as it is reflected from or refracted by multiple objects.

DirectX Ray Tracing, also known as DXR, is not entirely new – DXR1.0 was introduced two years ago – but DirectX 12 Ultimate introduces several new functions under a DXR1.1 version scheme. None of DXR1.1's functions require new hardware; existing ray-tracing-capable GPUs only need driver support to enable them.

Currently, only Nvidia offers consumer PC graphics cards with hardware ray tracing. The Xbox Series X, however, offers ray tracing on its custom Radeon GPU hardware – and at CES 2020 in January, AMD CEO Lisa Su said she expects discrete Radeon graphics cards to support ray tracing "as we go through 2020."

Inline ray tracing

Inline ray tracing is an alternative API that gives developers access to the ray-tracing pipeline at a lower level than DXR1.0's dynamic-shader-based ray tracing. Rather than replacing dynamic-shader ray tracing, inline ray tracing exists as an alternative model that lets developers make inexpensive ray-tracing calls that don't carry the full weight of a dynamic shader invocation. Examples include constrained shadow calculations, queries from shaders that don't support dynamic-shader rays, or simple recursive rays.

There is no easy answer as to when inline ray tracing is more appropriate than dynamic-shader ray tracing; developers will need to experiment to find the best balance between the two tool sets.
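As a rough sketch of what the inline model looks like in practice, a shadow query can be issued from any shader stage through a RayQuery object. The acceleration-structure binding and flag choices below are illustrative assumptions, not anything specified in the article:

```hlsl
// Minimal DXR 1.1 inline ray tracing sketch: a shadow test issued from an
// ordinary compute or pixel shader, with no dynamic shader invocations.
RaytracingAccelerationStructure Scene : register(t0);

float TraceShadowRay(float3 origin, float3 toLight, float maxDistance)
{
    RayDesc ray;
    ray.Origin    = origin;
    ray.Direction = toLight;
    ray.TMin      = 0.001f;
    ray.TMax      = maxDistance;

    // For a shadow query, any hit occludes the light, so we can stop at the
    // first hit and treat all geometry as opaque.
    RayQuery<RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH |
             RAY_FLAG_FORCE_OPAQUE> query;
    query.TraceRayInline(Scene, RAY_FLAG_NONE, 0xFF, ray);
    query.Proceed();  // fixed-function traversal; no hit/miss shaders run

    return (query.CommittedStatus() == COMMITTED_TRIANGLE_HIT) ? 0.0f : 1.0f;
}
```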

DispatchRays() via ExecuteIndirect()

Shaders that run on the GPU can now generate a list of DispatchRays () calls, including their individual parameters. This can significantly reduce latency for scenarios that prepare and immediately trigger ray tracing work on the GPU, as a round trip to the CPU and back is avoided.
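A hedged sketch of the GPU-side half of this: a compute shader writes the dispatch dimensions into the indirect-argument buffer that a later ExecuteIndirect() call (through a command signature containing a dispatch-rays argument) will consume. The buffer names are hypothetical, and the shader-table address fields of the D3D12_DISPATCH_RAYS_DESC record are assumed to have been filled in once by the CPU:

```hlsl
// A compute pass writes ray-dispatch dimensions directly into the buffer that
// ExecuteIndirect() reads as a D3D12_DISPATCH_RAYS_DESC, avoiding any CPU
// round trip between deciding the work and launching it.
ByteAddressBuffer   VisibleWorkCounter : register(t0); // e.g. filled by a culling pass
RWByteAddressBuffer IndirectArgs       : register(u0);

// Width/Height/Depth are the three UINTs that follow the four 64-bit
// shader-table address ranges in D3D12_DISPATCH_RAYS_DESC (byte offset 88).
static const uint kWidthOffset = 88;

[numthreads(1, 1, 1)]
void PrepareRayDispatch()
{
    uint rayCount = VisibleWorkCounter.Load(0);
    IndirectArgs.Store3(kWidthOffset, uint3(rayCount, 1, 1));
}
```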

Growing state objects via AddToStateObject()

Under DXR1.0, developers who want to add a new shader to an existing ray-tracing pipeline must instantiate an entirely new pipeline with the additional shader, copying the existing shaders together with the new one into it. This requires the system to parse and validate both the existing shaders and the new one when the new pipeline is instantiated.

AddToStateObject() eliminates that waste by doing exactly what it sounds like: it gives developers the ability to extend an existing ray-tracing pipeline, parsing and validating only the new shader. The efficiency gain should be obvious: a pipeline with 1,000 shaders that needs one more now validates a single shader instead of 1,001.

GeometryIndex() in ray tracing shaders

GeometryIndex() lets shaders distinguish geometries within bottom-level acceleration structures without having to change the data in shader records for each geometry. In other words, all geometries in a bottom-level acceleration structure can now share the same shader record; when necessary, shaders can use GeometryIndex() to index into the app's own data structures.
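A minimal sketch of that pattern, assuming a closest-hit shader shared by every geometry in a bottom-level acceleration structure (the material buffer and payload names are hypothetical):

```hlsl
// All geometries in the BLAS share one shader record; GeometryIndex() selects
// per-geometry data from an app-owned buffer instead of per-geometry records.
struct MaterialConstants { float3 albedo; float roughness; };
StructuredBuffer<MaterialConstants> Materials : register(t1);

struct RayPayload { float3 color; };

[shader("closesthit")]
void SharedClosestHit(inout RayPayload payload,
                      BuiltInTriangleIntersectionAttributes attribs)
{
    // Index the app's own material table with the DXR 1.1 intrinsic.
    MaterialConstants mat = Materials[GeometryIndex()];
    payload.color = mat.albedo * (1.0f - mat.roughness);
}
```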

Skipping primitives with ray flags

Developers can optimize ray-tracing pipelines by skipping unnecessary primitives. For example, DXR1.0 offers RAY_FLAG_SKIP_CLOSEST_HIT_SHADER, RAY_FLAG_CULL_NON_OPAQUE, and RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH.

DXR1.1 adds RAY_FLAG_SKIP_TRIANGLES and RAY_FLAG_SKIP_PROCEDURAL_PRIMITIVES to that list.
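For example (a minimal sketch, with a hypothetical payload and bindings), a reflection ray that should only ever test procedural-free geometry can set the new flag at the call site:

```hlsl
// DXR 1.1 ray flags at work: this trace skips procedural primitives entirely,
// so traversal only ever tests triangle geometry.
RaytracingAccelerationStructure Scene : register(t0);
struct RayPayload { float3 color; };

void TraceTriangleOnlyReflection(RayDesc ray, inout RayPayload payload)
{
    TraceRay(Scene,
             RAY_FLAG_SKIP_PROCEDURAL_PRIMITIVES | RAY_FLAG_CULL_NON_OPAQUE,
             0xFF,        // instance inclusion mask
             0, 1, 0,     // hit-group offset, geometry stride, miss index
             ray, payload);
}
```

The same flags can also be supplied as RayQuery template parameters in inline ray tracing, which lets the compiler specialize the traversal up front.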

Variable rate shading

One side of this image rendered 14 percent faster thanks to VRS avoiding unnecessary detail rendering. Can you tell which side?

Nvidia

Microsoft describes VRS (Variable Rate Shading) as a "scalpel in a world of sledgehammers." With VRS, developers can select the shading rate for portions of frames independently, concentrating most of the detail – and the rendering workload – on the parts that actually need it, while leaving backgrounds and other visually unimportant elements to render faster.

There are two hardware tiers of VRS support. Tier 1 hardware can implement per-draw shading rates, letting developers draw large, distant, or obscured assets with lower shading detail and then draw detailed assets with higher shading detail. Tier 2 hardware can also vary the shading rate within a single draw, by way of a screen-space image or a per-primitive specification, which enables the scenarios below.

A developer who knows that a first-person-shooter player pays more attention to the area around the crosshairs than anywhere else can render maximum shading detail there, falling off gradually toward the lowest detail in the player's peripheral vision.

A real-time strategy or role-playing game developer could instead choose to focus extra shading detail on edge boundaries, where aliasing artifacts are most likely to be visually jarring.

Per-primitive VRS goes a step further by allowing developers to specify the shading rate per triangle. Games with motion-blur effects are an obvious use case – why spend resources rendering detail on distant objects when you know you're going to blur it away anyway?

Screenspace and per-primitive variable rate shading can be mixed and matched within the same scene using VRS combiners.
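As a sketch of the per-primitive path, assuming Tier 2 hardware and a per-vertex velocity input: the vertex shader exports SV_ShadingRate, and the rasterizer shades fast-moving triangles more coarsely. The rate constants mirror the D3D12_SHADING_RATE encodings; everything else here is an illustrative assumption:

```hlsl
// Per-primitive VRS sketch (Shader Model 6.4+): fast-moving triangles get
// 2x2 coarse shading, since motion blur will hide the lost detail anyway.
cbuffer Camera : register(b0) { float4x4 gViewProj; };

static const uint SHADING_RATE_1X1 = 0x0; // D3D12_SHADING_RATE_1X1
static const uint SHADING_RATE_2X2 = 0x5; // D3D12_SHADING_RATE_2X2

struct VSOut
{
    float4 position : SV_Position;
    float2 uv       : TEXCOORD0;
    uint   rate     : SV_ShadingRate; // read per primitive by the rasterizer
};

VSOut main(float3 position : POSITION,
           float2 uv       : TEXCOORD0,
           float3 velocity : TEXCOORD1)
{
    VSOut o;
    o.position = mul(float4(position, 1.0f), gViewProj);
    o.uv       = uv;
    o.rate     = (length(velocity) > 0.5f) ? SHADING_RATE_2X2
                                           : SHADING_RATE_1X1;
    return o;
}
```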

Mesh and amplification shaders

  • Mesh shaders allow greater parallelization of the shading pipeline.


    Nvidia

  • An amplification shader is essentially a collection of mesh shaders with shared access to the data of the parent object.


    Nvidia

Mesh shaders parallelize mesh processing using a compute-shader-like programming model. Subsets of the overall mesh are broken into "meshlets," each typically consisting of 200 or fewer vertices; the individual meshlets can then be processed simultaneously rather than sequentially.

A mesh shader dispatches a series of thread groups, each of which processes a different meshlet. Each thread group can access group-shared memory and can output vertices and primitives that do not have to correlate with a specific thread in the group.

This significantly reduces rendering latency, especially for geometries with linear bottlenecks, and it gives developers much finer-grained control over separate parts of the overall mesh rather than forcing them to treat the geometry as a single monolithic whole.

Amplification shaders are essentially collections of mesh shaders that are managed and instantiated as one. An amplification shader dispatches thread groups of mesh shaders, each of which has access to the amplification shader's data.
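A condensed, hedged sketch of that hand-off (the meshlet layout, culling test, and buffer names are all illustrative assumptions, and meshlets are assumed to stay within 64 vertices and 64 triangles): the amplification shader tests one meshlet per thread, compacts the survivors into its payload, and launches one mesh-shader group per visible meshlet; each mesh group then reads the shared payload to find its work.

```hlsl
// Amplification -> mesh pipeline sketch (compile as as_6_5 / ms_6_5).
struct Meshlet { uint VertexOffset; uint VertexCount; uint TriOffset; uint TriCount; };

cbuffer Camera : register(b0)
{
    float4x4 gViewProj;
    float4   gFrustumPlanes[6];
    uint     gMeshletCount;
};
StructuredBuffer<Meshlet> Meshlets  : register(t0);
StructuredBuffer<float3>  Positions : register(t1); // indexed via VertexOffset
StructuredBuffer<uint3>   Triangles : register(t2); // indexed via TriOffset
StructuredBuffer<float4>  Bounds    : register(t3); // xyz = center, w = radius

struct MeshletPayload { uint MeshletIndex[32]; };
groupshared MeshletPayload s_payload;

bool IsVisible(uint meshletIndex)
{
    float4 sphere = Bounds[meshletIndex];
    for (uint i = 0; i < 6; ++i)
        if (dot(gFrustumPlanes[i].xyz, sphere.xyz) + gFrustumPlanes[i].w < -sphere.w)
            return false; // entirely outside this frustum plane
    return true;
}

// --- Amplification shader: cull, compact, launch (assumes a 32-lane wave) ---
[numthreads(32, 1, 1)]
void AmplificationMain(uint dtid : SV_DispatchThreadID)
{
    bool visible = (dtid < gMeshletCount) && IsVisible(dtid);
    if (visible)
        s_payload.MeshletIndex[WavePrefixCountBits(visible)] = dtid;
    DispatchMesh(WaveActiveCountBits(visible), 1, 1, s_payload);
}

struct VertexOut { float4 position : SV_Position; };

// --- Mesh shader: each group expands one surviving meshlet ---
[numthreads(64, 1, 1)]
[outputtopology("triangle")]
void MeshMain(uint gtid : SV_GroupThreadID, uint gid : SV_GroupID,
              in payload MeshletPayload meshPayload,
              out vertices VertexOut verts[64],
              out indices uint3 tris[64])
{
    Meshlet m = Meshlets[meshPayload.MeshletIndex[gid]];
    SetMeshOutputCounts(m.VertexCount, m.TriCount); // assumes <= 64 of each
    if (gtid < m.VertexCount)
        verts[gtid].position =
            mul(float4(Positions[m.VertexOffset + gtid], 1.0f), gViewProj);
    if (gtid < m.TriCount)
        tris[gtid] = Triangles[m.TriOffset + gtid];
}
```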

Sampler Feedback

Separating shading from rasterization allows more efficient, less frequent runs through lighting-calculation routines.

Sampler feedback essentially makes it easier for developers to figure out, at runtime, which level of detail a texture can be rendered at. The feature allows a shader to query which part of a texture would be needed to satisfy a sampling request without actually performing the sample. This lets games render larger, more detailed textures while using less video memory.
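A minimal sketch of the shader side, assuming Shader Model 6.5 and a MIN_MIP feedback map bound by the application (the resource names are hypothetical):

```hlsl
// Sampler feedback sketch: sample as usual, but also record which mip level
// the sample actually needed, so the streaming system can load only the
// regions and detail levels of the texture that are really used.
Texture2D<float4> gDiffuse : register(t0);
SamplerState      gSampler : register(s0);
FeedbackTexture2D<SAMPLER_FEEDBACK_MIN_MIP> gFeedback : register(u0);

float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Log the sampling request into the feedback map; this is far cheaper
    // than rendering extra passes just to discover texture usage.
    gFeedback.WriteSamplerFeedback(gDiffuse, gSampler, uv);
    return gDiffuse.Sample(gSampler, uv);
}
```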
Texture-space shading (TSS) extends sampler feedback by allowing the game to apply shading effects to a texture independently of the object the texture is wrapped around. For example, a cube with only three visible faces does not need lighting effects applied to its three rear faces.
With TSS, lighting effects can be applied to only the visible parts of the texture. In our cube example, that could mean lighting only the portion of the texture that covers the three visible faces, in texture space. This can happen before and independently of rasterization, which reduces aliasing and minimizes the computational cost of the lighting effects.
Listing image by Nvidia