Blurs the image according to a depth map channel. This allows you to simulate depth-of-field (DOF) blurring.

In order to defocus the image, ZDefocus splits the image up into layers, each of which is assigned the same depth value everywhere and processed with a single blur size. After ZDefocus has processed all the layers, it blends them together from the back to the front of the image, with each new layer going over the top of the previous ones. This allows it to preserve the ordering of objects in the image.

Inputs

image - The image sequence to receive the blur effect. This should also contain the depth map channel.

mask - An optional image to use as a mask. By default, the blur is limited to the non-black areas of the mask. At first, the mask input appears as a triangle on the right side of the node, but when you drag it, it turns into an arrow labeled mask. If you cannot see the mask input, ensure that the mask control is disabled or set to none.

The filter image represents the shape and size of the camera aperture used to shoot the input footage. As the clip in the image input is blurred, any out-of-focus highlights ('bokeh') in the clip assume the shape of the filter image. You can create a filter image using the Roto node (Draw > Roto) or the Flare node (Draw > Flare), for example. The filter image can also be a color image. For example, if you want to add color fringing to your out-of-focus highlights to simulate chromatic aberration, you can use the Flare node to easily create a suitable filter image. You don't necessarily need to crop the filter image to a smaller size, as Fast Fourier Transforms are used to speed up convolutions with large filter images.

Controls

channels - The effect is only applied to these channels. If you set this to something other than all or none, you can use the checkboxes on the right to select individual channels.

Use GPU if available - When enabled, rendering occurs on the Local GPU specified, if available, rather than the CPU. You should also select this if you wish to render from the command line with the -gpu option. Note: Enabling this option with no local GPU allows the script to run on the GPU whenever the script is opened on a machine that does have a GPU available. See Nuke 14 Release Notes for more information on the GPUs Nuke supports.

Local GPU - Displays the GPU used for rendering when Use GPU if available is enabled. No GPU is shown when:

- it was not possible to create a context for processing on the selected GPU, such as when there is not enough free memory available on the GPU,
- no suitable GPU was found on your system, or
- Use CPU is selected as the default blink device in the Preferences.

You can select a different GPU, if available, by navigating to the Preferences and selecting an alternative from the default blink device dropdown. Note: Selecting a different GPU requires you to restart Nuke before the change takes effect.

depth channel - Specifies the input channel containing the depth map information. Note: The depth map should not be anti-aliased. If it is, pixels along an edge between two objects can be assigned a depth that is in-between the depth of the front and back objects. This looks wrong, as it suggests that those edge pixels are floating somewhere between the objects.

math - Specifies how the depth channel is used to calculate the distance between the camera and an object. For example, some programs use higher values to denote further away, while in others they mean closer to the camera:

direct - The Z value in the depth channel directly controls blur.
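The layered approach described above (split the image into depth layers, blur each layer with a single size, then composite back to front) can be sketched in a few lines of NumPy. This is not Foundry's actual ZDefocus implementation, and the names here (layered_defocus, box_blur, max_radius) are illustrative assumptions; a real defocus would also convolve with the filter image to shape the bokeh, rather than using a box blur.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur on a 2D array; radius 0 returns the image unchanged."""
    if radius == 0:
        return img
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    # Blur rows, then columns (a separable convolution).
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def layered_defocus(img, depth, focal_plane, n_layers=8, max_radius=4):
    """Split img into n_layers depth slices, blur each slice by its distance
    from the focal plane, and composite the slices from back to front."""
    d_min, d_max = depth.min(), depth.max()
    edges = np.linspace(d_min, d_max, n_layers + 1)
    out = np.zeros_like(img)
    # Iterate from the back (largest depth) to the front, so each new
    # layer goes over the top of the previous ones.
    for i in reversed(range(n_layers)):
        lo, hi = edges[i], edges[i + 1]
        if i == n_layers - 1:
            mask = (depth >= lo) & (depth <= hi)
        else:
            mask = (depth >= lo) & (depth < hi)
        if not mask.any():
            continue
        # One blur size for the whole layer, proportional to its
        # distance from the focal plane.
        mid = 0.5 * (lo + hi)
        radius = int(round(max_radius * abs(mid - focal_plane)
                           / max(d_max - d_min, 1e-6)))
        layer = np.where(mask, img, 0.0)
        out = np.where(mask, box_blur(layer, radius), out)
    return out
```

To reproduce the bokeh behavior the documentation describes, the box blur would be replaced by a convolution of each layer with the filter image, which is where an FFT-based convolution pays off for large filter images.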