sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops package¶
Submodules¶
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.bias_act module¶
Custom PyTorch ops for efficient bias and activation.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.bias_act._bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None)[source]¶
Fast CUDA implementation of bias_act() using custom ops.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.bias_act._init()[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.bias_act.bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda')[source]¶
Fused bias and activation function.
Adds bias b to activation tensor x, evaluates activation function act, and scales the result by gain. Each of the steps is optional. In most cases, the fused op is considerably more efficient than performing the same calculation using standard PyTorch ops. It supports first and second order gradients, but not third order gradients.
- Parameters:
x – Input activation tensor. Can be of any shape.
b – Bias vector, or None to disable. Must be a 1D tensor of the same type as x. The shape must be known, and it must match the dimension of x corresponding to dim.
dim – The dimension in x corresponding to the elements of b. The value of dim is ignored if b is not specified.
act – Name of the activation function to evaluate, or “linear” to disable. Can be e.g. “relu”, “lrelu”, “tanh”, “sigmoid”, “swish”, etc. See activation_funcs for a full list. None is not allowed.
alpha – Shape parameter for the activation function, or None to use the default.
gain – Scaling factor for the output tensor, or None to use the default. See activation_funcs for the default scaling of each activation function. If unsure, consider specifying 1.
clamp – Clamp the output values to [-clamp, +clamp], or None to disable the clamping (default).
impl – Name of the implementation to use. Can be “ref” or “cuda” (default).
- Returns:
Tensor of the same shape and datatype as x.
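The fused semantics above can be sketched in plain Python on flat lists. This is a simplified stand-in for the real tensor op, not the implementation: `b` is applied elementwise rather than broadcast along `dim`, and the per-activation default gains are an assumption modeled on the StyleGAN2-ADA convention (sqrt(2) for relu/lrelu, 1 otherwise).

```python
import math

def bias_act_ref(x, b=None, act="linear", alpha=None, gain=None, clamp=None):
    # Elementwise sketch of: clamp(gain * act(x + b)).
    # x and b are flat lists here (the real op broadcasts b along `dim`).
    funcs = {
        "linear": (lambda v, a: v, 1.0),
        "relu": (lambda v, a: max(v, 0.0), math.sqrt(2.0)),
        "lrelu": (lambda v, a: v if v >= 0.0 else a * v, math.sqrt(2.0)),
        "tanh": (lambda v, a: math.tanh(v), 1.0),
        "sigmoid": (lambda v, a: 1.0 / (1.0 + math.exp(-v)), 1.0),
    }
    fn, default_gain = funcs[act]
    if alpha is None:
        alpha = 0.2 if act == "lrelu" else 0.0  # assumed default lrelu slope
    if gain is None:
        gain = default_gain
    out = []
    for i, v in enumerate(x):
        if b is not None:
            v = v + b[i]          # add bias
        v = fn(v, alpha) * gain   # activation, then output scaling
        if clamp is not None:
            v = max(-clamp, min(clamp, v))  # clamp to [-clamp, +clamp]
        out.append(v)
    return out
```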
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix module¶
Custom replacement for torch.nn.functional.conv2d that supports arbitrarily high order gradients with zero performance penalty.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix._conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix._should_use_custom_op(input)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix._tuple_of_ints(xs, ndim)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]¶
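conv2d() mirrors the signature of torch.nn.functional.conv2d, where arguments such as stride and padding may be a single int or a pair. The private _tuple_of_ints helper presumably normalizes those arguments; a hypothetical stand-alone sketch of that behavior:

```python
def tuple_of_ints(xs, ndim):
    # Broadcast a scalar to an ndim-tuple, or validate a sequence of
    # length ndim, the way a stride/padding normalizer typically would.
    # (Hypothetical stand-in for the private _tuple_of_ints.)
    xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
    assert len(xs) == ndim
    assert all(isinstance(x, int) for x in xs)
    return xs
```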
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_resample module¶
2D convolution with optional up/downsampling.
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.fma module¶
Fused multiply-add, with slightly faster gradients than torch.addcmul().
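A minimal reference for the fused semantics, on flat Python lists rather than tensors (illustrative only; the real op also handles broadcasting):

```python
def fma_ref(a, b, c):
    # Elementwise fused multiply-add: a * b + c.
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]
```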
- class sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.fma._FusedMultiplyAdd(*args, **kwargs)[source]¶
Bases: Function
- _backward_cls¶
alias of _FusedMultiplyAddBackward
- static backward(ctx, dout)[source]¶
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad: a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, a, b, c)[source]¶
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See combining-forward-context for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See extending-autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or ctx.save_for_forward() if they are intended to be used in jvp.
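The gradients that _FusedMultiplyAdd's backward must produce follow directly from y = a * b + c. A pure-Python sketch of those formulas (elementwise lists stand in for tensors; this is not the actual implementation):

```python
def fma_backward_ref(dout, a, b):
    # For y = a * b + c, the vector-Jacobian products are:
    #   dL/da = dout * b,  dL/db = dout * a,  dL/dc = dout
    da = [g * bi for g, bi in zip(dout, b)]
    db = [g * ai for g, ai in zip(dout, a)]
    dc = list(dout)
    return da, db, dc
```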
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.grid_sample_gradfix module¶
Custom replacement for torch.nn.functional.grid_sample that supports arbitrarily high order gradients between the input and output. Only works on 2D images and assumes mode=’bilinear’, padding_mode=’zeros’, align_corners=False.
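The assumptions stated above (2D, mode='bilinear', padding_mode='zeros', align_corners=False) pin down the sampling math exactly. A pure-Python sketch of that forward computation on a single-channel image (illustrative, not the actual op):

```python
import math

def grid_sample_ref(img, grid):
    # img: H x W list of lists (one channel); grid: list of (gx, gy)
    # coordinates in [-1, 1]. Bilinear sampling with zeros padding and
    # align_corners=False.
    H, W = len(img), len(img[0])

    def pix(ix, iy):
        # zeros padding: out-of-bounds reads return 0
        return img[iy][ix] if 0 <= ix < W and 0 <= iy < H else 0.0

    out = []
    for gx, gy in grid:
        # align_corners=False maps [-1, 1] to [-0.5, size - 0.5]
        x = (gx + 1.0) * W / 2.0 - 0.5
        y = (gy + 1.0) * H / 2.0 - 0.5
        x0, y0 = math.floor(x), math.floor(y)
        tx, ty = x - x0, y - y0
        out.append(
            pix(x0, y0) * (1 - tx) * (1 - ty)
            + pix(x0 + 1, y0) * tx * (1 - ty)
            + pix(x0, y0 + 1) * (1 - tx) * ty
            + pix(x0 + 1, y0 + 1) * tx * ty
        )
    return out
```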
- class sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.grid_sample_gradfix._GridSample2dBackward(*args, **kwargs)[source]¶
Bases: Function
- _backward_cls¶
alias of _GridSample2dBackwardBackward
- static backward(ctx, grad2_grad_input, grad2_grad_grid)[source]¶
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad: a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, grad_output, input, grid)[source]¶
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See combining-forward-context for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See extending-autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or ctx.save_for_forward() if they are intended to be used in jvp.
- class sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.grid_sample_gradfix._GridSample2dForward(*args, **kwargs)[source]¶
Bases: Function
- _backward_cls¶
alias of _GridSample2dForwardBackward
- static backward(ctx, grad_output)[source]¶
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor, or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad: a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, input, grid)[source]¶
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):
@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
See combining-forward-context for more details
Usage 2 (Separate forward and ctx):
@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
The forward no longer accepts a ctx argument.
Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.
See extending-autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp), or ctx.save_for_forward() if they are intended to be used in jvp.
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d module¶
Custom PyTorch ops for efficient resampling of 2D images.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d._get_filter_size(f)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d._init()[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d._parse_padding(padding)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d._parse_scaling(scaling)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d._upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1)[source]¶
Fast CUDA implementation of upfirdn2d() using custom ops.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d.downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda')[source]¶
Downsample a batch of 2D images using the given 2D FIR filter.
By default, the result is padded so that its shape is a fraction of the input. User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.
- Parameters:
x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].
f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).
down – Integer downsampling factor. Can be a single int or a list/tuple [x, y] (default: 2).
padding – Padding with respect to the input. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).
flip_filter – False = convolution, True = correlation (default: False).
gain – Overall scaling factor for signal magnitude (default: 1).
impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).
- Returns:
Tensor of the shape [batch_size, num_channels, out_height, out_width].
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d.filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda')[source]¶
Filter a batch of 2D images using the given 2D FIR filter.
By default, the result is padded so that its shape matches the input. User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.
- Parameters:
x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].
f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).
padding – Padding with respect to the output. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).
flip_filter – False = convolution, True = correlation (default: False).
gain – Overall scaling factor for signal magnitude (default: 1).
impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).
- Returns:
Tensor of the shape [batch_size, num_channels, out_height, out_width].
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d.setup_filter(f, device=device(type='cpu'), normalize=True, flip_filter=False, gain=1, separable=None)[source]¶
Convenience function to setup 2D FIR filter for upfirdn2d().
- Parameters:
f – Torch tensor, numpy array, or python list of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), [] (impulse), or None (identity).
device – Result device (default: cpu).
normalize – Normalize the filter so that it retains the magnitude for constant input signal (DC)? (default: True).
flip_filter – Flip the filter? (default: False).
gain – Overall scaling factor for signal magnitude (default: 1).
separable – Return a separable filter? (default: select automatically).
- Returns:
Float32 tensor of the shape [filter_height, filter_width] (non-separable) or [filter_taps] (separable).
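A simplified sketch of the documented normalization for the separable (1D tap list) case. The real function also handles 2D filters, device placement, and the automatic separable/non-separable decision, and it may distribute gain differently between the two passes of a separable filter:

```python
def setup_filter_ref(f, normalize=True, gain=1.0):
    # f: flat list of FIR taps, or None for the identity filter.
    if f is None:
        f = [1.0]  # identity
    # Normalize so a constant (DC) input keeps its magnitude,
    # i.e. the taps sum to 1.
    s = sum(f)
    if normalize and s != 0:
        f = [v / s for v in f]
    # Apply the overall scaling factor.
    return [v * gain for v in f]
```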
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d.upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda')[source]¶
Pad, upsample, filter, and downsample a batch of 2D images.
Performs the following sequence of operations for each channel:
Upsample the image by inserting N-1 zeros after each pixel (up).
Pad the image with the specified number of zeros on each side (padding). Negative padding corresponds to cropping the image.
Convolve the image with the specified 2D FIR filter (f), shrinking it so that the footprint of all output pixels lies within the input image.
Downsample the image by keeping every Nth pixel (down).
This sequence of operations bears close resemblance to scipy.signal.upfirdn(). The fused op is considerably more efficient than performing the same calculation using standard PyTorch ops. It supports gradients of arbitrary order.
- Parameters:
x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].
f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).
up – Integer upsampling factor. Can be a single int or a list/tuple [x, y] (default: 1).
down – Integer downsampling factor. Can be a single int or a list/tuple [x, y] (default: 1).
padding – Padding with respect to the upsampled image. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).
flip_filter – False = convolution, True = correlation (default: False).
gain – Overall scaling factor for signal magnitude (default: 1).
impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).
- Returns:
Tensor of the shape [batch_size, num_channels, out_height, out_width].
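The four steps above can be sketched in 1D with plain Python lists. This is a simplified model of the 2D op, not the implementation; negative padding (cropping) is omitted:

```python
def upfirdn1d_ref(x, f, up=1, down=1, pad=(0, 0)):
    # Step 1: upsample by inserting up-1 zeros after each sample.
    y = []
    for v in x:
        y.append(v)
        y.extend([0.0] * (up - 1))
    # Step 2: zero-pad on each side (negative padding/cropping omitted).
    y = [0.0] * pad[0] + y + [0.0] * pad[1]
    # Step 3: convolve with the FIR filter f ("valid" region only, so the
    # footprint of every output sample lies inside the padded signal).
    n = len(y) - len(f) + 1
    z = [
        sum(y[i + j] * f[len(f) - 1 - j] for j in range(len(f)))
        for i in range(n)
    ]
    # Step 4: downsample by keeping every down-th sample.
    return z[::down]
```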
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d.upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda')[source]¶
Upsample a batch of 2D images using the given 2D FIR filter.
By default, the result is padded so that its shape is a multiple of the input. User-specified padding is applied on top of that, with negative values indicating cropping. Pixels outside the image are assumed to be zero.
- Parameters:
x – Float32/float64/float16 input tensor of the shape [batch_size, num_channels, in_height, in_width].
f – Float32 FIR filter of the shape [filter_height, filter_width] (non-separable), [filter_taps] (separable), or None (identity).
up – Integer upsampling factor. Can be a single int or a list/tuple [x, y] (default: 2).
padding – Padding with respect to the output. Can be a single number or a list/tuple [x, y] or [x_before, x_after, y_before, y_after] (default: 0).
flip_filter – False = convolution, True = correlation (default: False).
gain – Overall scaling factor for signal magnitude (default: 1).
impl – Implementation to use. Can be ‘ref’ or ‘cuda’ (default: ‘cuda’).
- Returns:
Tensor of the shape [batch_size, num_channels, out_height, out_width].