sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils package¶
Subpackages¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops package
- Submodules
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.bias_act module
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_gradfix module
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.conv2d_resample module
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.fma module
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.grid_sample_gradfix module
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.ops.upfirdn2d module
- Module contents
Submodules¶
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.custom_ops module¶
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc module¶
- class sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.InfiniteSampler(dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5)[source]¶
Bases: Sampler
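The signature suggests a sampler that yields shuffled dataset indices indefinitely, strided across distributed replicas. A minimal pure-Python sketch of that idea (illustrative only; the upstream implementation is a torch.utils.data.Sampler subclass and additionally supports windowed shuffling via window_size):

```python
import itertools
import random

def infinite_indices(dataset_len, rank=0, num_replicas=1, shuffle=True, seed=0):
    # Sketch of the InfiniteSampler idea: yield dataset indices forever,
    # reshuffling on each pass and striding across replicas so that each
    # rank sees a disjoint slice of every pass.
    rnd = random.Random(seed)
    for _epoch in itertools.count():
        order = list(range(dataset_len))
        if shuffle:
            rnd.shuffle(order)
        yield from order[rank::num_replicas]
```

Wrapping this generator logic in a torch.utils.data.Sampler subclass gives a DataLoader the same endless-iteration behavior.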
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.assert_shape(tensor, ref_shape)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.check_ddp_consistency(module, ignore_regex=None)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.constant(value, shape=None, dtype=None, device=None, memory_format=None)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.copy_params_and_buffers(src_module, dst_module, require_all=False)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.ddp_sync(module, sync)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.named_params_and_buffers(module)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.params_and_buffers(module)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.print_memory_diagnostics(device)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.print_module_summary(module, inputs, max_nesting=3, skip_redundant=True)[source]¶
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.misc.profiled_function(fn)[source]¶
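As a rough illustration of what a shape assertion like misc.assert_shape provides, here is a hypothetical standalone analogue (the real helper operates on torch tensors; the None-as-wildcard convention is an assumption for this sketch):

```python
def check_shape(shape, ref_shape):
    # Hypothetical analogue of misc.assert_shape: compare an actual shape
    # tuple against a reference shape, where None means "any size".
    if len(shape) != len(ref_shape):
        raise AssertionError(f"rank {len(shape)} != expected {len(ref_shape)}")
    for i, (actual, expected) in enumerate(zip(shape, ref_shape)):
        if expected is not None and actual != expected:
            raise AssertionError(f"dim {i}: size {actual} != expected {expected}")
```

Sprinkling such assertions at module boundaries catches shape bugs close to their source instead of deep inside a later matmul.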
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence module¶
Facilities for pickling Python code alongside other data.
The pickled code is automatically imported into a separate Python module during unpickling. This way, any previously exported pickles will remain usable even if the original code is no longer available, or if the current version of the code is not consistent with what was originally pickled.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence._check_pickleable(obj)[source]¶
Check that the given object is pickleable, raising an exception if it is not. This function is expected to be considerably more efficient than actually pickling the object.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence._module_to_src(module)[source]¶
Query the source code of a given Python module.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence._reconstruct_persistent_obj(meta)[source]¶
Hook that is called internally by the pickle module to unpickle a persistent object.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence._src_to_module(src)[source]¶
Get or create a Python module for the given source code.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence.import_hook(hook)[source]¶
Register an import hook that is called whenever a persistent object is being unpickled. A typical use case is to patch the pickled source code to avoid errors and inconsistencies when the API of some imported module has changed.
The hook should have the following signature:
hook(meta) -> modified meta
meta is an instance of dnnlib.EasyDict with the following fields:
- type: Type of the persistent object, e.g. 'class'.
- version: Internal version number of torch_utils.persistence.
- module_src: Original source code of the Python module.
- class_name: Class name in the original Python module.
- state: Internal state of the object.
Example:

    @persistence.import_hook
    def wreck_my_network(meta):
        if meta.class_name == 'MyNetwork':
            print('MyNetwork is being imported. I will wreck it!')
            meta.module_src = meta.module_src.replace("True", "False")
        return meta
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence.is_persistent(obj)[source]¶
Test whether the given object or class is persistent, i.e., whether it will save its source code when pickled.
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.persistence.persistent_class(orig_class)[source]¶
Class decorator that extends a given class to save its source code when pickled.
Example:

    from torch_utils import persistence

    @persistence.persistent_class
    class MyNetwork(torch.nn.Module):
        def __init__(self, num_inputs, num_outputs):
            super().__init__()
            self.fc = MyLayer(num_inputs, num_outputs)
            ...

    @persistence.persistent_class
    class MyLayer(torch.nn.Module):
        ...
When pickled, any instance of MyNetwork and MyLayer will save its source code alongside other internal state (e.g., parameters, buffers, and submodules). This way, any previously exported pickle will remain usable even if the class definitions have been modified or are no longer available.
The decorator saves the source code of the entire Python module containing the decorated class. It does not save the source code of any imported modules. Thus, the imported modules must be available during unpickling, including torch_utils.persistence itself.
It is ok to call functions defined in the same module from the decorated class. However, if the decorated class depends on other classes defined in the same module, they must be decorated as well. This is illustrated in the above example in the case of MyLayer.
It is also possible to employ the decorator just-in-time before calling the constructor. For example:
    cls = MyLayer
    if want_to_make_it_persistent:
        cls = persistence.persistent_class(cls)
    layer = cls(num_inputs, num_outputs)
As an additional feature, the decorator also keeps track of the arguments that were used to construct each instance of the decorated class. The arguments can be queried via obj.init_args and obj.init_kwargs, and they are automatically pickled alongside other object state. A typical use case is to first unpickle a previous instance of a persistent class, and then upgrade it to use the latest version of the source code.
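The argument-tracking part of this behavior can be illustrated with a toy decorator (a sketch only; the real persistent_class additionally captures module source code and hooks into pickling):

```python
import functools

def track_init_args(cls):
    # Toy stand-in for persistent_class's argument tracking: record the
    # constructor arguments on each instance as init_args / init_kwargs.
    orig_init = cls.__init__

    @functools.wraps(orig_init)
    def __init__(self, *args, **kwargs):
        self.init_args = args
        self.init_kwargs = kwargs
        orig_init(self, *args, **kwargs)

    cls.__init__ = __init__
    return cls

@track_init_args
class Layer:
    def __init__(self, num_inputs, num_outputs):
        self.num_inputs, self.num_outputs = num_inputs, num_outputs

# Rebuild a fresh instance from the recorded arguments, as in the
# upgrade pattern described above.
old = Layer(4, 8)
new = Layer(*old.init_args, **old.init_kwargs)
```

In the real workflow the "old" instance would come from a pickle, and parameters and buffers would then be transferred with misc.copy_params_and_buffers.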
sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.training_stats module¶
Facilities for reporting and collecting training statistics across multiple processes and devices. The interface is designed to minimize synchronization overhead as well as the amount of boilerplate in user code.
- class sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.training_stats.Collector(regex='.*', keep_previous=True)[source]¶
Bases: object

Collects the scalars broadcasted by report() and report0() and computes their long-term averages (mean and standard deviation) over user-defined periods of time.
The averages are first collected into internal counters that are not directly visible to the user. They are then copied to the user-visible state as a result of calling update() and can then be queried using mean(), std(), as_dict(), etc. Calling update() also resets the internal counters for the next round, so that the user-visible state effectively reflects averages collected between the last two calls to update().
- Parameters:
regex – Regular expression defining which statistics to collect. The default is to collect everything.
keep_previous – Whether to retain the previous averages if no scalars were collected on a given round (default: True).
- _get_delta(name)[source]¶
Returns the raw moments that were accumulated for the given statistic between the last two calls to update(), or zero if no scalars were collected.
- as_dict()[source]¶
Returns the averages accumulated between the last two calls to update() as a dnnlib.EasyDict. The contents are as follows:
    dnnlib.EasyDict(
        NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT),
        ...
    )
- mean(name)[source]¶
Returns the mean of the scalars that were accumulated for the given statistic between the last two calls to update(), or NaN if no scalars were collected.
- names()[source]¶
Returns the names of all statistics broadcasted so far that match the regular expression specified at construction time.
- num(name)[source]¶
Returns the number of scalars that were accumulated for the given statistic between the last two calls to update(), or zero if no scalars were collected.
- std(name)[source]¶
Returns the standard deviation of the scalars that were accumulated for the given statistic between the last two calls to update(), or NaN if no scalars were collected.
- update()[source]¶
Copies current values of the internal counters to the user-visible state and resets them for the next round.
If keep_previous=True was specified at construction time, the operation is skipped for statistics that have received no scalars since the last update, retaining their previous averages.
This method performs a number of GPU-to-CPU transfers and one torch.distributed.all_reduce(). It is intended to be called periodically in the main training loop, typically once every N training steps.
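The counter-copying behavior described above can be sketched with a toy single-process analogue (names and structure are illustrative; the real Collector synchronizes counters across devices and processes and filters statistic names by regex):

```python
import math

class TinyCollector:
    # Toy sketch of the Collector idea: accumulate raw moments
    # (count, sum, sum of squares) per statistic in internal counters,
    # then copy them to the user-visible state on update().
    def __init__(self):
        self._deltas = {}    # internal counters, reset by update()
        self._visible = {}   # user-visible moments, queried by mean()/std()

    def report(self, name, value):
        n, s, s2 = self._deltas.get(name, (0, 0.0, 0.0))
        self._deltas[name] = (n + 1, s + value, s2 + value * value)

    def update(self):
        self._visible = self._deltas
        self._deltas = {}

    def mean(self, name):
        n, s, _ = self._visible.get(name, (0, 0.0, 0.0))
        return s / n if n else float('nan')

    def std(self, name):
        n, s, s2 = self._visible.get(name, (0, 0.0, 0.0))
        if n == 0:
            return float('nan')
        m = s / n
        return math.sqrt(max(s2 / n - m * m, 0.0))
```

Keeping only three running moments per statistic is what lets the real implementation defer all cross-device synchronization to a single all_reduce inside update().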
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.training_stats._sync(names)[source]¶
Synchronize the global cumulative counters across devices and processes. Called internally by Collector.update().
- sketchkit.stylization.brushstrokesengine.thirdparty.stylegan2_ada_pytorch.torch_utils.training_stats.init_multiprocessing(rank, sync_device)[source]¶
Initializes torch_utils.training_stats for collecting statistics across multiple processes.
This function must be called after torch.distributed.init_process_group() and before Collector.update(). The call is not necessary if multi-process collection is not needed.
- Parameters:
rank – Rank of the current process.
sync_device – PyTorch device to use for inter-process communication, or None to disable multi-process collection. Typically torch.device('cuda', rank).
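A typical wiring, following the ordering constraint stated above (a non-runnable setup fragment; the backend choice and process-group environment are assumed to be provided by the training launcher):

```python
import torch
import torch.distributed as dist
from torch_utils import training_stats

# Must run after init_process_group() and before the first Collector.update().
dist.init_process_group(backend='nccl')
rank = dist.get_rank()
training_stats.init_multiprocessing(
    rank=rank, sync_device=torch.device('cuda', rank))
```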