sketchkit.image2sketch package

Subpackages

Module contents

image2sketch model integrations.

This module exposes a unified API for converting raster images into vector sketches. It mirrors the layout of sketchkit.vectorization so that new image2sketch methods can be added incrementally without changing the public entry points.

class sketchkit.image2sketch.HEDModel(*, checkpoint_dir: str | PathLike | None = None, auto_download: bool = True, postprocess: bool = True, postprocess_threshold: float = 0.09803921568627451, postprocess_small_edge: int = 5)[source]

Bases: object

High-level wrapper around the original HED (Holistically-Nested Edge Detection) Caffe model.

MEAN_BGR = (104.00698793, 116.66876762, 122.67891434)
MODEL_FILENAME = 'hed_pretrained_bsds.caffemodel'
MODEL_SHA1 = '2c5d7842f25f880eec62fc610b500c5cf2aa351d'
MODEL_URL = 'https://vcl.ucsd.edu/hed/hed_pretrained_bsds.caffemodel'
static _download_weights(url: str, destination: Path, expected_sha1: str) None[source]
_infer_single(net: Any, image: Image, size: Tuple[int, int]) Image[source]
_initialise_net() Any[source]
static _to_pil_image(image: Image | ndarray | str | PathLike) Image[source]
static _validate_sha1(path: Path, expected_hex: str) bool[source]
property checkpoint_dir: Path
ensure_assets() None[source]

Ensure the pretrained weights are available locally.

generate(image: Image | ndarray | str | PathLike | Sequence[Image | ndarray | str | PathLike], *, size: int | Tuple[int, int] | None = None) Image | List[Image][source]

Generate edge maps for the provided image(s).
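ensure_assets() downloads the pretrained weights and checks them against MODEL_SHA1 before use. A minimal, self-contained sketch of that checksum step (the helper name and chunk size are illustrative assumptions, not sketchkit's actual code):

```python
import hashlib
from pathlib import Path


def validate_sha1(path: Path, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the file at *path* hashes to *expected_hex* (SHA-1)."""
    digest = hashlib.sha1()
    with path.open("rb") as fh:
        # Hash in chunks so large weight files never sit fully in memory.
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex
```

A wrapper like this would typically delete and re-download the file when validation fails, rather than loading corrupt weights.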

class sketchkit.image2sketch.Image2SketchModel(*args, **kwargs)[source]

Bases: Protocol

Protocol describing models capable of turning images into sketches.

generate(image: Image | ndarray | str | Sequence[Image | ndarray | str], *, size: int | Tuple[int, int] = 512) Sketch | Image | list[Sketch] | list[Image][source]

Produce sketch or raster results for the provided image input(s).
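Because Image2SketchModel is a Protocol, a backend satisfies it structurally: any class with a matching generate() conforms, with no base class to inherit. A self-contained sketch of the pattern (the names here are illustrative, not sketchkit's):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SketchBackend(Protocol):
    """Anything exposing a matching generate() satisfies this protocol."""

    def generate(self, image, *, size=512): ...


class EchoBackend:
    """A stub backend; structural typing makes it a SketchBackend."""

    def generate(self, image, *, size=512):
        return f"sketch({image}, {size})"


# Type checkers accept this assignment without any inheritance.
backend: SketchBackend = EchoBackend()
```

With @runtime_checkable, isinstance() checks only method presence, not signatures, so it is a smoke test rather than a full conformance check.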

class sketchkit.image2sketch.PhotoSketchModel(*, checkpoint_dir: str | PathLike | None = None, model_name: str = 'pretrained', auto_download: bool = True, device: str | device | None = None, input_size: int = 256)[source]

Bases: object

High-level wrapper around the PhotoSketch Pix2Pix generator.

DEFAULT_CHECKPOINT_DIR = PosixPath('~/.cache/sketchkit/image2sketch/photosketch')
DEFAULT_INPUT_SIZE = 256
DEFAULT_MODEL_NAME = 'pretrained'
GENERATOR_FILENAME = 'latest_net_G.pth'
GENERATOR_SHA1 = '5968e8f007c650008a265c11f2d2a3887e5840d4'
INPUT_CHANNELS = 3
MODEL_ARCHIVE_NAME = 'photosketch_pretrained.zip'
MODEL_DOWNLOAD_URL = 'https://drive.google.com/uc?export=download&id=1TQf-LyS8rRDDapdcTnEgWzYJllPgiXdj'
MODEL_DRIVE_ID = '1TQf-LyS8rRDDapdcTnEgWzYJllPgiXdj'
NUM_FILTERS = 64
OUTPUT_CHANNELS = 1
RESNET_BLOCKS = 9
_download_and_extract() None[source]
_extract_archive(archive_path: Path) None[source]
_initialise_generator() Module[source]
_install_from_local_archive() bool[source]
_install_from_local_file() bool[source]
_prepare_tensor(image: Image) Tensor[source]
static _resolve_device(device: str | device | None) device[source]
static _resolve_size(size: int | Tuple[int, int]) Tuple[int, int][source]
_tensor_to_image(tensor: Tensor, size: Tuple[int, int]) Image[source]
static _to_pil_image(image: Image | ndarray | str | PathLike) Image[source]
static _validate_sha1(path: Path, expected: str) bool[source]
property checkpoint_dir: Path
ensure_assets() None[source]

Ensure that the pretrained PhotoSketch weights exist locally.

generate(image: Image | ndarray | str | PathLike | Sequence[Image | ndarray | str | PathLike], *, size: int | Tuple[int, int] | None = None) Image | List[Image][source]

Generate sketch images for the provided image input(s).

generate_batch(images: Sequence[Image | ndarray | str | PathLike], *, size: int | Tuple[int, int] | None = None) List[Image][source]

Generate sketches for a batch of images.
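Both generate() and generate_batch() accept size as either a single integer (square output) or an explicit (width, height) pair. A plausible sketch of how _resolve_size could normalise that argument (an assumption about the semantics, not the actual implementation):

```python
from typing import Tuple, Union


def resolve_size(size: Union[int, Tuple[int, int]]) -> Tuple[int, int]:
    """Normalise a size spec into a (width, height) tuple."""
    if isinstance(size, int):
        if size <= 0:
            raise ValueError("size must be positive")
        # A bare integer means a square canvas.
        return (size, size)
    width, height = size
    if width <= 0 or height <= 0:
        raise ValueError("size must be positive")
    return (int(width), int(height))
```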

class sketchkit.image2sketch.SketchGenerator(method: str = 'SwiftSketch')[source]

Bases: object

A convenience class that dispatches image-to-sketch generation to the model registered for the chosen method.

run(input: Image | ndarray | str | Sequence[Image | ndarray | str], size: int | Tuple[int, int] | None = None) Sketch | Image | list[Sketch] | list[Image][source]

Generate sketch outputs from an input image.

class sketchkit.image2sketch.SwiftSketchModel(*, device: str | device | None = None, use_refine: bool = True, guidance_param: float = 2.5, fix_scale: bool = True, checkpoint_dir: str | PathLike | None = None, auto_download: bool = True, mask_model_factory: Callable[[device], Module] | None = None, feature_extractor_factory: Callable[[device, str], Module] | None = None, diffusion_factory: Callable[[SimpleNamespace], Tuple[Module, object]] | None = None, refine_model_factory: Callable[[SimpleNamespace], Module] | None = None)[source]

Bases: object

High-level wrapper around the SwiftSketch diffusion pipeline.

static _control_points_to_sketch(control_points: Tensor, canvas_size: Tuple[int, int]) Sketch[source]
_default_diffusion_factory(args: SimpleNamespace) Tuple[Module, object][source]
_default_feature_extractor_factory(device: device, image_features_type: str) Module[source]
_default_mask_model_factory(device: device) Module[source]
static _device_index(device: device) int[source]
_ensure_archive(archive_path: Path, url: str) None[source]
static _find_model_member(zf: ZipFile, folder: str) str[source]
_initialise_pipeline() None[source]
static _load_args(archive_path: Path, folder: str) SimpleNamespace[source]
_load_state_dict(archive_path: Path, folder: str) dict[source]
static _resolve_device(device: str | device | None) device[source]

Determine which device SwiftSketch should run on.

By default, a CUDA-capable GPU is required. Users can override the selection via the SKETCHKIT_SWIFTSKETCH_DEVICE environment variable or by explicitly passing a device argument. When neither is supplied and CUDA is unavailable, the model refuses to run unless the SKETCHKIT_ALLOW_CPU_FALLBACK environment variable is set to "1". This makes it possible to exercise the pipeline on CPU for testing while keeping the production default focused on GPU runtimes.
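The selection order described above can be sketched in pure Python. This is a hedged reconstruction: cuda_available is passed in so the sketch runs without torch (the real method would query torch.cuda.is_available()), and the exact precedence between the explicit argument and the environment variable is an assumption.

```python
import os


def resolve_device(device, *, cuda_available: bool) -> str:
    """Pick a device string following the documented precedence."""
    # 1. An explicit argument wins outright.
    if device is not None:
        return str(device)
    # 2. Otherwise honour the environment override.
    env = os.environ.get("SKETCHKIT_SWIFTSKETCH_DEVICE")
    if env:
        return env
    # 3. Default to CUDA when a GPU is available.
    if cuda_available:
        return "cuda"
    # 4. Fall back to CPU only with the explicit opt-in flag.
    if os.environ.get("SKETCHKIT_ALLOW_CPU_FALLBACK") == "1":
        return "cpu"
    raise RuntimeError(
        "SwiftSketch requires CUDA; set SKETCHKIT_ALLOW_CPU_FALLBACK=1 "
        "to run on CPU for testing."
    )
```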

static _resolve_size(size: int | Tuple[int, int]) Tuple[int, int][source]
static _to_pil_image(image: Image | ndarray | str | PathLike) Image[source]
property checkpoint_dir: Path
property device: device
ensure_assets() None[source]

Ensure that the model archives are available locally.

generate(image: Image | ndarray | str | PathLike | Sequence[Image | ndarray | str | PathLike], size: int | Tuple[int, int] | None = None) Sketch | List[Sketch][source]

Generate sketches for the provided image(s).

generate_batch(images: Sequence[Image | ndarray | str | PathLike], *, size: int | Tuple[int, int] | None = None) List[Sketch][source]
property model_archive_path: Path
property refine_archive_path: Path
sketchkit.image2sketch.available_methods() list[str][source]

List the names of registered image2sketch methods.

sketchkit.image2sketch.create_model(method: str = 'swiftsketch') Image2SketchModel[source]

Instantiate a model for the requested method.

sketchkit.image2sketch.get_method(name: str) Callable[[...], Image2SketchModel][source]

Return the factory registered for the given method name.

sketchkit.image2sketch.image2sketch(image: Image | ndarray | str | Sequence[Image | ndarray | str], *, method: str = 'swiftsketch', size: int | Tuple[int, int] | None = None) Sketch | Image | list[Sketch] | list[Image][source]

Generate sketch or raster outputs for the provided image using the requested method.

sketchkit.image2sketch.register_method(name: str, factory: Callable[[...], Image2SketchModel]) None[source]

Register a new image2sketch method.

Parameters:
  • name – Canonical name for the method. Names are case-sensitive.

  • factory – Callable returning an object that implements Image2SketchModel.
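The module-level functions above describe a simple name-to-factory registry. A self-contained sketch of that pattern (a dict-backed registry; sketchkit's internal storage and error messages may differ):

```python
from typing import Any, Callable, Dict, List

_METHODS: Dict[str, Callable[..., Any]] = {}


def register_method(name: str, factory: Callable[..., Any]) -> None:
    """Register a factory under a canonical, case-sensitive name."""
    _METHODS[name] = factory


def available_methods() -> List[str]:
    """List the names of registered methods."""
    return sorted(_METHODS)


def get_method(name: str) -> Callable[..., Any]:
    """Return the registered factory, or raise KeyError listing known names."""
    try:
        return _METHODS[name]
    except KeyError:
        raise KeyError(
            f"unknown method {name!r}; known: {available_methods()}"
        ) from None


def create_model(method: str = "swiftsketch", **kwargs: Any) -> Any:
    """Instantiate a model via its registered factory."""
    return get_method(method)(**kwargs)
```

Keeping the registry behind these four functions is what lets new image2sketch backends be added without touching the public entry points.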