
Create Magic Mask

Documentation

  • Class name: CreateMagicMask
  • Category: KJNodes/masking/generate
  • Output node: False

This node generates dynamic, animated masks from a combination of parameters: frame count, transitions, depth, distortion, and seed. It is designed to create visually complex, customizable masks for multimedia applications, lending visual content a magical or mystical effect.

Input types

Required

  • frames
    • Specifies the number of frames for the mask animation, affecting the length and fluidity of the generated mask sequence.
    • Comfy dtype: INT
    • Python dtype: int
  • depth
    • Determines the perceived depth of the mask, adding a sense of three-dimensionality to the generated visual effect.
    • Comfy dtype: INT
    • Python dtype: int
  • distortion
    • Controls the level of distortion applied to the mask, allowing for the creation of more abstract or surreal visual effects.
    • Comfy dtype: FLOAT
    • Python dtype: float
  • seed
    • Sets a seed value for random number generation, ensuring reproducibility of the mask patterns.
    • Comfy dtype: INT
    • Python dtype: int
  • transitions
    • Defines the types and characteristics of transitions between mask frames, contributing to the visual complexity and dynamism of the mask.
    • Comfy dtype: INT
    • Python dtype: int
  • frame_width
    • Specifies the width of each frame in the mask animation, defining the horizontal dimension of the generated masks.
    • Comfy dtype: INT
    • Python dtype: int
  • frame_height
    • Specifies the height of each frame in the mask animation, defining the vertical dimension of the generated masks.
    • Comfy dtype: INT
    • Python dtype: int
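Of these inputs, seed is what makes runs repeatable: the node seeds a NumPy Generator with it, so identical seeds produce identical random coordinate transforms and therefore identical masks. A minimal illustration of that reproducibility:

```python
import numpy as np

# Two generators seeded with the same value (the node's default is 123)
# produce the same stream of random draws.
rng_a = np.random.default_rng(123)
rng_b = np.random.default_rng(123)

draw_a = rng_a.random(4)
draw_b = rng_b.random(4)
```
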

Output types

  • mask
    • Comfy dtype: MASK
    • The primary output mask generated by the node, offering a dynamic visual effect based on the input parameters.
    • Python dtype: torch.Tensor
  • mask_inverted
    • Comfy dtype: MASK
    • An inverted version of the primary mask, providing an alternative visual effect that can be used in various applications.
    • Python dtype: torch.Tensor
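The two outputs are complements of each other: mask_inverted is simply 1.0 - mask per pixel, since masks are float tensors in the range [0, 1]. A minimal sketch of the relationship:

```python
import torch

# Masks are float tensors in [0, 1]; inversion is a per-pixel complement.
mask = torch.tensor([[0.0, 0.25], [0.75, 1.0]])
mask_inverted = 1.0 - mask

# Inverting twice recovers the original mask.
restored = 1.0 - mask_inverted
```
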

Usage tips

  • Infra type: CPU
  • Common nodes: unknown

Source code

import numpy as np
import torch
import matplotlib.pyplot as plt
from PIL import Image

class CreateMagicMask:

    RETURN_TYPES = ("MASK", "MASK",)
    RETURN_NAMES = ("mask", "mask_inverted",)
    FUNCTION = "createmagicmask"
    CATEGORY = "KJNodes/masking/generate"

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                 "frames": ("INT", {"default": 16,"min": 2, "max": 4096, "step": 1}),
                 "depth": ("INT", {"default": 12,"min": 1, "max": 500, "step": 1}),
                 "distortion": ("FLOAT", {"default": 1.5,"min": 0.0, "max": 100.0, "step": 0.01}),
                 "seed": ("INT", {"default": 123,"min": 0, "max": 99999999, "step": 1}),
                 "transitions": ("INT", {"default": 1,"min": 1, "max": 20, "step": 1}),
                 "frame_width": ("INT", {"default": 512,"min": 16, "max": 4096, "step": 1}),
                 "frame_height": ("INT", {"default": 512,"min": 16, "max": 4096, "step": 1}),
            },
        }

    def createmagicmask(self, frames, transitions, depth, distortion, seed, frame_width, frame_height):
        from ..utility.magictex import coordinate_grid, random_transform, magic
        rng = np.random.default_rng(seed)
        out = []
        coords = coordinate_grid((frame_width, frame_height))

        # Number of frames in each transition; integer division drops any
        # remainder, so fewer than `frames` frames are produced when
        # `frames` is not divisible by `transitions`
        frames_per_transition = frames // transitions

        # Generate a base set of parameters
        base_params = {
            "coords": random_transform(coords, rng),
            "depth": depth,
            "distortion": distortion,
        }
        for t in range(transitions):
            # Pick random start and end coordinate grids for this transition;
            # depth and distortion stay fixed from base_params
            params1 = base_params.copy()
            params2 = base_params.copy()

            params1['coords'] = random_transform(coords, rng)
            params2['coords'] = random_transform(coords, rng)

            for i in range(frames_per_transition):
                # Compute the interpolation factor
                alpha = i / frames_per_transition

                # Interpolate between the two sets of parameters
                params = params1.copy()
                params['coords'] = (1 - alpha) * params1['coords'] + alpha * params2['coords']

                tex = magic(**params)

                # Render at frame_width x frame_width pixels (10 in x dpi);
                # imshow with aspect='auto' stretches the texture to fill
                # the square figure
                dpi = frame_width / 10
                fig = plt.figure(figsize=(10, 10), dpi=dpi)

                ax = fig.add_subplot(111)
                plt.subplots_adjust(left=0, right=1, bottom=0, top=1)

                ax.get_yaxis().set_ticks([])
                ax.get_xaxis().set_ticks([])
                ax.imshow(tex, aspect='auto')

                fig.canvas.draw()
                # buffer_rgba() is the public API for reading back the
                # rendered RGBA pixels (avoids the private _renderer attribute)
                img = np.asarray(fig.canvas.buffer_rgba())

                plt.close(fig)

                pil_img = Image.fromarray(img).convert("L")
                mask = torch.tensor(np.array(pil_img)) / 255.0

                out.append(mask)

        masks = torch.stack(out, dim=0)
        return (masks, 1.0 - masks,)
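The core of the loop above is a linear interpolation between two randomly transformed coordinate grids, with alpha sweeping from 0 toward 1 across each transition. The sketch below isolates that scheme in plain NumPy; random_transform here is a hypothetical toy stand-in for the helper imported from the package's utility.magictex module, not the real implementation.

```python
import numpy as np

def random_transform(coords, rng):
    # Toy stand-in: jitter the coordinate grid with a random scale and shift.
    # The real helper in utility.magictex applies richer transforms.
    scale = rng.uniform(0.5, 1.5)
    shift = rng.uniform(-1.0, 1.0)
    return coords * scale + shift

# A small coordinate grid, analogous to coordinate_grid((width, height))
coords = np.stack(np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)))
rng = np.random.default_rng(123)

frames_per_transition = 4
start = random_transform(coords, rng)
end = random_transform(coords, rng)

interpolated = []
for i in range(frames_per_transition):
    # alpha is 0.0 on the first frame and approaches (but never reaches) 1.0,
    # so the next transition can pick up where this one leaves off
    alpha = i / frames_per_transition
    interpolated.append((1 - alpha) * start + alpha * end)
```

Because alpha starts at exactly 0, the first frame of each transition reproduces the start grid, which keeps the animation continuous frame to frame.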