
Mikey Sampler Tiled

Documentation

  • Class name: Mikey Sampler Tiled
  • Category: Mikey/Sampling
  • Output node: False

The Mikey Sampler Tiled node generates a high-resolution image in two phases. It first samples with the base model and then the refiner model, decodes the latent, and enlarges the result with an upscale model. It then re-samples the enlarged image tile by tile at a configurable denoise strength, keeping the final output sharp and coherent across tile boundaries.

Input types

Required

  • base_model
    • Specifies the base model used for generating the initial samples. This model forms the foundation of the sampling process, influencing the overall quality and characteristics of the generated tiles.
    • Comfy dtype: MODEL
    • Python dtype: torch.nn.Module
  • refiner_model
    • Defines the model used for refining the initial samples. The refiner model enhances the details and coherence of the tiles, ensuring a seamless integration across the tiled output.
    • Comfy dtype: MODEL
    • Python dtype: torch.nn.Module
  • samples
    • Represents the latent representations of the initial samples. These latents are the starting point for the sampling process, serving as the basis for generating and refining the tiles.
    • Comfy dtype: LATENT
    • Python dtype: Dict[str, torch.Tensor]
  • vae
    • The variational autoencoder used in conjunction with the base and refiner models. It plays a crucial role in encoding and decoding the samples, facilitating the generation and refinement of tiles.
    • Comfy dtype: VAE
    • Python dtype: torch.nn.Module
  • positive_cond_base
    • The positive conditioning applied to the base model. This conditioning guides the base model towards generating samples that align with the desired attributes.
    • Comfy dtype: CONDITIONING
    • Python dtype: List[Tuple[torch.Tensor, Dict[str, torch.Tensor]]]
  • negative_cond_base
    • The negative conditioning applied to the base model. It steers the base model away from generating samples with undesired attributes, ensuring the quality of the initial tiles.
    • Comfy dtype: CONDITIONING
    • Python dtype: List[Tuple[torch.Tensor, Dict[str, torch.Tensor]]]
  • positive_cond_refiner
    • The positive conditioning for the refiner model. It influences the refinement process, encouraging the model to enhance the tiles in a way that aligns with the desired outcomes.
    • Comfy dtype: CONDITIONING
    • Python dtype: List[Tuple[torch.Tensor, Dict[str, torch.Tensor]]]
  • negative_cond_refiner
    • The negative conditioning for the refiner model. It helps in avoiding the enhancement of undesired attributes during the refinement process, maintaining the integrity of the tiles.
    • Comfy dtype: CONDITIONING
    • Python dtype: List[Tuple[torch.Tensor, Dict[str, torch.Tensor]]]
  • model_name
    • The filename of the upscale model used to enlarge the decoded image before the tiling pass, chosen from the models available in the upscale_models folder.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: str
  • seed
    • A seed value for random number generation, ensuring reproducibility of the sampling process. It allows for consistent generation of tiles across different runs.
    • Comfy dtype: INT
    • Python dtype: int
  • upscale_by
    • The factor by which the image is upscaled before tiling. This parameter controls the resolution of the final output; the target dimensions are snapped down to a multiple of 8 (see the worked example after this list).
    • Comfy dtype: FLOAT
    • Python dtype: float
  • tiler_denoise
    • The denoise strength used when re-sampling each tile. Lower values keep tiles close to the upscaled source image; higher values let the sampler change more detail within each tile.
    • Comfy dtype: FLOAT
    • Python dtype: float
  • tiler_model
    • Specifies whether the base or refiner model is used for tiling. This choice affects the characteristics and quality of the generated tiles.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: str
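
The effect of upscale_by on the final resolution follows the arithmetic used in phase_one of the source code below: the decoded image size is multiplied by the factor and then snapped down to the nearest multiple of 8. A minimal sketch with made-up numbers:

org_width, org_height = 1024, 1024  # decoded image size (hypothetical)
upscale_by = 1.5
upscaled_width = int(org_width * upscale_by // 8 * 8)    # 1536
upscaled_height = int(org_height * upscale_by // 8 * 8)  # 1536
# a product that is not a multiple of 8 is rounded down, e.g. 1000 * 1.3 -> 1296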

Output types

  • tiled_image
    • Comfy dtype: IMAGE
    • The final output after the tiled re-sampling pass, assembled from the individually processed tiles into one seamless image.
    • Python dtype: torch.Tensor
  • upscaled_image
    • Comfy dtype: IMAGE
    • The intermediate image from phase one: the base/refiner result decoded and enlarged with the upscale model, before the tiled re-sampling pass.
    • Python dtype: torch.Tensor

Usage tips

  • Infra type: GPU
  • Common nodes: unknown

Source code

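# Note: the helpers referenced below (common_ksampler, ImageScale, VAEDecode, UpscaleModelLoader,
# ImageUpscaleWithModel, folder_paths, tensor2pil, pil2tensor, run_tiler) are imported from ComfyUI
# or defined elsewhere in mikey_nodes.py; this excerpt assumes they are already in scope.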
class MikeySamplerTiled:
    @classmethod
    def INPUT_TYPES(s):

        return {"required": {"base_model": ("MODEL",), "refiner_model": ("MODEL",), "samples": ("LATENT",), "vae": ("VAE",),
                             "positive_cond_base": ("CONDITIONING",), "negative_cond_base": ("CONDITIONING",),
                             "positive_cond_refiner": ("CONDITIONING",), "negative_cond_refiner": ("CONDITIONING",),
                             "model_name": (folder_paths.get_filename_list("upscale_models"), ),
                             "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
                             "upscale_by": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 10.0, "step": 0.1}),
                             "tiler_denoise": ("FLOAT", {"default": 0.25, "min": 0.0, "max": 1.0, "step": 0.05}),
                             "tiler_model": (["base", "refiner"], {"default": "base"}),}}

    RETURN_TYPES = ('IMAGE', 'IMAGE',)
    RETURN_NAMES = ('tiled_image', 'upscaled_image',)
    FUNCTION = 'run'
    CATEGORY = 'Mikey/Sampling'

    def phase_one(self, base_model, refiner_model, samples, positive_cond_base, negative_cond_base,
                  positive_cond_refiner, negative_cond_refiner, upscale_by, model_name, seed, vae):
        image_scaler = ImageScale()
        vaedecoder = VAEDecode()
        uml = UpscaleModelLoader()
        upscale_model = uml.load_model(model_name)[0]
        iuwm = ImageUpscaleWithModel()
        # step 1: run the base model for steps 0-14 of a 30-step schedule, leaving residual noise
        sample1 = common_ksampler(base_model, seed, 30, 6.5, 'dpmpp_3m_sde_gpu', 'exponential', positive_cond_base, negative_cond_base, samples,
                                  start_step=0, last_step=14, force_full_denoise=False)[0]
        # step 2: hand the latent to the refiner model from step 15 onward and fully denoise
        sample2 = common_ksampler(refiner_model, seed, 32, 3.5, 'dpmpp_3m_sde_gpu', 'exponential', positive_cond_refiner, negative_cond_refiner, sample1,
                                  disable_noise=True, start_step=15, force_full_denoise=True)[0]
        # step 3 upscale image using a simple AI image upscaler
        pixels = vaedecoder.decode(vae, sample2)[0]
        org_width, org_height = pixels.shape[2], pixels.shape[1]
        img = iuwm.upscale(upscale_model, image=pixels)[0]
        # target dimensions: original size times upscale_by, snapped down to a multiple of 8
        upscaled_width, upscaled_height = int(org_width * upscale_by // 8 * 8), int(org_height * upscale_by // 8 * 8)
        img = image_scaler.upscale(img, 'nearest-exact', upscaled_width, upscaled_height, 'center')[0]
        return img, upscaled_width, upscaled_height

    def run(self, seed, base_model, refiner_model, vae, samples, positive_cond_base, negative_cond_base,
            positive_cond_refiner, negative_cond_refiner, model_name, upscale_by=1.0, tiler_denoise=0.25,
            upscale_method='normal', tiler_model='base'):
        # phase 1: run base, refiner, then upscaler model
        img, upscaled_width, upscaled_height = self.phase_one(base_model, refiner_model, samples, positive_cond_base, negative_cond_base,
                                                              positive_cond_refiner, negative_cond_refiner, upscale_by, model_name, seed, vae)
        # phase 2: run tiler
        img = tensor2pil(img)
        if tiler_model == 'base':
            tiled_image = run_tiler(img, base_model, vae, seed, positive_cond_base, negative_cond_base, tiler_denoise)
        else:
            tiled_image = run_tiler(img, refiner_model, vae, seed, positive_cond_refiner, negative_cond_refiner, tiler_denoise)
        # convert the PIL image from phase one back to a tensor so both outputs match the IMAGE return type
        return (tiled_image, pil2tensor(img))
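
A minimal, hypothetical sketch of driving the node directly from Python rather than from a ComfyUI graph. It assumes a working ComfyUI environment with the models installed; the checkpoint filenames, prompts, upscale model filename, and seed below are placeholders, and the standard CheckpointLoaderSimple, CLIPTextEncode, and EmptyLatentImage nodes are used only to build the required inputs.

from nodes import CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage

# load base and refiner checkpoints (filenames are placeholders)
base_model, base_clip, vae = CheckpointLoaderSimple().load_checkpoint("sd_xl_base_1.0.safetensors")
refiner_model, refiner_clip, _ = CheckpointLoaderSimple().load_checkpoint("sd_xl_refiner_1.0.safetensors")

# encode the same prompts for both models
prompt, negative = "a lighthouse at dusk, detailed", "blurry, low quality"
positive_cond_base = CLIPTextEncode().encode(base_clip, prompt)[0]
negative_cond_base = CLIPTextEncode().encode(base_clip, negative)[0]
positive_cond_refiner = CLIPTextEncode().encode(refiner_clip, prompt)[0]
negative_cond_refiner = CLIPTextEncode().encode(refiner_clip, negative)[0]

# start from an empty 1024x1024 latent
samples = EmptyLatentImage().generate(1024, 1024, 1)[0]

tiled_image, upscaled_image = MikeySamplerTiled().run(
    seed=12345, base_model=base_model, refiner_model=refiner_model, vae=vae, samples=samples,
    positive_cond_base=positive_cond_base, negative_cond_base=negative_cond_base,
    positive_cond_refiner=positive_cond_refiner, negative_cond_refiner=negative_cond_refiner,
    model_name="4x-UltraSharp.pth",  # placeholder upscale model filename
    upscale_by=1.5, tiler_denoise=0.25, tiler_model="base")

With the inputs built this way, the call leaves upscale_method at its default and returns the two IMAGE outputs described above, which can be passed on to preview or save nodes.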