EasyCascadeLoader

Documentation

  • Class name: easy cascadeLoader
  • Category: EasyUse/Loaders
  • Output node: False

The easy cascadeLoader node streamlines loading and wiring the stages of a Stable Cascade pipeline. It loads the stage C and stage B models (either full checkpoints or bare UNets), an optional CLIP model and stage A VAE, applies any LoRAs, encodes the prompts, and prepares an empty latent, hiding the bookkeeping that multi-stage cascading would otherwise require within the ComfyUI framework.

Input types

Required

  • stage_c
    • Selects the stage C model from the combined 'unet' and 'checkpoints' model directories. Stage C may be a full checkpoint (which also supplies a CLIP and VAE) or a bare UNet; this choice determines how the loader resolves the remaining components.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: List[str]
  • stage_b
    • Selects the stage B model, also drawn from the combined 'unet' and 'checkpoints' directories; like stage C, it may be a full checkpoint or a bare UNet.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: List[str]
  • stage_a
    • Selects the VAE serving as stage A: 'Baked VAE' reuses the VAE bundled with the stage checkpoints, while any file from the 'vae' directory can be loaded instead.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: List[str]
  • clip_name
    • Selects the CLIP model used for prompt encoding, or 'None' to keep the CLIP that was loaded from a stage checkpoint.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: List[str]
  • lora_name
    • Selects a LoRA to apply to the stage C model and the CLIP, or 'None' to skip LoRA loading.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: List[str]
  • lora_model_strength
    • Strength multiplier for the LoRA's effect on the diffusion model weights; 1.0 applies the LoRA at full strength.
    • Comfy dtype: FLOAT
    • Python dtype: float
  • lora_clip_strength
    • Strength multiplier for the LoRA's effect on the CLIP text encoder; 1.0 applies the LoRA at full strength.
    • Comfy dtype: FLOAT
    • Python dtype: float
  • resolution
    • Selects a preset 'width x height' resolution string used when sizing the empty latent.
    • Comfy dtype: COMBO[STRING]
    • Python dtype: str
  • empty_latent_width
    • Sets the pixel width used when creating the empty latent image.
    • Comfy dtype: INT
    • Python dtype: int
  • empty_latent_height
    • Sets the pixel height used when creating the empty latent image.
    • Comfy dtype: INT
    • Python dtype: int
  • compression
    • Sets the spatial compression factor used when creating the stage C latent (default 42), trading fine detail against speed and memory; see the sketch after this list.
    • Comfy dtype: INT
    • Python dtype: int
  • positive
    • Captures positive prompts or keywords to guide the model's generation process towards desired outcomes.
    • Comfy dtype: STRING
    • Python dtype: str
  • negative
    • Includes negative prompts or keywords to steer the model away from undesired elements in the generation process.
    • Comfy dtype: STRING
    • Python dtype: str
  • batch_size
    • Specifies the number of instances to be processed in a single batch, affecting the model's efficiency and throughput.
    • Comfy dtype: INT
    • Python dtype: int
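
To make compression concrete, the sketch below shows how the stage C latent is typically sized from the pixel dimensions, assuming sampler.emptyLatent mirrors ComfyUI's core StableCascade_EmptyLatentImage node (the helper name and channel counts here are that assumption, not taken from this node's source):

import torch

def cascade_empty_latent(width, height, compression, batch_size=1):
    # Stage C latent: 16 channels, spatially reduced by the compression factor
    c_latent = torch.zeros([batch_size, 16, height // compression, width // compression])
    # Stage B latent: 4 channels, fixed 4x spatial reduction
    b_latent = torch.zeros([batch_size, 4, height // 4, width // 4])
    return {"samples": c_latent}, {"samples": b_latent}

# At the defaults (1024 x 1024, compression 42) the stage C latent is 24 x 24.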

Optional

  • optional_lora_stack
    • Allows an optional stack of LoRA models to be applied on top of lora_name, for layered fine-tuning; the expected structure is sketched below.
    • Comfy dtype: LORA_STACK
    • Python dtype: List[str]
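
As the source below shows (each entry is unpacked as lora[0], lora[1], lora[2]), a LORA_STACK is effectively a list of (lora_name, model_strength, clip_strength) tuples. A minimal hand-built example, with hypothetical filenames:

# Hypothetical filenames; use real files from the 'loras' directory.
optional_lora_stack = [
    ("detail_tweaker.safetensors", 0.8, 0.8),  # (lora_name, model_strength, clip_strength)
    ("style_cascade.safetensors", 0.5, 0.5),
]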

Output types

  • pipe
    • Comfy dtype: PIPE_LINE
    • The pipeline dictionary bundling the models, conditioning, empty latent, VAE, CLIP, and loader settings for downstream nodes; its shape is sketched below.
    • Python dtype: Pipeline
  • model_c
    • Comfy dtype: MODEL
    • The loaded stage C model, which runs the first sampling pass of the cascade and is ready for further patching or sampling.
    • Python dtype: Model
  • latent_c
    • Comfy dtype: LATENT
    • The empty latent initialized for stage C, sized from the resolution, compression, and batch inputs.
    • Python dtype: LatentRepresentation
  • vae
    • Comfy dtype: VAE
    • The VAE output: a (stage C, stage B) pair as loaded from the stage checkpoints or the stage_a selection.
    • Python dtype: VAEModel
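
Note that, as the source shows, pipe["model"] and pipe["vae"] each hold a (stage C, stage B) tuple. A downstream consumer might unpack the pipe like this (a sketch of the dictionary's shape, not a guaranteed API):

def unpack_cascade_pipe(pipe):
    model_c, model_b = pipe["model"]  # stage C and stage B ModelPatcher objects
    vae_c, vae_b = pipe["vae"]        # per-stage VAEs (either may be None)
    positive = pipe["positive"]       # [[cond, {"pooled_output": pooled}]]
    negative = pipe["negative"]
    latent_c = pipe["samples"]        # empty latent sized for stage C
    return model_c, model_b, vae_c, vae_b, positive, negative, latent_c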

Usage tips

  • Infra type: CPU
  • Common nodes: unknown
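
For quick experiments outside the graph UI, the node can also be driven directly from Python. A minimal sketch, assuming ComfyUI and ComfyUI-Easy-Use are importable and the (hypothetical) filenames exist; note that if the stage files are bare UNets, a real clip_name must be supplied so the prompts can be encoded:

loader = cascadeLoader()
result = loader.adv_pipeloader(
    stage_c="stable_cascade_stage_c.safetensors",  # hypothetical filename
    stage_b="stable_cascade_stage_b.safetensors",  # hypothetical filename
    stage_a="Baked VAE",
    clip_name="None",              # set a CLIP file here when using bare UNets
    lora_name="None",
    lora_model_strength=1.0,
    lora_clip_strength=1.0,
    resolution="1024 x 1024",
    empty_latent_width=1024,
    empty_latent_height=1024,
    compression=42,
    positive="a cinematic photo of a red fox in snow",
    negative="blurry, low quality",
    batch_size=1,
)
pipe, model_c, latent_c, vae = result["result"]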

Source code

# Third-party / ComfyUI imports used below. The remaining helpers (easyCache,
# sampler, easySampler, resolution_strings, MAX_RESOLUTION, log_node_warn,
# find_wildcards_seed, has_chinese, zh_to_en, process_with_loras,
# is_linked_styles_selector) are provided elsewhere in the ComfyUI-Easy-Use package.
import folder_paths
from PIL import Image
from comfy.model_patcher import ModelPatcher
from comfy.sd import CLIP, VAE

class cascadeLoader:
    def __init__(self):
        pass

    @classmethod
    def INPUT_TYPES(s):

        return {"required": {
            "stage_c": (folder_paths.get_filename_list("unet") + folder_paths.get_filename_list("checkpoints"),),
            "stage_b": (folder_paths.get_filename_list("unet") + folder_paths.get_filename_list("checkpoints"),),
            "stage_a": (["Baked VAE"]+folder_paths.get_filename_list("vae"),),
            "clip_name": (["None"] + folder_paths.get_filename_list("clip"),),

            "lora_name": (["None"] + folder_paths.get_filename_list("loras"),),
            "lora_model_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
            "lora_clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),

            "resolution": (resolution_strings, {"default": "1024 x 1024"}),
            "empty_latent_width": ("INT", {"default": 1024, "min": 16, "max": MAX_RESOLUTION, "step": 8}),
            "empty_latent_height": ("INT", {"default": 1024, "min": 16, "max": MAX_RESOLUTION, "step": 8}),
            "compression": ("INT", {"default": 42, "min": 32, "max": 64, "step": 1}),

            "positive": ("STRING", {"default":"", "placeholder": "Positive", "multiline": True}),
            "negative": ("STRING", {"default":"", "placeholder": "Negative", "multiline": True}),

            "batch_size": ("INT", {"default": 1, "min": 1, "max": 64}),
        },
            "optional": {"optional_lora_stack": ("LORA_STACK",), },
            "hidden": {"prompt": "PROMPT", "my_unique_id": "UNIQUE_ID"}
        }

    RETURN_TYPES = ("PIPE_LINE", "MODEL", "LATENT", "VAE")
    RETURN_NAMES = ("pipe", "model_c", "latent_c", "vae")

    FUNCTION = "adv_pipeloader"
    CATEGORY = "EasyUse/Loaders"

    def is_ckpt(self, name):
        # True when `name` resolves to a file in the 'checkpoints' directory
        return folder_paths.get_full_path("checkpoints", name) is not None

    def adv_pipeloader(self, stage_c, stage_b, stage_a, clip_name, lora_name, lora_model_strength, lora_clip_strength,
                       resolution, empty_latent_width, empty_latent_height, compression,
                       positive, negative, batch_size, optional_lora_stack=None, prompt=None,
                       my_unique_id=None):

        vae: VAE | None = None
        model_c: ModelPatcher | None = None
        model_b: ModelPatcher | None = None
        clip: CLIP | None = None
        can_load_lora = True
        pipe_lora_stack = []

        # Clean models from loaded_objects
        easyCache.update_loaded_objects(prompt)

        # Create Empty Latent
        samples = sampler.emptyLatent(resolution, empty_latent_width, empty_latent_height, batch_size, compression)

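        # Each stage may be a full checkpoint (bundling CLIP/VAE) or a bare UNet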
        if self.is_ckpt(stage_c):
            model_c, clip, vae_c, clip_vision = easyCache.load_checkpoint(stage_c)
        else:
            model_c = easyCache.load_unet(stage_c)
            vae_c = None
        if self.is_ckpt(stage_b):
            model_b, clip, vae_b, clip_vision = easyCache.load_checkpoint(stage_b)
        else:
            model_b = easyCache.load_unet(stage_b)
            vae_b = None

        if optional_lora_stack is not None and can_load_lora:
            for lora in optional_lora_stack:
                lora = {"lora_name": lora[0], "model": model_c, "clip": clip, "model_strength": lora[1], "clip_strength": lora[2]}
                model_c, clip = easyCache.load_lora(lora)
                lora['model'] = model_c
                lora['clip'] = clip
                pipe_lora_stack.append(lora)

        if lora_name != "None" and can_load_lora:
            lora = {"lora_name": lora_name, "model": model_c, "clip": clip, "model_strength": lora_model_strength,
                    "clip_strength": lora_clip_strength}
            model_c, clip = easyCache.load_lora(lora)
            pipe_lora_stack.append(lora)

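        # Bundle both diffusion stages; downstream samplers pick the one they need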
        model = (model_c, model_b)
        # Load clip
        if clip_name != 'None':
            clip = easyCache.load_clip(clip_name, "stable_cascade")
        # Load vae
        if stage_a not in ["Baked VAE", "Baked-VAE"]:
            vae_b = easyCache.load_vae(stage_a)

        vae = (vae_c, vae_b)
        # Check whether a styles selector node is linked to the positive/negative inputs
        is_positive_linked_styles_selector = is_linked_styles_selector(prompt, my_unique_id, 'positive')
        is_negative_linked_styles_selector = is_linked_styles_selector(prompt, my_unique_id, 'negative')

        log_node_warn("正在处理提示词...")
        positive_seed = find_wildcards_seed(my_unique_id, positive, prompt)
        # Translate cn to en
        if has_chinese(positive):
            positive = zh_to_en([positive])[0]
        model_c, clip, positive, positive_decode, show_positive_prompt, pipe_lora_stack = process_with_loras(positive,
                                                                                                           model_c, clip,
                                                                                                           "positive",
                                                                                                           positive_seed,
                                                                                                           can_load_lora,
                                                                                                           pipe_lora_stack,
                                                                                                           easyCache)
        positive_wildcard_prompt = positive_decode if show_positive_prompt or is_positive_linked_styles_selector else ""
        negative_seed = find_wildcards_seed(my_unique_id, negative, prompt)
        # Translate cn to en
        if has_chinese(negative):
            negative = zh_to_en([negative])[0]
        model_c, clip, negative, negative_decode, show_negative_prompt, pipe_lora_stack = process_with_loras(negative,
                                                                                                           model_c, clip,
                                                                                                           "negative",
                                                                                                           negative_seed,
                                                                                                           can_load_lora,
                                                                                                           pipe_lora_stack,
                                                                                                           easyCache)
        negative_wildcard_prompt = negative_decode if show_negative_prompt or is_negative_linked_styles_selector else ""

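        # Encode prompts with the (possibly LoRA-patched) CLIP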
        tokens = clip.tokenize(positive)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        positive_embeddings_final = [[cond, {"pooled_output": pooled}]]

        tokens = clip.tokenize(negative)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        negative_embeddings_final = [[cond, {"pooled_output": pooled}]]

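        # 1x1 black placeholder so the pipe always carries an image tensor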
        image = easySampler.pil2tensor(Image.new('RGB', (1, 1), (0, 0, 0)))

        log_node_warn("处理结束...")
        pipe = {
            "model": model,
            "positive": positive_embeddings_final,
            "negative": negative_embeddings_final,
            "vae": vae,
            "clip": clip,

            "samples": samples,
            "images": image,
            "seed": 0,

            "loader_settings": {
                "vae_name": stage_a,
                "lora_name": lora_name,
                "lora_model_strength": lora_model_strength,
                "lora_clip_strength": lora_clip_strength,
                "lora_stack": pipe_lora_stack,

                "positive": positive,
                "positive_token_normalization": 'none',
                "positive_weight_interpretation": 'comfy',
                "negative": negative,
                "negative_token_normalization": 'none',
                "negative_weight_interpretation": 'comfy',
                "resolution": resolution,
                "empty_latent_width": empty_latent_width,
                "empty_latent_height": empty_latent_height,
                "batch_size": batch_size,
                "compression": compression
            }
        }

        return {"ui": {"positive": positive_wildcard_prompt, "negative": negative_wildcard_prompt},
                "result": (pipe, model_c, model_b, vae)}