Mikey Sampler
Documentation
- Class name: Mikey Sampler
- Category: Mikey/Sampling
- Output node: False
The Mikey Sampler node runs a complete base-plus-refiner sampling pipeline on an input latent: a base-model pass, a refiner pass, an optional model-based upscale, and an optional high-resolution (hires) pass whose denoise strength adapts to the measured image complexity. It wraps these steps behind a single node so common sampling workflows need less manual wiring.
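The node's public entry point is the run method shown in the source code further below. A minimal sketch of driving it directly from Python inside a ComfyUI environment (outside the graph editor) might look like this; the checkpoint, prompt, and upscaler file names are placeholders, and the import path for MikeySampler is an assumption about how the custom-node package is installed.
# Usage sketch (not from the project); placeholder names throughout.
from nodes import CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage
from mikey_nodes import MikeySampler  # assumed import path
base_model, base_clip, vae = CheckpointLoaderSimple().load_checkpoint("sdxl_base.safetensors")
refiner_model, refiner_clip, _ = CheckpointLoaderSimple().load_checkpoint("sdxl_refiner.safetensors")
prompt, negative = "a lighthouse at dusk, photo", "blurry, low quality"
positive_cond_base = CLIPTextEncode().encode(base_clip, prompt)[0]
negative_cond_base = CLIPTextEncode().encode(base_clip, negative)[0]
positive_cond_refiner = CLIPTextEncode().encode(refiner_clip, prompt)[0]
negative_cond_refiner = CLIPTextEncode().encode(refiner_clip, negative)[0]
samples = EmptyLatentImage().generate(1024, 1024, 1)[0]
(latent,) = MikeySampler().run(
    seed=12345, base_model=base_model, refiner_model=refiner_model, vae=vae,
    samples=samples,
    positive_cond_base=positive_cond_base, negative_cond_base=negative_cond_base,
    positive_cond_refiner=positive_cond_refiner, negative_cond_refiner=negative_cond_refiner,
    model_name="4x-UltraSharp.pth",  # any file under models/upscale_models
    upscale_by=1.5, hires_strength=1.0,
)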
Input types
Required
base_model
- The 'base_model' parameter specifies the primary model used for sampling, serving as the foundation for generating or refining samples.
- Comfy dtype: MODEL
- Python dtype: torch.nn.Module
refiner_model
- The 'refiner_model' parameter specifies an additional model used to refine the samples generated by the base model, enhancing their quality or characteristics.
- Comfy dtype: MODEL
- Python dtype: torch.nn.Module
samples
- The 'samples' parameter is the input latent that the sampling pipeline starts from and progressively refines.
- Comfy dtype: LATENT
- Python dtype: dict (a 'samples' key holding a torch.Tensor)
vae
- The 'vae' parameter specifies the variational autoencoder used to decode the refined latent to pixels before upscaling and to re-encode the upscaled image for the hires pass.
- Comfy dtype: VAE
- Python dtype: torch.nn.Module
positive_cond_base
- The 'positive_cond_base' parameter defines the positive conditioning applied to the base model, guiding the sampling towards desired characteristics.
- Comfy dtype: CONDITIONING
- Python dtype: list (pairs of conditioning tensor and options dict)
negative_cond_base
- The 'negative_cond_base' parameter defines the negative conditioning applied to the base model, steering the sampling away from undesired characteristics.
- Comfy dtype: CONDITIONING
- Python dtype: list (pairs of conditioning tensor and options dict)
positive_cond_refiner
- The 'positive_cond_refiner' parameter defines the positive conditioning applied to the refiner model, guiding the refinement pass towards desired outcomes.
- Comfy dtype: CONDITIONING
- Python dtype: list (pairs of conditioning tensor and options dict)
negative_cond_refiner
- The 'negative_cond_refiner' parameter defines the negative conditioning applied to the refiner model, preventing the refinement from incorporating undesired features.
- Comfy dtype: CONDITIONING
- Python dtype: list (pairs of conditioning tensor and options dict)
model_name
- The 'model_name' parameter selects the upscale model (a file from the upscale_models folder) used for the model-based upscaling step.
- Comfy dtype: COMBO[STRING]
- Python dtype: str
seed
- The 'seed' parameter ensures reproducibility of the sampling results by initializing the random number generator used in the sampling process.
- Comfy dtype: INT
- Python dtype: int
upscale_by
- The 'upscale_by' parameter sets the factor by which the decoded image is upscaled before the hires pass; a value of 0 skips upscaling entirely and returns the refiner output. The target dimensions are snapped down to multiples of 8, as shown in the sketch after this list.
- Comfy dtype: FLOAT
- Python dtype: float
hires_strength
- The 'hires_strength' parameter controls how strongly the high-resolution pass re-denoises the upscaled image; 0 skips the hires pass, and higher values apply more of its sampling steps.
- Comfy dtype: FLOAT
- Python dtype: float
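The sketch below (illustrative only; the dimensions are placeholder values) shows how 'upscale_by' maps to the hires target resolution in the node's source: each dimension is multiplied by the factor and then snapped down to the nearest multiple of 8.
# Target-size arithmetic mirrored from MikeySampler.run (see Source code below)
org_width, org_height = 1152, 896   # example decoded image size (placeholder values)
upscale_by = 1.5
upscaled_width = int(org_width * upscale_by // 8 * 8)    # 1728
upscaled_height = int(org_height * upscale_by // 8 * 8)  # 1344
print(upscaled_width, upscaled_height)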
Output types
latent
- Comfy dtype: LATENT
- The output is the latent representation of the sampled (and optionally upscaled and hires-refined) image, which can be decoded or processed further downstream.
- Python dtype: dict (a 'samples' key holding a torch.Tensor)
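As a small, assumed example of downstream wiring (not taken from these docs): a ComfyUI LATENT value is a dict with a 'samples' tensor, and the node's output is typically passed to a VAE Decode node to obtain the final image.
# 'latent' is the value returned by Mikey Sampler; 'vae' is the same VAE given to the node
from nodes import VAEDecode
images = VAEDecode().decode(vae, latent)[0]
print(latent["samples"].shape)   # e.g. torch.Size([1, 4, height // 8, width // 8])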
Usage tips
- Infra type: GPU
- Common nodes: unknown
Source code
class MikeySampler:
    # Relies on ComfyUI helpers (ImageScale, VAEEncode, VAEDecode, UpscaleModelLoader,
    # ImageUpscaleWithModel, common_ksampler, folder_paths) and calculate_image_complexity
    # from mikey_nodes; these are assumed to be imported in the surrounding file.
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"base_model": ("MODEL",), "refiner_model": ("MODEL",), "samples": ("LATENT",), "vae": ("VAE",),
                             "positive_cond_base": ("CONDITIONING",), "negative_cond_base": ("CONDITIONING",),
                             "positive_cond_refiner": ("CONDITIONING",), "negative_cond_refiner": ("CONDITIONING",),
                             "model_name": (folder_paths.get_filename_list("upscale_models"), ),
                             "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
                             "upscale_by": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.1}),
                             "hires_strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 2.0, "step": 0.1}),}}

    RETURN_TYPES = ('LATENT',)
    FUNCTION = 'run'
    CATEGORY = 'Mikey/Sampling'

    def adjust_start_step(self, image_complexity, hires_strength=1.0):
        image_complexity /= 24
        if image_complexity > 1:
            image_complexity = 1
        image_complexity = min([0.55, image_complexity]) * hires_strength
        return min([16, 16 - int(round(image_complexity * 16,0))])

    def run(self, seed, base_model, refiner_model, vae, samples, positive_cond_base, negative_cond_base,
            positive_cond_refiner, negative_cond_refiner, model_name, upscale_by=1.0, hires_strength=1.0,
            upscale_method='normal'):
        image_scaler = ImageScale()
        vaeencoder = VAEEncode()
        vaedecoder = VAEDecode()
        uml = UpscaleModelLoader()
        upscale_model = uml.load_model(model_name)[0]
        iuwm = ImageUpscaleWithModel()
        # common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent, denoise=1.0,
        #                 disable_noise=False, start_step=None, last_step=None, force_full_denoise=False)
        # step 1 run base model
        sample1 = common_ksampler(base_model, seed, 25, 6.5, 'dpmpp_2s_ancestral', 'simple', positive_cond_base, negative_cond_base, samples,
                                  start_step=0, last_step=18, force_full_denoise=False)[0]
        # step 2 run refiner model
        sample2 = common_ksampler(refiner_model, seed, 30, 3.5, 'dpmpp_2m', 'simple', positive_cond_refiner, negative_cond_refiner, sample1,
                                  disable_noise=True, start_step=21, force_full_denoise=True)
        # step 3 upscale
        if upscale_by == 0:
            return sample2
        else:
            sample2 = sample2[0]
        pixels = vaedecoder.decode(vae, sample2)[0]
        org_width, org_height = pixels.shape[2], pixels.shape[1]
        img = iuwm.upscale(upscale_model, image=pixels)[0]
        upscaled_width, upscaled_height = int(org_width * upscale_by // 8 * 8), int(org_height * upscale_by // 8 * 8)
        img = image_scaler.upscale(img, 'nearest-exact', upscaled_width, upscaled_height, 'center')[0]
        if hires_strength == 0:
            return (vaeencoder.encode(vae, img)[0],)
        # Adjust start_step based on complexity
        image_complexity = calculate_image_complexity(img)
        #print('Image Complexity:', image_complexity)
        start_step = self.adjust_start_step(image_complexity, hires_strength)
        # encode image
        latent = vaeencoder.encode(vae, img)[0]
        # step 3 run base model
        out = common_ksampler(base_model, seed, 16, 9.5, 'dpmpp_2m_sde', 'karras', positive_cond_base, negative_cond_base, latent,
                              start_step=start_step, force_full_denoise=True)
        return out
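For intuition, the following worked sketch (not additional project code) replays the adjust_start_step arithmetic above: the hires pass runs common_ksampler for 16 steps, and the returned start_step decides how many of those steps actually apply, so higher image complexity or hires_strength means a stronger hires denoise.
# Standalone replay of the adjust_start_step arithmetic for a few example inputs
def adjust_start_step(image_complexity, hires_strength=1.0):
    image_complexity /= 24
    if image_complexity > 1:
        image_complexity = 1
    image_complexity = min([0.55, image_complexity]) * hires_strength
    return min([16, 16 - int(round(image_complexity * 16, 0))])

print(adjust_start_step(6.0, 1.0))    # 6/24 = 0.25        -> start_step 12 (4 of 16 hires steps applied)
print(adjust_start_step(18.0, 1.0))   # capped at 0.55     -> start_step 7  (9 of 16 hires steps applied)
print(adjust_start_step(18.0, 0.5))   # 0.55 * 0.5 = 0.275 -> start_step 12 (gentler hires pass)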