ComfyUI batch upscale images — tips collected from Reddit.
Welcome to the unofficial ComfyUI subreddit.

Each upscale model has a specific scaling factor (2x, 3x, 4x, ...) that it is optimized to work with. The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. So I've used the simple tiles custom nodes to break the image up and process each tile one at a time; there is a batch/list switch you can toggle to do it all as a batch.

Can't believe this is not a thing yet. This works, but because of memory issues I want to save one image at a time! When I start with a batch of 10, ComfyUI does its thing and then saves all 10 upscaled images. I am trying to figure out the best way to use the image picker as a waypoint where I can send images off into different final workflows — to hires-fix certain chosen images later.

You should now be able to load the workflow, which is here. In this tutorial, we will use ComfyUI to upscale Stable Diffusion images to any resolution we want! We will be using a custom node pack called "Impact", which comes with many useful nodes.

I am new to ComfyUI and I am already in love with it. My Colab is set up to do batch img2img. I'm creating a new workflow for image upscaling: hires fix with an add-detail LoRA. It loads all images from a folder, upscales them to your preferred settings, and saves them to a subfolder. Then use SD Upscale to split the image into tiles and denoise each one using your parameters; that way you will get a grid with your images. I'm having great results with batch counts alone, but batch size would speed things up. I think chaiNNer and ComfyUI would be worth learning to make such custom workflows. I want to upscale my image with a model, and then select the final size of it.
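The two ideas above — each model has a native factor, and you pick the final size afterwards — combine naturally: run the model at its native factor, then downscale to the size you actually want. This is a minimal sketch of that planning step; the helper name and the set of available factors are my own assumptions, not a ComfyUI API.

```python
def plan_upscale(width, height, target_scale, model_factors=(2, 4, 8)):
    """Pick the smallest model factor >= target_scale, then compute
    the final size to downscale to after the model pass."""
    factor = next((f for f in sorted(model_factors) if f >= target_scale),
                  max(model_factors))
    upscaled = (width * factor, height * factor)
    final = (round(width * target_scale), round(height * target_scale))
    return factor, upscaled, final

# A 3x target has no matching model, so run the 4x model and downscale.
factor, upscaled, final = plan_upscale(512, 512, 3)
```

Here `plan_upscale(512, 512, 3)` chooses the 4x model (2048x2048), then a plain image resize brings it down to the requested 1536x1536.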
What I want: generate with model A at 512x512 -> upscale -> regenerate with model A at a higher resolution.

- Now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler.
- Queue the prompt again — this will now run the held image through the second sampler.

I would like to load a batch of images (which I can already do) and, from this batch, randomly select an image on each request. They should allow you to load a folder and put all the images through the same process. Comfy will process the images one by one. If you have such a node but your images aren't being saved, make sure the node is connected to the rest of the workflow and not disabled. The final node is where ComfyUI takes those images and turns them into a video.

The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. In Automatic1111 it is quite easy, and the picture at the end is also clean: color gradients are smooth, and details on the body like veins are not so strongly emphasized. A batch upscale that uses the prompt in the EXIF data is what I'm looking for too (latent or not) — like a sort of hires fix, but as a separate function. In 1111, using image-to-image, you can batch-load all frames of a video and batch-load ControlNet frames. Thanks again for these nodes.

Generate the initial image at 512x768, then upscale x1.5.
These comparisons are done using ComfyUI with default node settings and fixed seeds.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

We introduced a Freedom parameter that will drive how much new detail will be introduced in the upscaled image. With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI-upscale > downscale-image nodes.

I would love to know if there is any way to process a folder of images with a list of pre-created prompts, one per image. I am currently using the webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with the webui, so I would like to know. There are apps and nodes which can read in generation data, but they fail for complex ComfyUI node setups.
Add the standard "Load Image" node, then right-click it: "Convert Widget to Input" -> "Convert Image to Input" (second pic). Setting the focal point produces the rescaled images at 800x200 and 512x768 respectively. Using the Load Image Batch node from the WAS Suite repository, I can sequentially load all the images from a folder, but for upscaling I also need the prompt. Then restart ComfyUI and get ComfyUI Manager to start. I think that image C is the best, so I want to upscale image C only. If that fails, it will attempt to load video file "paths" so they can be fed to VHS nodes and/or batch nodes.

Ensure that you use this node and not Load Image Batch From Dir. It may not seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.
Nearest-exact is a crude image upscaling algorithm that, combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels in place. The best method, as said below, is to upscale the image with a model (then downscale to the desired size if necessary, because most upscalers do x4 and that is often too big to process), then send it back to VAE encode and sample it again.

I'm trying to find a way of upscaling the SD video up from its 1024x576. Think about mass-producing stuff, like game assets. Probably the best use case is a high-volume batch upscale. I liked the ability in MJ to choose an image from the batch and upscale just that image. Right now, upscaling through Automatic1111's Extras > Batch from Directory is extremely slow, and my CPU and GPU don't even come close to using 5% of the available resources.

Pro-tip: insert a WD-14 or a BLIP Interrogation node after it to automate the prompting for each image. A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, Upscaler, or Face Swapper functions.
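The point about nearest-exact being a "stretch" is easy to see in code: every output pixel is just a copy of its closest source pixel, so no new detail appears and edges stay blocky. A toy pure-Python sketch (a real implementation would operate on image tensors):

```python
def nearest_exact_upscale(pixels, scale):
    """Nearest-neighbor upscale of a 2D grid of pixel values.
    Every output pixel copies the closest source pixel, so edges
    stay jagged -- no new detail is invented."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[int(y / scale)][int(x / scale)] for x in range(w * scale)]
        for y in range(h * scale)
    ]

tiny = [[0, 255],
        [255, 0]]
big = nearest_exact_upscale(tiny, 2)  # 4x4: each pixel becomes a 2x2 block
```

That duplicated-block output is exactly what a low-denoise sampling pass afterwards fails to clean up.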
You can look at the EXIF data to get the SEED used. A new FreeU v2 node to test the updated implementation of the Free Lunch technique. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. There are also "face detailer" workflows for faces specifically. Is there a way to get it to randomize per image?

I don't want any changes or additions to the image, just a straightforward upscale and quality enhancement.

Please share your tips, tricks, and workflows for using this software to create your AI art. Then I am choosing which images to upscale using the Preview Chooser node. I am trying to create a workflow which currently creates a batch of images, each with a different prompt. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone. But more useful is that you can now right-click an image in the `Preview for Image Chooser` and select `Progress this image` — which is the same as selecting its number and pressing go.

ThinkDiffusion Merge_2_Images.json. CUI can do a batch of 4 and stay within the 12 GB. Have a lowish-denoise upscale pass using AD again after FaceDetailer; it smooths it out :) Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. Doing 4 images in one batch is faster than doing four queues of 1.
Also, bypass the AnimateDiff Loader model to the original model loader in the To Basic Pipe node, or else it will give you noise on the face (the AnimateDiff loader doesn't work on a single image — you need at least 4 — and FaceDetailer can handle only 1). The only drawback is that the workflow isn't attached to this image; you'll have to download it from the G-drive link. I'm trying to use ComfyUI to upscale (using SDXL 1.0 Base + Refiner). To update, use the batch file in the update folder in the ComfyUI directory.

New to ComfyUI, so not an expert. They are images of workflows: if you download those workflow images and drag them into ComfyUI, it will display the workflow. Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs. There is making a batch using the Empty Latent Image node's batch_size widget, and there is making a batch in the control panel. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance complexity with ease of use for end users.

You could try dropping your denoise at the start of an iterative upscale to, say, 0.4, but use a ControlNet relevant to your image so you don't lose too much of the original, combining that with the iterative upscaler and a concatenated secondary positive prompt.

E.g., batch index 2, length 2 would send images number 3 and 4 to the image preview in this example. A latent of batch size 2 at 512x768 is 229 kB. A VAE-decoded PNG image at 512x768 is 500+ kB, AND it takes like half a minute to convert!
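The size gap between latents and decoded PNGs quoted above follows from the architecture: the VAE downsamples each spatial dimension by 8 and keeps 4 channels, so a latent holds far fewer values than the decoded image. A rough back-of-envelope sketch (file sizes on disk also include container overhead, so they come out somewhat larger than this raw-tensor estimate):

```python
def latent_bytes(width, height, batch_size=1, channels=4, bytes_per_value=4):
    """Approximate in-memory size of a Stable Diffusion latent:
    a channels x (H/8) x (W/8) float tensor per image in the batch."""
    return batch_size * channels * (height // 8) * (width // 8) * bytes_per_value

# 512x768 -> 4 * 96 * 64 fp32 values ~ 96 KiB of raw data per image,
# versus 512*768*3 = ~1.1 MiB of raw RGB pixels before PNG compression.
single = latent_bytes(512, 768)
pair = latent_bytes(512, 768, batch_size=2)
```

The quoted 229 kB for a batch of two is consistent with two ~96 KiB tensors plus file overhead.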
Hello. It’s always nice to have new tips being shared, and thanks for that, but from what I see, I think you still need to work on your workflow. You can upscale your image — generated by the SDXL Base+Refiner models, the Base/Fine-Tuned SDXL model, or the ReVision model — with one or two upscalers in sequence. Then re-enable once I make my selections. One recommendation I saw a long time ago was to use a tile width that matched the width of the upscaled output. 2x upscale using Ultimate SD Upscale and the Tile ControlNet. It works beautifully to select images from a batch, but only if I have everything enabled.

I used this, but it didn't work on batch images: ComfyUI-Impact-Pack — Tutorial #2: FaceDetailer. Ugh. What it's great for: Hi everyone, I’m new to ComfyUI, transitioning from A1111, and exploring its automation capabilities for image generation. pattern is a glob that allows you to do things like **/* to get all files in the directory and subdirectories, or things like *.jpg to select only JPEG images in the specified directory. The moon is the important feature at (600, 100) in the source image. If that doesn't give you the seed used to recreate the image, you need to find the original seed. Based on what you have in the image, it will attempt to look inside D:\folder for any files that are loadable as images (png, tga, gif — anything PIL can load). No attempts to fix JPG artifacts, etc.

Hey guys, I've generated around 10,000 images I want to upscale by 2. Here's an example with some math to double the original image's resolution. XY plots are great for visualizing small quality improvements in LoRAs or overcooking signs. I am switching from Automatic to Comfy and am currently trying to upscale.
So starting with a batch of 1000 frames does not work — memory shortage. Thanks; Dynamic Prompts was the node I was using, but it no longer works properly, so I am calling the wildcards with another node. So instead of one girl in an image, you got 10 tiny girls stitched into one giant upscaled image. The aspect ratio of 16:9 is the same for the empty latent and anywhere else that image sizes are used. You've possibly messed the noodles up on the "Get latent size" node under the Ultimate SD Upscale node — it should use the two INT outputs. Even when it was working, I can't recall it providing more than one prompt at a time for a batch size greater than one.

Looking for ways to easily upscale a large number of images at once, without having to shell out for something like Topaz Gigapixel. Repeat until you have an image you like that you want to upscale. E.g., to save, to upscale, to face-detail. The issue I think people run into is that they think the latent upscale is the same as the latent upscale from Auto1111. This is not the case. Do you want to save the image?
However, when attempting to upscale images from a folder containing approximately 35,000 images, the upscaling just doesn't happen, yet the images do get saved to the destination folder. (You can just copy from the batch image preview if you want.) Drop the image back into ComfyUI to load the one you liked. There are a bunch of useful extensions for ComfyUI that will make your life easier. On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

python app.py --port=7888 --extra_paths f:/ComfyUI/output f:

Adding a bit of the noised latents during denoising helps guide the diffusion toward a similarly composed resulting image, but at a higher scale. Do the same comparison with images that are much more detailed, with characters and patterns. Upscale x1.5 ~ x2 — no need for a model, it can be a cheap latent upscale. Sample again at denoise 0.5; you don't need that many steps. Go up by 4x, then downscale to your desired resolution using an image upscale. ControlNets help align things even more, as does IPAdapter etc. There should be a Save Image node in the default workflow, which will save the generated image to the output directory in the ComfyUI directory.
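For huge folders like the 35,000-image case above, driving ComfyUI over its HTTP API and queueing one image at a time avoids loading everything at once. ComfyUI accepts a POST to `/prompt` with an API-format workflow JSON; the node id and seed field below are assumptions about *your* exported workflow (use "Save (API Format)" to get one), so adjust them to match:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default port; yours may differ

def queue_prompt(workflow, seed, node_id="3", dry_run=False):
    """Queue one run of an API-format workflow with a given seed.
    node_id must point at the KSampler node in your own export."""
    wf = json.loads(json.dumps(workflow))  # deep copy, leave the template intact
    wf[node_id]["inputs"]["seed"] = seed
    payload = json.dumps({"prompt": wf}).encode()
    if dry_run:
        return payload  # for inspection/testing without a running server
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).read()
```

Looping `queue_prompt(my_workflow, seed=s)` over a file list gives you a scriptable batch without a single giant in-memory batch.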
If you go above or below that scaling factor, a standard resizing method will be used instead. I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simply mute or disconnect the Save Image node, etc., and then re-enable it once I make my selections. You can use the ControlNet Tile + LCM to be efficient. The extra options in the control panel, from what I can see, have a batch count (not batch size) option; the only thing that option does, I think, is queue up a number of batches of size 1, one after the other (basically the same as clicking Queue repeatedly). A latent of size 512x768 is 134 kB.

Please keep posted images SFW. Vertical rescale using the picked-up prompt, go through CN to the sampler and produce a new image (or the same as the original if no parts were masked), then upscale the result 4x. What would you change and/or improve? I have attached an image of the workflow, the produced image (with the workflow embedded), and also the original image.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like. It generates an SD1.5 image. Once you build this, you can choose an output from it using static seeds to get specific images, or you can split up larger batches to reduce RAM usage and stop "out of memory" errors when you are working with batches of larger images. It contains txt-to-img, img-to-img, inpainting, outpainting, latent upscale, and image upscale. Check the ComfyUI image examples in the link. There's "latent upscale by", but I don't want to upscale the latent image.
Choose a Save Image node and you'll find the outputs in the folders, or you can right-click and save that way too. Author: bash-j (account age: 4196 days). Extension: Mikey Nodes.

Put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine. So if your image is 512x512 and you then upscale to 2048x2048 and run FaceDetailer, it's going to render the face at the same resolution as the original render, not the upscale, and then just do a basic paste-back.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Here I run into an issue, since I need to fetch each prompt for the selected image. You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use an upscale node with a value below 1 for the "upscale by" factor) if you want to benefit from the higher-resolution processing. That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. If you have it working, I'd be interested to know how you did it, as it has been broken for me since that major ComfyUI change.

So I made an upscale test workflow that uses the exact same latent input and destination size. Use this workflow in parallel to Photoshop! Just use copy-paste to switch between the two. Note: you can then activate 'Extra Options' in your upscale process and set the batch count equal to the number of your images under 'Queue Prompt'. But I probably wouldn't upscale by 4x at all if fidelity is important.

In order to recreate Auto1111 in ComfyUI, you need those Encode++ nodes, but you also need the noise that ComfyUI generates to be made by the GPU (this is how Auto1111 makes noise), along with getting ComfyUI to give each latent its own seed, instead of splitting a single seed across the batch. So while I can't tell you the largest batch size on my rig, it's a big enough number that I can't imagine wanting to run a batch size larger than 60 (and if I did, my guess is that I would do it through the "API" interface).
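The per-latent-seed point above is concrete in A1111: a batch uses `seed + i` for the i-th image, and each of those seeds deterministically reproduces its own noise. A toy sketch of that scheme (the noise function is a stand-in, not real latent-noise generation):

```python
import random

def per_image_seeds(base_seed, batch_size):
    """One deterministic seed per latent in the batch (base_seed + offset),
    instead of one seed shared by the whole batch -- matching how A1111
    numbers the seeds of a batch."""
    return [base_seed + i for i in range(batch_size)]

def make_noise(seed, n=4):
    """Toy stand-in for noise generation: same seed -> same noise."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]
```

This is why an A1111 image from a batch can be regenerated alone: its individual seed is known, not buried inside one batch-wide noise draw.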
That way you could just stick the same image in a batch and have it iterate over the same image multiple times.

ComfyUI Node: Batch Resize Image for SDXL (Mikey). Class name: Batch Resize Image for SDXL. Category: Mikey/Image. It generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and color matching. You guys have been very supportive, so I'm posting here first. That node will try to send all the images in at once, usually leading to 'out of memory' issues. Still working on the whole thing, but I got the idea down.

Is there a custom node or a way to replicate the A1111 Ultimate Upscale extension in ComfyUI? The workflow is kept very simple for this test: Load Image -> Upscale -> Save Image. I use the defaults and a 1024x1024 tile. This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0. A homogeneous image like that doesn't tell the whole story, though. The Batch Image node takes single images and makes them into a batch. My problem is that I switched from A1111 to ComfyUI and I couldn't find a workflow that allows me to batch img2img (from a folder) and use ControlNets on them. Before, in 1111 using image-to-image, I was used to the img2img batch for my AI videos. Any help, please?

This way, the image can be resized without distorting or cropping the important feature of the original image. ComfyUI Fooocus Inpaint with Segmentation Workflow.
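Tiled upscalers like Ultimate SD Upscale work by cutting the image into overlapping tiles (e.g. the 1024x1024 default mentioned above), denoising each, and blending the overlaps to hide seams. The coordinate math can be sketched in a few lines; the overlap default here is my own illustrative choice:

```python
def tile_boxes(width, height, tile=1024, overlap=64):
    """Split an image into overlapping (left, top, right, bottom) boxes.
    The overlap region lets seams be blended away when tiles are
    stitched back together."""
    boxes, step = [], tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width), min(top + tile, height)))
    return boxes

boxes = tile_boxes(2048, 2048)  # a 2x-upscaled 1024px image -> 9 tiles
```

Smaller tiles mean more sampling passes (more detail injected, more seams to blend); a single tile covering the whole image degenerates to a plain img2img pass.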
I use Load Images [Deprecated] with index load cap 0 and start index 0, then an Ultimate Upscale, then an Image Save. How do I get the stable-diffusion-webui from Automatic1111 to work with batch upscale for Remacri? I can get it to show up in the txt2img tab's built-in upscaler list. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

Please try ClearPixel if you need to upscale images; it works fine for me. The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

Testing a workflow with batch images and IPAdapter, and also a new way to upscale with tiles. Generate from Comfy and paste the result. Along with the normal image preview, other methods are: latent upscaled 2x; hires fix 2x (two-pass img); image upscaled 4x using the nearest-exact upscale method. Take the output batch of images from SVD and run them through Ultimate SD Upscale nodes. CUI is also faster.

The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node. If you don't want the distortion, decode the latent, use "upscale image by", then encode it again. If I want to make a batch of images, then upscale and regenerate at a higher resolution with the same model, it doesn't seem to be possible. It will reset its place if the path or pattern is changed.
I’m trying to set up an extremely easy-to-use upscaler/detailer that uses lightning-fast LCM and produces highly detailed results that remain faithful to the original image. It becomes more challenging when there is more to batch. So even if there is a solution for batching, I am wondering what the cost would be in terms of stutters in your final animation. They will then behave the same as if you had generated a batch of images using a KSampler.

It upscales the second image up to 4096x4096 (4x UltraSharp) by default for simplicity, but this can be changed to whatever you like. "Upscale image" nodes can also be used to downscale, by setting either a direct resolution or a value under 1 on the "upscale image by" node. An IPAdapter takes in the first image to condition a model that is fed into the mixing KSampler, guiding the mixing even more along the semantic lines of the first image. It's 100 images — it just saves time, I guess.

Images are too blurry and lack detail; it's like upscaling any regular image with traditional methods. Instead, I use a Tiled KSampler with 0.3 denoise strength. Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Horizontal rescale using the focal point.
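The focal-point rescale idea from the earlier examples (keeping the moon at (600, 100) when changing aspect ratio) comes down to cropping a box of the target aspect ratio centered on the focal point, clamped to the image bounds, and only then resizing. A geometry-only sketch; the 768x512 source size in the example is my own assumption:

```python
def focal_crop(width, height, target_w, target_h, focal_x, focal_y):
    """Compute a crop box with the target aspect ratio, centered on a
    focal point and clamped to the image bounds. Resizing the crop to
    (target_w, target_h) afterwards preserves the important feature."""
    src_ar, dst_ar = width / height, target_w / target_h
    if dst_ar > src_ar:          # target is wider: full width, reduced height
        crop_w, crop_h = width, round(width / dst_ar)
    else:                        # target is taller: full height, reduced width
        crop_w, crop_h = round(height * dst_ar), height
    left = min(max(focal_x - crop_w // 2, 0), width - crop_w)
    top = min(max(focal_y - crop_h // 2, 0), height - crop_h)
    return left, top, left + crop_w, top + crop_h

# keep a feature at (600, 100) while cropping a 768x512 image for 800x200
box = focal_crop(768, 512, 800, 200, 600, 100)
```

A plain center crop would have sliced that feature in half; the focal version shifts the window to keep it inside.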
WAS Node Suite has a Load Image Batch node that accesses the filename_text. This subreddit was born from /r/StableDiffusion due to many posts about AI wars on the main stable diffusion sub. I loaded it up, input an image (the same image, fyi) into the two image loaders, pointed the batch loader at a folder of random images, and it produced an interesting but not usable result.

What I do like is that I can batch-upscale images to sizes that would cause memory issues in ComfyUI or Auto1111. If you want to go for more complex options: how do you iterate over files in a folder? (See the "How to iterate over files in a folder?" thread on /r/comfyui.) You can load and choose your desired image from a batch with Load Latent -> Latent From Batch. It seems that every image in a batch ends up with the same prompt (i.e., the prompt gets parsed once and then you move on to the rest of the steps).

2x upscale using the lineart ControlNet. All hair strands are super thick and contrasty, the lips look plastic, and the upscale couldn't deal with her weird mouth expression because she was singing. After borrowing many ideas and learning ComfyUI — this requires all of the images to be the same size, and processing batches can hurt VRAM; not all nodes support batches. I haven't been able to replicate this in Comfy. For example, I can load an image, select a model (4x UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). Hi Reddit! I am fairly new to ComfyUI and was wondering if anyone could help me out. This is the node you are looking for.

Automate batch resizing of images with upscale methods and crop options for AI artists, saving time and ensuring quality. The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better. So my question is: is there a way to upscale an already-existing image in Comfy, or do I need to do that in A1111? "Upscale by model" will take you up to 2x or 4x or whatever the model does.
Batch-processing images by folder in ComfyUI. The workflow is saved as a JSON file. Scaling latents instead of images is also less lossy, because each time you go from latent space to image space and back again, the image data undergoes lossy compression. Sometimes you need to do a 10x10 grid (weights x epochs). FYI, values closer to 1 will stick to your input image more, while values closer to 10 allow more creative freedom but may introduce unwanted elements into your new image.

Currently, I use "Load Image Batch From Dir (Inspired)" to feed SUPIR. I also have prompt templates that it can cycle through, and you can choose to go through them. For outdated custom nodes: Fetch Updates, then Update, in ComfyUI Manager. Also, if this is new and exciting to you, feel free to ask. If you want to upscale your images with ComfyUI, then look no further! The above image shows upscaling by 2 times to enhance the quality of your image.

An extremely easy-to-use upscaler/detailer that uses lightning-fast LCM and produces highly detailed results that remain faithful to the original image. NO PROMPT NEEDED — it just works! As AI tools continue to improve, image upscalers have become a necessary aid for anyone working with images. I think I have a reasonable workflow that allows this. If all the images you wish to upscale have the same prompt and only different seeds, then it is very easy: just use the Load Image Batch node from the WAS Suite.

Almost exaggerated. I find that setting my width and height to 1/2 makes a 2x2 grid per frame, which with LCM can be quick and adds a good amount of detail. This way, the image can be resized without distorting or cropping the important feature of the original image. For reference, my PC specifications are as follows: RTX 4090 Aorus Master, i9-14900K processor, 32GB of RAM (running at 6800MHz CL32), and an Asus Prime Z790-P motherboard. I hope this is due to your settings or because this is a WIP, since otherwise I'll stay away.
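The Latent From Batch trick mentioned nearby — pulling just the image you liked out of a batch — is a simple slice: a batch index and a length. A sketch of that semantics (matching the earlier example where batch index 2, length 2 selects the 3rd and 4th images, i.e. 0-based indexing):

```python
def latent_from_batch(latents, batch_index, length=1):
    """Slice `length` items out of a batch starting at `batch_index`,
    in the style of ComfyUI's 'Latent From Batch' node: index 2,
    length 2 returns the 3rd and 4th images (0-based indices)."""
    return latents[batch_index:batch_index + length]

batch = ["img0", "img1", "img2", "img3"]
picked = latent_from_batch(batch, batch_index=2, length=2)
```

Feeding `picked` onward means only the chosen latents get the expensive upscale pass.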
Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Load Image List From Dir (Inspire): so generate a batch, and then right-click the one you want. So, I just 4x-upscaled the original pic with 0.3 denoise. The image blank can be used to copy (clipspace) to both of the load image nodes; from there you just paint your masks and set your prompts. Depending on the noise and strength, it ends up treating each square as an individual image.

I have a workflow I use fairly often where I convert or upscale images using ControlNet. Before and after comparisons included.