ComfyUI, safetensors, and SDXL: a digest of Reddit tips



For those that don't know what unCLIP is: it's a way of using images as concepts in your prompt in addition to text. CLIPVision extracts the concepts from the input images, and those concepts are what is passed to the model.

When I search with quotes it didn't give any results (I know, it's only giving this Reddit post), and without quotes it gave me a bunch of stuff mainly related to SDXL but not Cascade. It loads "clip_g_sdxl.safetensors", but you should make sure to load the actual Cascade CLIP, not the SDXL one. Interestingly, you're supposed to use the old CLIP text encoder from 1.5 even for most of the SDXL models.

temporaldiff-v1-animatediff.safetensors is not compatible with either AnimateDiff-SDXL or HotShotXL. Trying to use it as an SDXL motion module fails with: ('Motion model temporaldiff-v1-animatediff.safetensors is not compatible with neither AnimateDiff-SDXL nor HotShotXL.', MotionCompatibilityError('Expected biggest down_block to be 2, but was 3 - temporaldiff-v1-animatediff.safetensors is not a valid AnimateDiff-SDXL motion module!'))

🙌 Finally got #SDXL Hotshot #AnimateDiff to give a nice output and create some super cool animation and movement using prompt interpolation. 🍬 #HotshotXL AnimateDiff #ComfyUI Hope you all explore the same. Hot Shot XL vibes.

SDXL was ROUGH, and in order to make results that were more workable, they made two models: the main SDXL model and a refiner. The idea was that SDXL would make most of the image, and the SDXL refiner would improve it before it was actually finished. We thought we could just connect … For me it produces jumbled images as soon as the refiner comes into play, giving 'NoneType' object has no attribute 'copy' errors.

In the added loader, select sd_xl_refiner_1.0_0.9vae.safetensors. With that, we have two more input slots for the positive and negative prompts. This "works", and you will see very little difference. But for a base to start at, it'll work.

Try the SD.Next fork of the A1111 WebUI, by Vladmandic. In my understanding, their implementation of the SDXL Refiner isn't exactly as recommended by Stability AI, but if you are happy using just the Base model (or you are happy with their approach to the Refiner), you can use it today to generate SDXL images.

A 1.5 checkpoint only works with 1.5 ControlNet models, and SDXL only works with SDXL ControlNet models, etc. SDXL most definitely doesn't work with the old ControlNet. There seem to be way more SDXL variants, and although many if not all seem to work with A1111, most do not work with ComfyUI. I have heard the large ones (typically 5 to 6 GB each) should work, but is there a source with a more reasonable file size? It runs fine in Comfy.

I've been meaning to ask about this; I'm in a similar situation, using the same ControlNet inpaint model. I get some success with it, but generally I have to keep a low-mid denoising strength, and even then whatever is inpainted has this pink burned tinge to it. And low-mid denoising strength isn't really any good when you want to completely remove or add something.

I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile and it works wonderfully; it doesn't matter what tile size or image … The Ultimate SD Upscale is one of the nicest things in Auto11: it first upscales your image using a GAN or any other old-school upscaler … SDXL ControlNet Tiling Workflow.

And ComfyAnonymous confessed to changing the name: "Note that I renamed diffusion_pytorch_model.safetensors to diffusers_sdxl_inpaint_0.9.fp16.safetensors to make things more clear." "I left the name as is, as ComfyUI …"

Here, we need the "ip-adapter-plus_sdxl_vit-h.safetensors" model for SDXL checkpoints, listed under the model name column as shown above. After download, just put it into … They just released safetensors versions for the SDXL IPAdapter models, so I'm using those. But somehow this model with this node is giving me memory errors, which only SDXL gave before.

There is an official list of recommended SDXL resolution outputs. 1024x1024 is intended, although you can use resolutions in other aspect ratios with similar pixel counts.

Yes, I agree with your theory. Unlike SD1.5 and SD2.1, base SDXL is so well tuned already for coherency that most other fine-tune models are basically only adding a "style" to it. That also explains why SDXL Niji SE is so different: it is tuned for anime-like images, which TBH is kind of bland for base SDXL because it was tuned mostly for non-anime images.

MASSIVE SDXL ARTIST COMPARISON: I tried out 208 different artist names with the same subject prompt for SDXL. I find the results interesting for …

I'm currently playing around with dynamic prompts. Making a list of wildcards, and also downloading some on Civitai, brings a lot of fun results. I mainly use the wildcards to generate creatures/monsters in a location, all set by the wildcards. There are other custom nodes that also use wildcards (forgot the names) and I haven't really tried some of them.

Dang, I didn't get an answer there, but the problem might have been that it can't find the models. I did a whole new install and didn't edit the path for more models to be my Auto1111 (did that the first time), and placed a model in the checkpoints folder (actually put a few). They are all ones from a tutorial, and that guy got things working. I think I did use the proper SDXL models.

Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints. Put your VAE in: models/vae. AMD users can install ROCm and PyTorch with pip if you don't have them already.

Just use ComfyUI Manager! With the "ComfyUI Manager" extension you can install the missing nodes almost automatically with the "Install Missing Custom Nodes" button.

ComfyUI was created by comfyanonymous, who made the tool to … These workflow templates are intended as multi-purpose templates for use on a wide variety of projects. They are …

So the workflow is saved in the image metadata. You can just drop the image into ComfyUI's interface and it will load the workflow. It's ComfyUI: with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup.
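Since several comments above rely on that metadata trick, here is a minimal sketch of reading the embedded graph back outside ComfyUI, assuming a PNG saved by the stock SaveImage node (the filename is hypothetical). ComfyUI writes the graph into the PNG's text chunks, which Pillow exposes through img.info:

```python
import json
from PIL import Image  # pip install Pillow

# ComfyUI PNGs carry two text chunks: "workflow" (the editor graph)
# and "prompt" (the executable node graph the server actually ran).
img = Image.open("ComfyUI_00001_.png")  # hypothetical filename
workflow_json = img.info.get("workflow")

if workflow_json:
    graph = json.loads(workflow_json)
    print(f"Embedded workflow has {len(graph['nodes'])} nodes")
else:
    print("No workflow metadata; the image may have been re-saved or stripped")
```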
Hi amazing ComfyUI community. Wanted to share my approach to generate multiple hand fix options and then choose the best. I learned about MeshGraphormer from a YouTube video of Scott's. TLDR, workflow: link.

The ComfyUI node that I wrote makes an HTTP request to the server serving the GUI. However, the GUI basically assembles a ComfyUI workflow when you hit "Queue Prompt" and sends it to ComfyUI.
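For anyone curious what that "Queue Prompt" request looks like, here is a rough sketch, assuming a default local ComfyUI (port 8188) and a graph exported from the editor with "Save (API Format)"; verify the details against your install:

```python
import json
import urllib.request

# Graph exported from the ComfyUI editor via "Save (API Format)".
with open("workflow_api.json") as f:  # hypothetical export
    workflow = json.load(f)

# This mirrors what the GUI does on "Queue Prompt": it serializes the
# graph and POSTs it to the server's /prompt endpoint.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # ComfyUI's default address and port
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # includes a prompt_id you can use to poll results
```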
I know about ishq's webui and am using it; the thing I am saying is that the safetensors version of the model already works (albeit only with DDIM) in A1111 and can output decent stuff at 8 steps etc.

In this guide, we'll set up SDXL v1.0 with the node-based Stable Diffusion user interface, ComfyUI. (Easy SDXL Guide)

Step 1: Download the SDXL Turbo checkpoint.
Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: …

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. I spent some time fine-tuning it and really like it. (I've never had good luck with latent upscaling myself.)

SDXL-Lightning LoRAs updated to .safetensors files. They can be used with any SDXL checkpoint model. (This is the .safetensors file they added later, BTW.) Seems very compatible with SDXL (I tried it with a VAE for SDXL, etc.). Just install it and use lower-than-normal CFG values; 2.5 or so seems to work well.

Hey, I'm curious about the mixing of 1.5 and SDXL: a 1.5 model as generation base and the SDXL refiner pass afterwards. I've mostly tried the opposite though, SDXL gen and 1.5 as refiner. SDXL's refiner and HiResFix are just Img2Img at their core, so you can get this same result by taking the output from SDXL and running it through Img2Img with an SD v1.5 model (I set it at 0.236 strength and 89 steps, which will take 21 steps total).
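The arithmetic in that last comment follows the usual img2img convention; as a quick sketch (assuming the sampler runs roughly the last steps × denoise steps, which is how A1111-style img2img behaves):

```python
# At low denoise strength, img2img only runs the tail end of the schedule:
# roughly int(steps * denoise) actual sampling steps.
steps = 89
denoise = 0.236
effective_steps = int(steps * denoise)
print(effective_steps)  # 21 -> "0.236 strength and 89 steps = 21 steps total"
```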
AP Workflow 6.0 for ComfyUI - Now with support for SD 1.5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision, etc. If you need help with any …

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

For reference, this is what a LoRA entry looks like inside a saved workflow: "LoraLoader" }, "widgets_values": [ "koreanDollLikenesss_v10.safetensors", 0.6650000000000006, 0.5200000000000002 ]
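The long decimals are just floating-point noise from the strength sliders. If you want to pull those values back out of a workflow file, here is a small sketch, assuming the stock LoraLoader node, whose widgets_values are ordered [lora_name, strength_model, strength_clip]:

```python
import json

with open("workflow.json") as f:  # hypothetical saved workflow
    graph = json.load(f)

for node in graph["nodes"]:
    if node.get("type") == "LoraLoader":
        lora_name, strength_model, strength_clip = node["widgets_values"]
        print(f"{lora_name}: model strength {strength_model:.2f}, "
              f"clip strength {strength_clip:.2f}")
```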