ComfyUI: load a workflow from an image (Reddit)

This workflow chains together multiple IPAdapters, which allows you to change each piece of the AI Avatar's clothing individually. The problem I am now facing is that it starts already morphed between the two inputs, I guess because it happens so quickly. It upscales the second image to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you like.

Also notice that you can download that image and drag and drop it into your ComfyUI to load that workflow, and you can also drag and drop images onto a Load Image node to load them more quickly. Another general difference is that in A1111, when you set 20 steps at 0.8 denoise, you won't actually get 20 steps but rather a decreased amount, 16.

Notice that the Face Swapper can work in conjunction with the Upscaler. Activate the Face Swapper via the auxiliary switch in the Functions section of the workflow.

Drag and drop doesn't work for me. Is there a common place to download these? None of the Reddit images I find work, as they all seem to be jpg or webp. You can save the workflow as a JSON file and load it again from that file.

As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, though I felt a bit overwhelmed. I'm sorry, I'm not at the computer at the moment or I'd get a screen cap.

It's simple and straight to the point. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise? Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .PNG into ComfyUI.

Upcoming tutorial: SDXL Lora + using a 1.5 Lora with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

Load Image node: this is the node you are looking for. Ensure that you use this node and not Load Image Batch From Dir.

My second attempt: I thought to myself, I will go as basic and as easy as possible. I will limit myself to only large, popular models and stick to basic ComfyUI nodes as much as possible, meaning I have no custom nodes except for Manager and Workflow Spaces, that's it. They are completely separate from the main workflow.

Get a quick introduction to how powerful ComfyUI is: Hidden Faces. The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not, and the diagram doesn't load into ComfyUI, so I can't test it out. A quick question for people with more experience with ComfyUI than me. This is what it looks like, second pic.

I can load workflows from the example images through localhost:8188; this seems to work fine. Please share your tips, tricks, and workflows for using this software to create your AI art. Thanks.

I had to load the image into the mask node after saving it to my hard drive. Load your image to be inpainted into the mask node, then right-click on it and go to edit mask.

Flux Schnell is a distilled 4-step model. The images above were all created with this method. This is just a simple node built off what's given and some of the newer nodes that have come out. Enjoy.
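Several of the snippets above come down to the same mechanism, so here is a minimal sketch of what drag-and-drop loading actually reads: ComfyUI writes the workflow graph as JSON into the PNG's text chunks, which is exactly the data that jpg and webp re-encodes strip away. The use of Pillow and the file name are my assumptions, not part of the posts above.

```python
# Minimal sketch: read the workflow JSON that ComfyUI embeds in its PNGs.
# ComfyUI stores two text chunks: "workflow" (the editor graph) and
# "prompt" (the API-format graph). Requires Pillow (pip install pillow).
import json
from PIL import Image

def extract_workflow(png_path: str) -> dict | None:
    """Return the embedded workflow graph, or None if the image has none."""
    img = Image.open(png_path)
    # Pillow exposes PNG tEXt/iTXt chunks through the .text mapping
    raw = getattr(img, "text", {}).get("workflow")
    return json.loads(raw) if raw else None

workflow = extract_workflow("my_generation.png")  # hypothetical file name
if workflow is None:
    print("No workflow metadata; was this image re-encoded as jpg/webp?")
else:
    print(f"Workflow contains {len(workflow['nodes'])} nodes")
```

This is also why the jpg/webp complaint above keeps coming up: those formats drop the PNG text chunks, so there is nothing left for ComfyUI to load.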
I thought it was cool anyway, so here. Maybe a useful tool for some people. Unfortunately, the file names are often unhelpful for identifying the contents of the images.

Aug 7, 2023 · Workflows can only be loaded from images that contain the actual workflow metadata created by ComfyUI and stored in each image ComfyUI creates. Your efforts are much appreciated.

This workflow allows you to load images of an AI Avatar's face, shirt, pants, shoes, and pose, and generates a fashion image based on your prompt. I have like 20 different ones made in my "web" folder, haha. … and spit it out in some shape or form.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. [DOING] Clone public workflows by Git and load them more easily.

Please keep posted images SFW. That image would have the complete workflow, even with the 2 extra nodes. If you are still interested: basically, I added 2 nodes to the workflow of the image (image load and save image). Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. This workflow can use LoRAs and ControlNets, and enables negative prompting with the KSampler, dynamic thresholding, inpainting, and more. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Img2Img works by loading an image, like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated). Prediffusion - this creates a very basic image from a simple prompt and sends it as a source. Just load your image and prompt, and go.

And above all, BE NICE. Browse and manage your images/videos/workflows in the output folder. Thanks a lot for sharing the workflow.

Images created with anything else do not contain this data. Download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Add your workflows to the collection so that you can switch and manage them more easily.

I tried the load methods from WAS Node Suite and ComfyUI-N-Nodes on ComfyUI, but they seem to load all of my images into RAM at once. Load Image List From Dir (Inspire).

With, for instance, a graph like this one, you can tell it to: load this model, put these bits of text into the CLIP encoder, make an empty latent image, use the loaded model with the embedded text and noisy latent to sample the image, and now save the resulting image. Hello there. Welcome to the unofficial ComfyUI subreddit.

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

I can load the ComfyUI through 192.168.1.1:8188, but when I try to load a flow through one of the example images, it just does nothing. Get Started with ComfyUI - Drag and Drop Workflows from an Image!

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.
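The "unhelpful file names" problem above is easy to script around, since the identifying information lives inside the files themselves. Below is a hedged sketch, not an existing tool: it scans a folder and prints a prompt snippet for every image that actually embeds ComfyUI metadata. The folder path is a placeholder, and Pillow is assumed.

```python
# Scan a folder of PNGs and label each one using its embedded ComfyUI
# "prompt" chunk, so unhelpful file names stop mattering. Pillow required.
import json
from pathlib import Path
from PIL import Image

def summarize(folder: str) -> None:
    for path in sorted(Path(folder).glob("*.png")):
        meta = getattr(Image.open(path), "text", {})
        if "prompt" not in meta:
            print(f"{path.name}: no ComfyUI metadata")
            continue
        graph = json.loads(meta["prompt"])  # API graph: {node_id: node}
        # Use the first literal CLIPTextEncode text as a rough label;
        # linked inputs show up as [node_id, slot] lists, so skip those.
        label = next(
            (n["inputs"]["text"] for n in graph.values()
             if n.get("class_type") == "CLIPTextEncode"
             and isinstance(n["inputs"].get("text"), str)),
            "(no literal prompt found)",
        )
        print(f"{path.name}: {label[:60]}")

summarize("ComfyUI/output")  # placeholder path
```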
=== How to prompt this workflow ===
Main Prompt
-----
The subject of the image, in natural language. Example: a cat with a hat in a grass field.
Secondary Prompt
-----
A list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].
Style and References
-----

It is necessary to give it the last generated image, as it loads the image locally. My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it.

I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP text encoder as indicated on the diagram I have here from the GitHub page. Pixels and VAE.

I can't load workflows from the example images using a second computer. That's how I made and shared this.

That node will try to send all the images in at once, usually leading to 'out of memory' issues. The ComfyUI/web folder is where you want to save/load .json files.

And images that are generated using ComfyBox will also embed the whole workflow, so it should be possible to just load it from an image. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

It's nothing spectacular, but it gives good, consistent results. Starting workflow. I'm not really checking my notifications.

To be fair, I ran into a similar issue trying to load a generated image as an input image for a mask, but I haven't exhaustively looked for a solution. I have to second the comments here that this workflow is great.

How to solve the problem of looping? I had an idea to just write an analog of a two-in-one Save Image and Load Image node, which would save the last result to a file and then output it at the next render in the queue.

I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace. Is there a way to load each image in a video (or a batch) one at a time to save memory? Welcome to the unofficial ComfyUI subreddit. I have a video and I want to run SD on each frame of that video.

Sync your collection everywhere by Git.

So, I just made this workflow, ComfyUI. The graph that contains all of this information is referred to as a workflow in Comfy. Ending workflow.

I liked the ability in MJ to choose an image from the batch and upscale just that image. You can load these images in ComfyUI to get the full workflow. The prompt for the first couple, for example, is this:

Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale.
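For the "load the latest image in a given directory" goal above, a tiny custom node is enough. The sketch below is my own invention rather than an existing node (the class name, category, and default folder are all assumptions); dropped into ComfyUI/custom_nodes/, it picks the newest file by modified time instead of by name.

```python
# Hypothetical ComfyUI custom node: outputs the most recently modified
# image in a directory, so each queued run picks up the newest file.
import os
import numpy as np
import torch
from PIL import Image

class LoadNewestImage:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"directory": ("STRING", {"default": "input"})}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "load"
    CATEGORY = "image"

    def load(self, directory):
        files = [os.path.join(directory, f) for f in os.listdir(directory)
                 if f.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))]
        newest = max(files, key=os.path.getmtime)  # newest by mtime, not name
        img = Image.open(newest).convert("RGB")
        # ComfyUI IMAGE tensors are float32, shaped [batch, height, width, channel]
        arr = np.asarray(img).astype(np.float32) / 255.0
        return (torch.from_numpy(arr)[None, ...],)

NODE_CLASS_MAPPINGS = {"LoadNewestImage": LoadNewestImage}
```

Sorting by modification time also sidesteps the complaint further down about directory loaders that only sort by file name.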
The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all the nodes whose parameters I don't actually change.

You need to select the directory your frames are located in (i.e., where you extracted the frames zip file, if you are following along with the tutorial). image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which will determine the length of the animation.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem has resolved itself.

Nobody needs all that, LOL. About a week or so ago, I began to notice a weird bug: if I load my workflow by dragging the image into the site, it'll put in the wrong positive prompt.

This workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM.

Pretty comfy, right? ComfyUI is a popular tool that allows you to create stunning images and animations with Stable Diffusion. A lot of people are just discovering this technology and want to show off what they created. I am trying to understand how it works and created an animation morphing between 2 image inputs.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

If it's a .json file, hit the "load" button and locate the .json file location; open it that way.

Hey all - I'm attempting to replicate my workflow from 1111 and SD1.5 by using XL in Comfy.

This will open the live painting thing you are looking for. After borrowing many ideas, and learning ComfyUI.

Details on how to use the workflow are in the workflow link. And you need to drag them into an empty spot, not a Load Image node or something. However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI. AP Workflow v5.0 includes the following experimental functions.

Then I fix the seed to that specific image and use its latent in the next step of the process. You can then load or drag the following image in ComfyUI to get the workflow. Welcome to the unofficial ComfyUI subreddit.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Here's a tip: for speed, you can load an image using the clipspace method, by right-clicking on images you generate.

You can also easily upload and share your own ComfyUI workflows, so that others can build on top of them! Basically, if you have a really good photo but no longer have the workflow used to create it, you can just load the image and it'll load the workflow.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads image 001 from the folder, and for the next gen it grabs image 002 from the same folder? Thanks in advance! My ComfyUI workflow was created to solve exactly that.

I've been using ComfyUI for nearly a year, during which I've accumulated a significant number of images in my input folder through the Load Image node. This causes my steps to take up a lot of RAM, leading to out-of-memory kills.
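There is no built-in I can point to that matches the 001-then-002 request above exactly, so here is a hedged sketch of the usual workaround: persist a counter to a side-car file so every queued run advances to the next image. The state-file name and the .png-only filter are hypothetical choices.

```python
# Return the next image from a folder on each call, remembering progress
# in a small JSON side-car so consecutive queued gens step through
# 001.png, 002.png, ... in name order (zero-padding keeps sorting right).
import json
import os

STATE_FILE = "next_index.json"  # hypothetical side-car file

def next_image(folder: str) -> str:
    files = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    idx = 0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as fh:
            idx = json.load(fh)["index"]
    path = os.path.join(folder, files[idx % len(files)])  # wrap around at the end
    with open(STATE_FILE, "w") as fh:
        json.dump({"index": idx + 1}, fh)
    return path
```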
If this is what you are seeing when you go to choose an image in the image loader, then all you need to do is go to that folder and delete the ones you no longer need.

It animates 16 frames and uses the looping context options to make a video that loops. Have fun. This is basically like copy-paste and doesn't save the files to disk.

Are you referring to the Input folder in the ComfyUI installation folder? ComfyUI runs as a server, and the input images are 'uploaded'/copied into that folder.

The image you're trying to replicate should be plugged into pixels, and the VAE for whatever model is going into the KSampler should also be plugged into the VAE Encode.

Hi all! I was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data, like prompts, steps, sampler, etc. Any ideas on this?

You need to load and save the edited image. Belittling their efforts will get you banned. These are examples demonstrating how to do img2img. Welcome to the unofficial ComfyUI subreddit.

In 1111, using image-to-image, you can batch load all frames of a video, batch load ControlNet images, or even masks, and as long as they share the same name as the main video frames, they will be associated with the right frame when batch processing.

Pro-tip: insert a WD-14 or a BLIP interrogation node after it to automate the prompting for each image. Made this while investigating the BLIP nodes: it can grab the theme off an existing image, and then, using concatenate nodes, we can add and remove features. This allows us to load old generated images as part of our prompt, without using the image itself as img2img.
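To answer the generation-data question above without replacing the current workflow, one option is to parse the metadata directly instead of drag-and-dropping. A sketch, with the file name as a placeholder and Pillow assumed; the "prompt" chunk and the KSampler input names follow ComfyUI's standard PNG output:

```python
# Read sampler settings out of the "prompt" text chunk in a ComfyUI PNG
# without touching the currently loaded workflow. Requires Pillow.
import json
from PIL import Image

def generation_info(png_path: str) -> list[dict]:
    meta = getattr(Image.open(png_path), "text", {})
    graph = json.loads(meta.get("prompt", "{}"))  # API graph: {node_id: node}
    samplers = []
    for node in graph.values():
        if node.get("class_type") == "KSampler":
            inp = node["inputs"]
            samplers.append({
                "steps": inp.get("steps"),
                "cfg": inp.get("cfg"),
                "sampler": inp.get("sampler_name"),
                "scheduler": inp.get("scheduler"),
                "denoise": inp.get("denoise"),
            })
    return samplers

print(generation_info("good_photo.png"))  # hypothetical file name
```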