Stable Diffusion highres fix settings
Even a 1x highres fix gives good enhancement (well, it did in mid-2023). When using A1111 in the last few months, I got around this issue with Tiled VAE.

Sytan's official SDXL ComfyUI 1.0 workflow.

Most of the time, if SD is generating something pretty, the highres pass then comes along and changes it.

A1111 released a developmental branch of the Web UI this morning that allows the choice of .ckpts during HiRes fix.

Upscale by 1.5 (so it makes it larger, but not HUGE). Scale: honestly? I start at 5.5 (about all my PC can handle), plus Restore Faces (if there's a face to restore).

The hires fix is activated BEFORE the refiner, so the image generated is too accentuated by the first model, and I lose the realistic effect I get without the hires fix.

Subject matter is subjective, but generally stuff with more subjects or a line-art appearance tends to break down faster.

My generation is stuck at 98% or 99% and won't finish. When I look at CMD it says it's 100% done. This only happens when highres fix is on.

You've seen the images with double heads and so on, I'm sure; the high-res fix takes care of that. Highres fix is just img2img, basically.

Stable Diffusion with HiRes fix. No touching up or inpainting was done.

I have enabled high-res fix with a strength of 0.4.

If you are getting NaN errors, black screens, bad-quality output, mutations, missing limbs, color issues, artifacts, blurriness, or pixelation with SDXL, this is likely your problem: "NansException: A tensor with all NaNs was produced in Unet."

Use 0.2 denoising strength and select an upscaler to your liking, or download one from the upscale wiki (it's down for the moment).

Image generation settings in Stable Diffusion Automatic1111: let's go through the settings for HiRes fix. This is a great post. Again, how does it process such a big image?
It splits the image into tiles and then goes through them one by one.

These allow me to actually use 4x-UltraSharp to do 4x upscaling with highres fix. Also, I was using SuperMerger for SDXL models in A1111.

This could be either because there's not enough precision to represent the picture, or because your video card does not support the half type.

Does anyone have performance issues since last month's update? I have an RTX 2060 with 6 GB of VRAM; I used to render fine pics with a 1.5x highres fix, all other settings the same.

I use the highres fix with the noise set low; after that I do smaller steps, like 0.1. I need to do more testing with high-res fix.

If your denoise strength is high enough (but not higher than 0.5), it also works quite well even if you only mask the eyes.

Since a few days ago, I noticed straight away that Hires Fix is taking a lot longer than it used to: about 5 minutes.

I never, ever use the face-fixing neural networks, because they make everything look photoshopped. Now I don't like to use highres fix, because I prefer to render four txt2img images at low res first and then pick the one I like to send to img2img.

There is (almost) no point in using a 1.5 model. At 0.1 I get double mouths.

The baseline image generated from txt2img is a 1024x1024 image generated using highres fix (settings in the PNG file). For iterating a txt2img gen in the img2img tab, playing around with the denoise and other parameters can help. However, that is not the case. Prompts are captioned on each image, and here are the settings used, with CodeFormer Restore Faces enabled.

Hires fix is a convenience feature. The traditional highres fix is actually two passes: a smaller image is generated, then an img2img pass is run.
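The tile-by-tile approach described above can be sketched in a few lines. This is an illustrative sketch of how a tiled upscaler like the SD Upscale script might compute overlapping tile positions; the function name, default tile size, and overlap are assumptions for the example, not the actual A1111 code.

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Top-left corners of overlapping tiles that cover a width x height image."""
    stride = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    # Make sure the right and bottom edges are always covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# A 1024x1024 target with 512px tiles and 64px overlap yields a 3x3 grid;
# each tile is diffused separately and blended back over the overlap band.
print(len(tile_boxes(1024, 1024)))
```

The overlap band is what prevents visible seams: adjacent tiles share a strip of pixels that gets blended, which is also why detail can bleed into neighboring tiles.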
Prompt fragment: detailed, 1 young girl, detailed beautiful skin, face focus, detailed eyes. Workflow included.

I've searched online for a while and read Automatic1111's Features (light) manual.

I'm not using xFormers (because I've read it's not helpful with a 4090) and have no launch arguments set (other than the basically mandatory no-half VAE, and that's set in the Vlad UI settings, not as an argument).

Simply upscale using the SD upscale script, which you can find in the img2img tab; do something like 0.2 denoising strength, sampler Euler a.

The high-res fix is for the times when you generate images that are taller or wider than the 512 px format.

Has anyone figured out the ideal settings for Kohya's fix?

It's a bit strange that Forge is looking for the venv in your A1111 directory.

Guys, is there a way I can input a checkpoint?

The problem with high-res fix is that it's random luck what results you get. I'm having some issues with (as the title says) the HighRes-Fix Script.

For upscaling, you could try the SD Upscale script or the Ultimate SD Upscale extension, both activated in the scripts dropdown in img2img. You could generate at a lower res.

Highres fix: generates at the base resolution > upscales to the target resolution with an upscaler model (think waifu2x, but much better; or latent upscale, but that's not as fast to explain).

I was wondering if some other one is better.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Hi people! So I started using Stable Diffusion locally, first with Easy Diffusion, and I generated a lot of images, but often the eyes are weird: same prompt, same model, slightly different seed, computed in A1111. I am trying to recreate it.

If you're using Windows and Stable Diffusion is a priority for you, I definitely wouldn't recommend an Intel card.

Plain upscaling tends to add in weird artifacts, I find.
(It significantly lowers deformities, but if you go any bigger than 1280 I think it's not stable with the fix either.)

Depending on your resolution you might skip the highres fix and go straight to img2img: click the script dropdown menu at the bottom, choose "SD upscale", then select 4x-UltraSharp and use scale factor 2.

The upscale in Extras allows upscaling to a specific arbitrary size, so you just need to start with any 16:9 multiple of 64 (like 1024x576) and then you can upscale directly to 1920x1080 using the upscaler of your choice. I could be wrong, but it sounds like you may have forgotten to do this.

SD 1.5 is trained on 512x512 images (while v2 is also trained on 768x768), so it can be difficult for it to output images at a much higher resolution than that.

Instead of just a few seconds for a 512x512 image, it's now taking about 30.

SDXL 1.0, ComfyUI, Mixed Diffusion, High-Res Fix. Hi guys. I posted this as a comment in another thread, but thought it was worth its own text post.

Tick it, put 512x512 resolution (or something close) on the sliders, and select how much you want to upscale the image.

So, as I suspected, it turns out that hi-res fix uses a two-step process: it renders a smaller image first, then upscales and makes changes to that image, so it is essentially img2img scaling to a higher resolution.

However, I don't know how they interact with each other, if at all, and there's only one "Generate" button. High definition without high-res fix.

For 1.5 models, stick with 512x512 or smaller for the initial generation.

In my case I tested it with the latest Automatic1111 (as of January 3rd 2022) and it works well on PC.

Most of the time I use highres fix.
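The advice above leans on picking a base resolution whose sides are multiples of 64 (1024x576 for 16:9, then upscaling toward 1920x1080). Here is a small helper that does that arithmetic; the function names are made up for the example and are not part of any UI.

```python
def snap_to_64(w, h):
    """Round a width/height pair to the nearest multiples of 64."""
    return max(64, round(w / 64) * 64), max(64, round(h / 64) * 64)

def base_for_ratio(ratio_w, ratio_h, long_side=1024):
    """Pick a model-friendly base resolution for a given aspect ratio."""
    if ratio_w >= ratio_h:
        return snap_to_64(long_side, long_side * ratio_h / ratio_w)
    return snap_to_64(long_side * ratio_w / ratio_h, long_side)

# 16:9 at a 1024 long side lands exactly on 1024x576, which upscales
# cleanly toward 1920x1080 in the Extras tab.
print(base_for_ratio(16, 9))
```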
Now, I started experimenting with additional wildcards, and as long as you use them via the styles dropdown, everything will still be as expected.

The overlap is probably going into adjacent tiles.

Please keep posted images SFW.

Latents need at least 0.5 denoising.

SD needs better resolution for a good face.

One major benefit of the old high-res fix was that I could use it to generate a 768x768, then upscale to a much larger resolution AND a different aspect ratio while preserving the image.

It might become a much more functional quick fix if/when UIs integrate the SD 2.0 upscaler using x4-upscaler-ema.ckpt.

First, denoising at 1 is a debatable choice when using hires fix, since it will not produce the same image as the first pass; I suggest using something in the 0.6 range.

Generate images 1024x1024 or greater directly.

For SD 1.5, I would use 20-30 steps and then hi-res fix 2x for about half that number of steps and less than 0.5 denoising strength.

If you are using SD 1.5, you'll get the best results using the Inpaint ControlNet in the txt2img tab with high-res fix as your first step.
Part of my workflow involves highres fixing at varying denoise strengths and merging the results together in post, so it'd be two passes.

I checked the benchmarks here.

Highres Fix ON, set to 1.5x.

Manga-style line art tends to get fractalized or twinned faster than single subjects (like a human face or portrait).

I just got Stable Diffusion yesterday, messed around with it a bit, and downloaded some models.

Hi experts, I have recently started playing with Hires fix.

Use 0.1 denoising, and change the tile size to 768 if needed.

With a lot of models, the higher the initial image resolution, the better the result you get.

Sytan's SDXL 1.0 workflow with Mixed Diffusion and reliable high-quality High-Res Fix, now officially released!

Using the same settings: the original generation is 832x1216 for portraits with SDXL, then 1.5x with highres fix.
DeepShrink just does one pass and creates the initial image from scratch, already at the very high resolution.

My current workflow is to get a 2x upscale initially with highres fix, then use Ultimate SD Upscale with 512 tiles and low denoising, then upscale through the Extras tab.

There's a "highres fix" option if you're using Automatic1111.

A few weeks ago I was able to run hires fix without any issue; I used to render with a 1.5x high-res fix on, but nowadays I instantly get CUDA out of memory. The results, even if disappointing, are really good.

The problem is that the height and width in the main settings are the "first pass" height/width.

Performance is clearly better, but I think I'm missing out on something. For me the issue is when using highres fix: I tried optimizing PYTORCH_CUDA_ALLOC_CONF, but I doubt it's the optimal config for 8 GB of VRAM.

Resolutions (and the method used to hires-fix them) and batches are all going to change the resulting seconds per iteration.

As I recall, if you have a GPU below the 4000 series, xFormers is better, but SDP cross-attention is faster with the 4000 series.

I'm wondering if there's a way to batch-generate different highres fix versions of an image.

Hey guys. With a 1.5 model you can turn on high-res fix. I believe the results are far better than the refiner's.

I've read several guides on hires fix and upscalers, and most people recommend using a low denoising strength (0.15-0.25) for minimal changes.
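For the VRAM complaints above (CUDA out-of-memory during the hires pass, PYTORCH_CUDA_ALLOC_CONF tuning), a common starting point on an 8 GB card looks like the fragment below. The specific values are frequently suggested defaults, not guaranteed optimal for every card; `--medvram` and `--no-half-vae` are existing A1111 launch flags, and the allocator keys are standard PyTorch options.

```shell
# Example webui-user.sh fragment; adjust values for your card.
# max_split_size_mb limits allocator fragmentation, which is a frequent
# cause of OOM at the hires-fix step rather than a true lack of memory.
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
export COMMANDLINE_ARGS="--medvram --no-half-vae"
```

On Windows the same values go into webui-user.bat with `set` instead of `export`.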
I would generate at 512x512 or sometimes 512x768 (although I would occasionally run into trouble there) and then use highres fix to upsample 2x (or else it'll get cropped to match the resized output). tl;dr: just use the "scale by" slider and keep the "resize width to" and "resize height to" sliders at 0.

I know that A1111 can be questionable at best regarding inference speeds and VRAM usage for highres fix, due to the way it was coded. I'm sure the problems come from the 2.x models.

20 steps, 1920x1080, default extension settings: 1m 02s. Hires fix: 20 steps (with 10 steps for the hires pass), 800x448 -> 1920x1080. "Deep shrink" seems to produce higher-quality pixels, but it makes incoherent backgrounds by comparison.

I'm trying to understand what the hi-res fix does; from the settings, my intuition tells me it's just doing img2img at the set resolution and denoising. Is that correct? If so, can I do batch hi-res fix on images I've already generated and have it use the same prompt I used to generate the original image (taken from the PNG info)?

-Can now change upscalers directly from the highres fix menu; no longer any need to go back and forth in the Settings.

I have a problem with the new hires fix.

Maybe tick the "highres fix" option as well, to reduce the likelihood of mangled images, though this will increase the output time (roughly double on my PC). The first step is a render (512x512 by default), and the second render is an upscale.

Something that I don't think a lot of people realize: different samplers require different levels of highres fix denoising strength for optimal results.

So I've been told that instead of doing 1920x1080, you should do 640x512 and upscale 3x for better and faster results, but there are a bunch of different options. What's the difference? Upscalers change the size of the image using bilinear, nearest-neighbor, and similar methods.

The hi-res fix version looks like ass and ruins the face. HiRes fix barely ever does anything useful for me.
After some testing, it is better, but not perfect. You can also check the .bat files in the Forge directory.

Things really start getting interesting when you use SDXL itself for the hi-res pass and the refining.

As others have said, the "portrait" setting of 512x768 also seems a good compromise.

Whenever I try to use hi-res fix I run out of memory. Adjust the width and height accordingly.

However, if you add a wildcard into the main prompt (for example __locations__ with interesting places) and you keep the __famouspeople__ in the styles, it no longer behaves as expected.

If you use Automatic1111 with the 512-based 1.5 model, you can turn on high-res fix. And soon you'll be able to make some stellar Stable Diffusion images using Automatic1111.

You have the denoising set high, but not complete, so it's wiping out 75% of your image with noise and reconstructing from there; this often causes problems.

Take a look at this one: it was a portrait image, then stretched to landscape format.

That is way too many.

From the code, I know it's just a regular upscale using the algorithm selected in Settings, and then img2img (#6248).

Going forward, I will use this option as intended, typically using low denoise and only running it on the images I want to enlarge.

It changing the image is the point.

Those extra details it adds: I don't see them in any of the "amazing" 1.5 models, to be honest.
E.g., just look at the weathering. If you're using XL, I suggest DreamShaper XL Turbo, or an XL turbo model in general.

In my experience, I stopped using hires fix; I generate a lower-res picture and use the upscaler instead. I tried to recreate an image.

It's easier to control than highres fix, and you can balance the creation of new detail with two parameters, weight and ending step, as opposed to just denoising strength.

The new hires fix is better than the old one, but you need to experiment with it to get the best results. Highres fix is "OK-ish" at trying to fix issues with cloning and dupes, but it often fails.

At 0.55 the image I get is super blurry, very noisy, or not sharp at all.

This seems to have started when I updated to the Automatic1111 version where the high-res fix was changed, in that it now includes highres steps and an upscaler that you can choose, and so on.

It, surprisingly, helps to fix highres artifacts.

-Can see the before-and-after resolution in the highres fix menu; tl;dr, easier to understand.

You skipped the upscale part. I am already using --medvram but still have the issue. Or you can use inpainting.

The next step was high-res fix.

I've been mainly using Absolute Reality with hires fix (4x-UltraSharp, upscale by 2, low denoise). This only happens when Highres fix is on.
So, as I suspected, hi-res fix uses a two-step process. In my first article I examined the effects of the denoising strength and hires steps parameters for the "latent" upscaler.

It's fine as long as the input and output have the same ratio.

If the generated image is good enough, I then use just a 4x upscaler on it, so I can get a high-quality 4K image.

Upscaling using HiRes fix.

They might suggest using WSL2, but they won't mention the memory leak issue that can crash Windows.

I use highres fix on the good images after making a batch of low-res ones; at 2x you usually get a good face.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Before we start, it should be clarified what "HiRes fix" actually does: Stable Diffusion v1.5 is trained on 512x512 images.

Bothered by CUDA out-of-memory problems? Had a crash? Clarification: there was another change to one of these files today; maybe that is it.

IMHO the best use case for latent upscalers is in highres fix, with a reasonable upscale (1.5x, or 2.0x if the base resolution is not too high), because it allows you to use greater batch sizes.

Inpainting works fairly well to amputate any unwanted limbs, and there is a built-in face-fixing checkbox in A1111 that can also be used with inpainting.

Use the same seed as your base image.

Now, if you turn on High-Res Fix in A1111, each ControlNet will output two different control images: a small one and a large one. (The last two give big artifacts that turn into interesting textures.)

I think you should reset your tile settings.

I use hires fix to try to get the most out of my 8 GB GPU. My question is: if time isn't a problem, how do you proceed?

Pretty sure the answer is exactly what you fear.

Still a bit overloaded with details IMO, but pretty to look at.

Then 1.5x with highres fix, then 3x or 2x with an upscaler; there's no limit, but I find it hard to work with images above 3000x4000.

First of all, sorry if this has been covered before; I did search and nothing came back.

Is it possible to apply high-res fix, or a similar effect, to an image with img2img? I don't want to just upscale; I want to apply the high-res-fix effect.

This is not hires fix; this is Kohya's hires fix. It does not make your image large, but it allows you to do a larger image without those repeating parts.
This happened every time I used DPM++ 2M.

Using the tile ControlNet with highres fix but without SD Upscale: you might think the image would be produced in tiles, due to the name "tile ControlNet". However, that is not the case.

With hires fix on the Automatic1111 web UI, the preview picture will look good until the generation hits 80%, and then it messes up the picture; sometimes the people will look disfigured, other times it is completely broken.

What exactly is that? How would I go about it?

Problem with highres: hi everyone, I have a problem when I generate.

The tutorial I found online said to use highres fix to make the images higher resolution, but when I choose it, generation stops at around 50%.

The upscaler set to "Latent" for Hi-Res Fix appears not only to add detail; it hallucinates bizarre additional fractal structure.

From what I've seen, there are three options involved in upscaling in A1111: Hires fix, ControlNet (tile), and Ultimate SD Upscale.

I use hires fix with the settings described in this post. Above roughly 0.5 denoise, it really has a tendency to change things.

This is raw txt2img with no negative prompt. The Gaussian noise from the Stable Diffusion process gets added *after* the image is converted to a latent.

prompt: (masterpiece), (best quality:1.0), (ultra highres:1.0), detailed, 1 young girl, detailed beautiful skin, face focus, detailed eyes

It's never quicker; it's always slower than if I'd just used a batch count of the same number.

The next step for Stable Diffusion has to be fixing prompt engineering and applying multimodality.

Highres fix is img2img. After that I iterate on the image. Then comes the hires fix.
I'm playing with DreamShaper XL without the refiner, but using Hires Fix.

Thank you for sharing your research and your process.

0.3 denoise and 25-30 steps gives me pretty good results.

- Denoise: this depends on your upscaler.

So it is not an upscale like the normal hires fix, and it doesn't increase the generation time.

I'm trying to do the same as high-res fix, with a model and weight below 0.5.

This is a repost of my findings, but I figured it was worth it. This is basically the same as the highres fix in txt2img. With the script, you have to select an upscaler for it to work.

No touching up or inpainting was done. I also like the subject matter you chose to experiment with.

I used to use the fork of it, Forge, which had solid speeds, but even then inference could be slow.

Image 1: 1028x1028, 10 steps, with Prompt S/R replacements.

SD 1.5 from 512x512 to 2048x2048. Can someone help me? I'm getting this type of result and I don't know how to fix it.

I still have this bug with img2img and highres fix (image upscale only; there is no problem with latent upscale) from time to time: Traceback (most recent call last): File "F

For highres fix I do this: generate an image without guidance / with OpenPose / anything as a baseline, using EasyNegative; put this image into Multi-ControlNet, OpenPose Full with default settings and Lineart Realistic with default settings.

Automatic1111, Windows 10, 64 GB RAM, Nvidia 3090. Same seed, same prompt.
I have been running highres fix to generate 1024x1024 images flawlessly for a while now, but suddenly it gets stuck at 50% (where the second pass starts).

Here, from 0:16 onwards, you can see my generated image get deformed or corrupted; this happened multiple times.

I just upgraded from a 1080 to a 4070 Ti.

It is created with a training trick I have been experimenting with to rescale the model's context.

This is easy to do. For 512x768, it's nearly 1.5x.

Run the same Automatic1111 from Google Chrome and you won't have the problem.

2) Highres fix will basically "destroy" my image: it becomes giga blurry/smudged, or just crazy glitched colors. I'm not sure how to fix this, and I can't increase the steps too much, since my VRAM runs out and the image can't finish generating.

Perfect support for A1111 High-Res Fix.

All of the settings for the shipped version of my workflow are geared towards realism gens.

Use the setting "ControlNet is more important", and use the crop method on the far right.

Agreed: I find highres fix tends to be a lot crisper than upscaling, particularly for fine details.

Hi-Res fix simply creates an image (via txt2img) at one resolution, upscales that image to another resolution, and then uses img2img to create a new image from the upscaled one.

When I use highres fix it goes to 95-98% in a few seconds, and then hangs for several minutes (anywhere between 2 and 15) before completing.
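The two-step flow described above (txt2img at a base resolution, upscale, then img2img at the target) can be written down as a small planner. This is a sketch of the bookkeeping only, not the actual pipeline; in particular, the assumption that roughly steps x denoise of the second-pass steps actually run mirrors how img2img commonly scales its step count, but treat it as an approximation.

```python
def hires_fix_plan(base_w, base_h, scale, steps, denoise):
    """Return the target resolution and the approximate second-pass step count."""
    # Pass 1: ordinary txt2img at (base_w, base_h).
    target = (int(base_w * scale), int(base_h * scale))
    # Pass 2: upscale to `target`, then img2img; with denoising strength d,
    # only about steps * d of the scheduled steps are actually executed.
    second_pass_steps = max(1, int(steps * denoise))
    return target, second_pass_steps

print(hires_fix_plan(512, 512, 2.0, 20, 0.4))
```

This also explains the "hires steps = 0" behavior mentioned elsewhere in the thread: the second pass simply inherits the base step count before the denoise scaling is applied.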
I use Nickelback for my usual upscaling, but have never switched highres from the default "Latent" option.

Before hires fix was a thing, you would have to go to the Extras tab, upscale there, then bring the upscaled image into img2img and run another gen there. It is just a convenient option to save you time by not having to load your images manually into img2img to upscale them.

There's a checkbox called "highres fix".

If you can't go any larger in your first pass, then your two options are Ultimate SD Upscale at as high a denoise as you can get away with, along with ControlNet Tile, or the MultiDiffusion upscaler.

I can't reproduce the issue.

Highres Fix: Off; W x H: 512x768; Sampler: Euler a; Steps: 25; CFG: 5.

I've been experimenting with it a lot lately. So if you set a resolution higher than that, weird things can happen; multiple heads are the most common.

And DreamShaper XL Turbo. Tried hi-res fix on a variety of prompts at a variety of settings to see what worked best. Then rerun at 40 steps (probably overkill) with Highres Fix at 1.25.

My workflow: a 3x3 batch of 512x512 pictures; pick the best, copy the seed, paste the seed, and click on hires fix.

These also don't seem to cause a problem.

I'm using a relatively stock install of Vlad's UI, with the only settings I've adjusted being some paths, preview settings, and enabling no-half VAE in the settings.

(Not higher than 0.4), otherwise the base image changes too much.

Even with the same seed and all settings the same, the result differs with high-res fix. [Feature Request]: Return highres fix specific resolution.
So if you were generating a 1k x 1k image, now you put 1k x 1k in the height/width.

However, my typical workflows use Highres fix and ADetailer, and for some reason this leads to slower generation times when using TensorRT.

Now we can choose the resolution of the first pass, and select how much the image will change with respect to the original: a value of 0 in Denoising strength will simply rescale the picture and lose quality, while values close to 1 produce an essentially new image.

What HighRes fix does is allow you to upscale that 512x512 latent noise, and this allows the larger image to retain the coherence of the image you would get if you stayed at that size. It renders the image in two steps instead of one. The img2img step will produce the result image.

I would recommend generating images at a 1:1 aspect ratio, and if you want good results for larger images, say at 1024x1024 resolution, try playing with the "Highres fix" option.

- Hires steps: if you set it to 0, it will use the steps of your original output as the actual number of steps.

It's the Mac UI that is broken.

Use 0.3 (see step 3). That'll get you 1024x1024; you can enlarge that more in Extras later, if you like.

HiRes fix generates the lower resolution first. Batch size does have an effect in every test I've carried out.

The SD 2.0 upscaler's x4-upscaler-ema.ckpt would be interesting if it could be selected for highres fix instead, but controlling the upscale yourself is always going to be best.

I tried so many combinations, but I couldn't solve the problem. I just started using Stable Diffusion, so my knowledge is limited.

There was 1.3 emphasis placed across large chunks of the prompt. The emphasis introduced noise that was then turned into texture by the highres fix.

Check your environment variables and see if the venv there was added directly to your path.

The solution is more pixels.

I've been testing it out, and at first it seemed to work quite well. It is very helpful for someone just starting out with AI like me.
All I can do right now is upscale. I'd also like to use the hires fix.

Then click on the option "just resize (latent upscale)".

Short version: basically, you can control your image composition, or even get results that are usually hard to get, by using a basic result and modifying the prompt mid-generation with the parameters you want, using [A : B : N]: change A to B at N steps.

Highres fix and face restoration turned off. Used this VAE. Used the SD upscale script for the first two images; upscaled the initial outputs with Topaz Gigapixel (standard model), then ran SD upscale with no upscaler selected, for 150 steps at 0.2 denoise. Any idea how to fix that?

Generally I'll only start using highres when I'm really pushing resolutions, either to the point that clones start showing up (multiple subjects in a single-subject composition, clearly repeated areas of landscape, etc.) or when plain upscaling falls short.

If I set the user .bat settings to no-half and full precision, the issue no longer occurs, but this results in far higher VRAM usage and far worse performance, so if possible I would prefer a fix or workaround that doesn't require it.

For first-pass width/height I tried everything from 768x512, 512x384, and 256x256 down to 128x128.

At 0.99 I get more interesting results. Euler/ancestral, for example, works best at around 0.3.

Then the high-res fix comes in and scales up.

Your problem is the highres fix and the settings you have it running on.

None of the current highres fixes will repair deformities when the image is 512 or lower; if they could do that, they would be applying it globally to all images.

You can use the XY plot script with CFG scale to see how it affects things.

The upscaler is just used to upscale the image.
I found that even with 1.5x, 2x, different hires steps, or different denoise, I ALWAYS get a CUDA out-of-memory error.

For SDXL models (specifically, Pony XL V6), the HighRes-Fix Script constantly distorts the image, even with the KSampler's denoise set low.

One way the eyes can be fixed in the initial generation is during your hires fix process, from 512x512 at 2x, let's say. Denoising strength 0.3.

Found a pretty good way to get good pics without highres fix: you generate a normal 512 image and send it to img2img.

Hi everyone. When I upscale an image (hires fix) with the latent upscaler and set a low noise value, the image I get is super blurry.

Try to set this around 10-20 instead.

Hires fix from 512x512 to 1028x1028. I just have no idea what those mean or how to use them well.

You can't make this with highres fix.

The problem is that if I set a high noise value, the highres fix is better than any upscaler, even if it modifies the original image; honestly, I prefer that.

512px is the lowest you can go, and it usually comes out blurry-ish.
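One constant behind the memory and speed complaints throughout this thread: diffusion cost grows with pixel count, so an upscale factor hits you quadratically. A trivial sketch of that arithmetic (a rough model only; attention layers can scale even worse than this):

```python
def relative_cost(scale):
    """Approximate cost of a hires pass relative to the base render."""
    return scale ** 2  # pixel count grows with the square of the scale factor

def megapixels(w, h):
    """Pixel count of a render in megapixels."""
    return w * h / 1e6

# A 2x hires pass diffuses 4x the pixels of the base render, which is why
# modest scales (1.5x to 2x) plus a separate upscaler are the usual advice.
print(relative_cost(2.0), megapixels(1920, 1080))
```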