This Came to Me in a Dream


itCameToMeInADream_Mid is a model merge based on Midkemia, which is actually pretty cool. Okay, that's the basics.

Rapid fire Q&A time:

  • Q: What's with the '(m-da s-tarou:0)' stuff?

  • A: I dunno. I saw phoenician use it on the Midkemia examples page and that stuff looked good so I figured I'd try using it too. I have no idea if it does anything. Might be total placebo. Images look okay to me.

  • Q: Schizo CFG?

  • A: My life has not been the same since I started using perturbed attention guidance... As it turns out, the CFG scale matters a lot once you start using loras: higher CFG amplifies a lora's effect quite significantly. And since we have CFG++ samplers now, we can really experiment with various CFG strengths, because those samplers are semi-resistant to CFG artifacting (semi — they will still **** up if you really push the CFG). So now we can also slap the model around with CFG if it isn't cooperating.
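For reference, a minimal sketch of what the CFG scale actually does at each step (plain Python on toy numbers; names are mine, and real samplers do this on latent tensors, but the formula is the standard classifier-free guidance one):

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: push the conditioned noise prediction
    away from the unconditioned one by `scale`. At scale 1 you just get
    the conditioned prediction; at 7 the difference is amplified 7x,
    which is also why high CFG amplifies whatever a lora is doing."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# scale 1.0 leaves the conditioned prediction untouched;
# scale 7.0 overshoots it hard.
print(cfg_combine([0.0, 0.5], [1.0, 0.5], 1.0))  # [1.0, 0.5]
print(cfg_combine([0.0, 0.5], [1.0, 0.5], 7.0))  # [7.0, 0.5]
```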

  • Q: Some of the images aren't replicating.

  • A: I sometimes override the CFG scale for the upscale pass, so the mismatch may be the hires CFG. If the CFG is set to 7 and you're getting garbage, try pulling it down to 4 or 2. If that doesn't work, the problem might be on your side. Also, obviously, you will need all the same extensions/nodes/etc. too.

  • Q: Schizo steps?

  • A: Okay, so because basically all diffusion models are highly trained denoisers, each 'step' (denoising pass) chips away at more of the noise. Sometimes we do not want all the noise chipped away, because we want to do something fucky with the image. In the example images, we combine high CFG with the lowlight lora and then cut the step count off early, so the rest of the picture never materializes and it stays dark as hell. This can create a hella ton of artifacts, but ideally a bunch of upscaling passes will clear them up. Maybe. Hopefully. Probably not.
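As a toy illustration of the early cutoff (assuming a simple linear noise schedule here, which real schedulers are not — but the idea carries over):

```python
def remaining_noise(total_steps, steps_run):
    """Fraction of noise left if sampling is cut off early, under a toy
    linear schedule. Fewer steps run = more residual noise = a murkier,
    underdeveloped image, which is the effect being exploited above."""
    return max(0.0, 1.0 - steps_run / total_steps)

print(remaining_noise(18, 18))  # 0.0 -> fully resolved image
print(remaining_noise(18, 12))  # ~0.33 -> a third of the noise left in
```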

  • Q: Schedulers?

  • A: Just use Align Your Steps for everything. If Nvidia says it's good enough for them, it's good enough for you. Praise Jensen. (Exponential occasionally seems better for Euler sampling methods, but it wasn't consistent in my experience. If you want to experiment and share your results, be my guest.)
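For the curious, the Exponential scheduler mentioned above is just log-spaced noise levels. A sketch, with an assumed SDXL-ish sigma range (the exact min/max values here are illustrative, not pulled from any particular implementation):

```python
def exponential_sigmas(n, sigma_min=0.03, sigma_max=14.6):
    """Exponential schedule: n noise levels, log-spaced from sigma_max
    down to sigma_min. Align Your Steps instead uses a per-model
    optimized list of levels, which is part of why it often wins."""
    ratio = sigma_min / sigma_max
    return [sigma_max * ratio ** (i / (n - 1)) for i in range(n)]

sigs = exponential_sigmas(18)
# starts at the max noise level and decays smoothly to the min
print(round(sigs[0], 2), round(sigs[-1], 2))  # 14.6 0.03
```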

  • Q: What's with the style variance with the same prompt?

  • A: Hires samplers. Euler Neg Dy is really weird about making things all smooth. It's pretty good for fixing hand errors if you don't mind it smoothing everything out. CFG++ tends to push the natural qualities of the model even further. Etc, etc. The checkpoint is the same for both, but samplers really matter. My rule of thumb for this one: go Euler Neg Dy if you want to fix details and get a smoothish look, CFG++ if you want it to look rougher and more sketchlike.

  • Q: Quality tags?

  • A: Most of the time I just stick to score_9. If you're doing something with backgrounds, or you want a more 2.5d-ish look, run the full pony quality tag string. I've never seen anything on Midkemia use the rating tags, so I have no idea if they do anything. I'm not a believer in quality tags in the negatives anymore, because they seem to be based more on personal preference than any objective quality score. source_pony in the negatives seems to help make the image more coherent. I don't like excluding any other sources; source_anime in the negatives seems to make every gen look samey. But do whatever you feel like.
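The "full pony quality tag string" is the standard Pony Diffusion score-tag prefix. A trivial helper to toggle between it and plain score_9 (the function name is mine; the tag string is the commonly documented one):

```python
# Standard Pony Diffusion score-tag prefix (the "full string" above).
PONY_QUALITY = "score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up"

def build_prompt(subject, full_quality=False):
    """Prefix the subject with either the short or the full quality string."""
    quality = PONY_QUALITY if full_quality else "score_9"
    return f"{quality}, {subject}"

print(build_prompt("scenic castle, sunset", full_quality=True))
```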

  • Q: Any other advice?

  • A: 18 seems to be the magic step count for this model. Well, 17, but I am racist against prime numbers so **** 17. DPM++ SDE Heun seems to make the backgrounds and the overall image more coherent, but semi-normalizes the whole thing; that's high-accuracy samplers for you. Euler samplers pop off more often but are more hit-and-miss. Running more steps is not necessarily better: generally speaking, after you hit convergence on certain samplers (and by that I mean the ones that can converge), extra steps only smooth out the details from there. If you want to go for that approach, 28 is a better step count. Euler never converges, so be prepared to **** with step counts. Treat the upscaling pass and the initial pass as separate; sometimes things that look extremely bad become very good in the upscaling pass, because the composition was good even though the details sucked. Weird red or blue blotches are a sign that you haven't sampled sufficiently. Hopefully an upscaling pass can fix the colors; otherwise you might want to consider changing seeds or step count.

  • Q: Recommended loras?

  • A: Lowlight is good. I love noise offset and I think it will improve almost any gen, but be aware that it will **** with colors; Kazuradrop's kimono changes from green to white, etc. Some style loras work really nicely. I like Neisen. Character loras are great; this model is essentially made for maximum compatibility with neclordx's character loras, so that's a given. I will link Lowlight and my favorite Neisen lora in the suggested resources. Otherwise, just use the civitai linked-resources page on the example images and find them yourself.
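Why lora strength interacts with everything else: a lora just adds a scaled low-rank delta to the base weights, W' = W + alpha·(up @ down). A plain-Python toy version on tiny matrices (function and variable names are mine, purely for illustration):

```python
def matmul(A, B):
    """Naive matrix multiply for small nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, down, up, alpha=1.0):
    """Add the low-rank lora delta: W' = W + alpha * (up @ down).
    `alpha` is the lora strength slider -- 0.0 disables it entirely."""
    delta = matmul(up, down)
    return [[w + alpha * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # base 2x2 weight
down = [[2.0, 0.0]]            # rank-1 "down" projection (1x2)
up = [[1.0], [1.0]]            # rank-1 "up" projection (2x1)
print(apply_lora(W, down, up, alpha=1.0))  # [[3.0, 0.0], [2.0, 1.0]]
```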

  • Q: Last words?

  • A: This website is unoptimized to shit, and just opening it in a browser eats up so much RAM. There are very nice QOL features like the image gallery, but just opening a website should not chug my browser, wtf. It's still better than certain other websites that don't even let you download models, though. Alright, that's my unrelated rant over. Download and have fun.

Version Detail

Pony

Project Permissions

Model reprinted from : https://civitai.com/models/648684/this-came-to-me-in-a-dream?modelVersionId=858297

Reprinted models are for communication and learning purposes only, not for commercial use. Original authors can contact us to transfer the models through our Discord channel --- #claim-models.
