ParchArt XL remains remarkably faithful to the original SDXL model. The primary motivation for this model is the substrate: parchment. Most AI models, when asked for parchment, generate something that usually looks like toothless paper stained with tea. I wanted something that makes me want to reach out and touch it. The annotation and illustration styles are secondary to that.
V1.2 is trained on a larger base dataset than 1.0, which has always been a purely synthetic dataset, starting with some old Midjourney v3 images. This version also merges in (at a small percentage) a model trained on real-world parchment illustrations and illuminated manuscripts, which helps Eldritch Parchment achieve a bit more color saturation (when asked for), better coherency in drawing, and legible rendering of explicitly prompted text (i.e., not a block of body text, but a bold title).
I have toyed with this model more than any other (even if my oil painting model is a little dearer to me), and this is definitely the best it's been. It follows prompts extremely well. It will give you loads of annotations and scrawled text when you ask for it, but mostly only if you ask: prompt 'annotated' or otherwise describe the text blocks you want if you are looking for that. And it can shift between being more illustration-focused or more parchment-texture-focused very easily, based on your prompt.
I won't say I couldn't be happier; I can still imagine it being a little better. But it's pretty darn good!
Prompts and generation data for showcase images can be seen over at the original upload site.