01 Jan 2026, 08:07 AM
Magic Prompt Enhancer LoRA
This LoRA has been trained to enhance image generation prompts by adding more detail to them.
Example:
Input Prompt: A white girl in a snow background
Enhanced Prompt: A white girl with long, curly hair is standing in a snow-covered field. She is wearing a red coat, a white scarf, and a pair of black boots. The background is filled with snow-covered trees and a few buildings. The sky is overcast, with a few clouds. The girl is looking up at the sky, with a peaceful expression on her face. The image has a soft, dreamy quality to it.
Base Model: Qwen 2.5 3B
How to use:
1. Install pip dependencies
Code:
pip install -U transformers peft
2. Download & unzip the LoRA file
3. Set the base prompt and LoRA path, then run the Python code
Code:
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
from peft import PeftModel

set_seed(12345)  # Optional: makes generation reproducible

prompt = "A white girl in a snow background"
model_name = "Qwen/Qwen2.5-3B"
lora_path = "/home/user/Downloads/prompt_enhancer_qwen2.5_3b"

# Load the base model and tokenizer, then attach the LoRA adapter
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = PeftModel.from_pretrained(model, lora_path)
text = """### Instruction:
Enhance this stable diffusion prompt. Expand on the given text and add more details to it.
### Input:
{}
### Response:
""".format(prompt)
print(text)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate the enhanced prompt (up to 300 new tokens)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=300
)
# Strip the input prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("Enhanced prompt:", response)
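If you plan to enhance many prompts, the Alpaca-style wrapper used above can be factored into a small helper. A minimal sketch, assuming the template string from the code above (the helper name is my own, not part of the LoRA or any library):

```python
# The instruction template, copied verbatim from the example code above.
TEMPLATE = """### Instruction:
Enhance this stable diffusion prompt. Expand on the given text and add more details to it.
### Input:
{}
### Response:
"""

def build_enhancer_prompt(user_prompt: str) -> str:
    """Wrap a raw image prompt in the Alpaca-style instruction template."""
    return TEMPLATE.format(user_prompt.strip())

text = build_enhancer_prompt("A white girl in a snow background")
```

The returned string can be passed to the tokenizer exactly as in the script above, so only the raw prompt varies between calls.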
