The Editor and Omni Reference System
Vary Region (Inpainting)
Vary Region lets you select a portion of a generated image and regenerate just that area while keeping the rest intact.
- How to access: Click "Vary Region" on any upscaled image.
- Selection tools: Freeform lasso and rectangular selection.
- Prompt override: You can write a new prompt for just the selected region.
- Best for: Fixing hands, changing clothing, swapping backgrounds, removing unwanted elements.
Tips:
- Select slightly larger than the area you want to change — this gives the model context for blending.
- Keep your region prompt simple and specific. Don't re-describe the entire scene.
- Multiple small region edits often work better than one large one.
Pan
Pan extends your image in any direction (left, right, up, down) while maintaining visual consistency.
- How to access: Use the arrow buttons on any upscaled image.
- Use cases: Expanding composition, revealing more environment, adjusting framing.
- Prompt support: You can add a prompt to guide what appears in the new area.
Zoom Out
Zoom Out reveals more of the scene around your existing image, as if pulling the camera back.
- Options: 1.5x and 2x zoom levels, plus custom zoom.
- Best for: Creating wider compositions from tight crops, adding environmental context.
- Custom zoom: Lets you specify an exact zoom level and add a guiding prompt.
Retexture
Retexture keeps the structural composition of your image but regenerates the surface appearance.
- How it works: Preserves shapes, positions, and composition while changing materials, colors, and textures.
- Best for: Exploring color palettes, changing seasons, material studies, style variations.
- Prompt required: Describe the new texture/style you want applied to the existing composition.
Combined Editor Workflow
The real power comes from combining these tools in sequence:
- Generate your base image.
- Zoom Out if you need more environmental context.
- Pan to adjust framing and reveal relevant areas.
- Vary Region to fix specific problem areas (hands, faces, details).
- Retexture to explore alternative color/material treatments.
This iterative approach lets you sculpt a single image through multiple refinement passes rather than relying on re-rolling for perfection.
Omni Reference: Step-by-Step
Omni Reference (--oref) is V7's unified reference system. It intelligently interprets your reference image and applies relevant attributes — style, subject, composition, or all three.
7-Step Process
- Prepare your reference image — Choose a clear, high-quality image that represents what you want to transfer. Upload it to Discord or use a URL.
- Write your scene prompt — Describe the new image you want to create. Be specific about what should change from the reference.
- Add the oref parameter — Append --oref [image_url] to your prompt.
- Set the weight — Add --ow [value] to control how strongly the reference influences the output.
- Generate and evaluate — Run the prompt and assess how well the reference was interpreted.
- Adjust weight — If the reference is too dominant, lower --ow. If it's being ignored, raise it.
- Iterate — Refine your text prompt and weight until the balance between your description and the reference is right.
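The parameter steps above can be sketched as a small helper that assembles the final prompt string. This is a hypothetical convenience function, not part of any Midjourney API; the URL and scene text are placeholders.

```python
def build_oref_prompt(scene: str, ref_url: str, weight: int = 100) -> str:
    """Assemble a Midjourney prompt using Omni Reference.

    scene   -- your text description of the new image
    ref_url -- URL of the reference image (maps to --oref)
    weight  -- reference strength (maps to --ow, 1-1000, default 100)
    """
    if not 1 <= weight <= 1000:
        raise ValueError("--ow must be between 1 and 1000")
    return f"{scene} --oref {ref_url} --ow {weight}"

# Example: a balanced starting point at the default weight.
prompt = build_oref_prompt(
    "a lighthouse at dawn, oil painting style",
    "https://example.com/reference.png",
)
# prompt == "a lighthouse at dawn, oil painting style --oref https://example.com/reference.png --ow 100"
```

Keeping the assembly in one place makes it easy to sweep weights later without retyping the whole prompt.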
Omni Reference Weight Guide
| --ow Range | Behavior | When to Use |
|---|---|---|
| 1–25 | Subtle influence. Reference acts as a gentle suggestion. | When you want a hint of the reference style without overpowering your prompt |
| 100 (default) | Balanced. Reference and text prompt share equal influence. | General purpose; good starting point |
| 200–400 | Strong reference adherence. Output closely matches reference aesthetics. | Style transfer, maintaining brand consistency, matching a specific look |
| 400–1000 | Near-literal reproduction. Text prompt becomes secondary. | Character consistency, recreating a specific image in a new context |
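The weight guide above can be encoded as a simple lookup that suggests a starting --ow value for a given goal. The category names here are illustrative shorthand for the table rows, not official Midjourney terms.

```python
# Illustrative starting weights, one per row of the guide above.
OW_STARTING_POINTS = {
    "subtle_hint": 25,             # gentle suggestion of the reference
    "balanced": 100,               # default; equal influence
    "style_transfer": 300,         # strong adherence to reference aesthetics
    "character_consistency": 600,  # near-literal reproduction
}

def suggest_ow(goal: str) -> int:
    """Return a suggested --ow starting value for a named goal.

    Unknown goals fall back to the balanced default of 100.
    """
    return OW_STARTING_POINTS.get(goal, OW_STARTING_POINTS["balanced"])

suggest_ow("style_transfer")  # -> 300
suggest_ow("something_else")  # -> 100
```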
Best Practices
- Start at default weight (100) and adjust — Don't guess. Start at 100, evaluate, then move up or down in increments of 50-100.
- Use high-quality, unambiguous references — The clearer your reference, the better the model interprets it. Avoid busy, multi-subject reference images.
- Your text prompt still matters — Even at high weights, the text prompt guides what the model does with the reference. Don't leave it vague.
- Match aspect ratios — If your reference is 3:2, generating at 3:2 produces more consistent results than generating at 1:1.
- Combine with personalization — Oref + your --p profile can produce results that feel both referenced AND personal to your aesthetic.
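The "start at 100 and adjust in increments of 50–100" practice can be sketched as a tiny stepping function. This is a sketch of the tuning loop, not anything Midjourney provides.

```python
def next_ow(current: int, too_strong: bool, step: int = 50) -> int:
    """Suggest the next --ow value during iterative tuning.

    Step down if the reference dominates the output, step up if the
    reference is being ignored; clamp to the valid 1-1000 range.
    """
    candidate = current - step if too_strong else current + step
    return max(1, min(1000, candidate))

# Starting from the default and finding the reference too dominant:
next_ow(100, too_strong=True)    # -> 50
# Already at the maximum, so stepping up has no effect:
next_ow(1000, too_strong=False)  # -> 1000
```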
Limitations
- One reference per prompt — You can only use one --oref image at a time. For multi-reference workflows, use --sref for style and --oref for subject separately.
- Not compatible with Vary/Pan/Zoom — Oref cannot be used in combination with editor tools. Generate with oref first, then edit.
- Not compatible with --draft mode — Omni Reference requires full rendering. Draft mode skips the reference processing.
- Not compatible with --q 4 — Ultra-quality mode and oref conflict. Use --q 2 or default.
- ~2x GPU cost — Using oref approximately doubles the GPU time per generation compared to text-only prompts.
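The compatibility rules above lend themselves to a quick pre-flight check before submitting a prompt. This is a hypothetical validator with deliberately simplified substring matching; it is not an official tool.

```python
def check_oref_compatibility(prompt: str) -> list[str]:
    """Return a list of problems found in an --oref prompt.

    Encodes the limitations listed above: one --oref per prompt,
    no --draft mode, and no --q 4.
    """
    problems = []
    if "--oref" not in prompt:
        return problems  # no Omni Reference: nothing to check
    if prompt.count("--oref") > 1:
        problems.append("only one --oref image is allowed per prompt")
    if "--draft" in prompt:
        problems.append("--oref is not compatible with --draft mode")
    if "--q 4" in prompt:
        problems.append("--oref conflicts with --q 4; use --q 2 or default")
    return problems

# Example: this prompt mixes oref with draft mode.
issues = check_oref_compatibility("castle --oref https://x/ref.png --draft")
# issues == ["--oref is not compatible with --draft mode"]
```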
Exercise
Editor Mastery Challenge
- Generate a portrait image you're 70% happy with.
- Use Vary Region to fix one specific issue (hands, clothing detail, background element).
- Use Pan to extend the image in one direction, adding environmental context.
- Use Zoom Out 1.5x to reveal more of the scene.
- Now find a reference image with a completely different mood/color palette. Use --oref with that reference and your original prompt to create a stylistic variation.
- Compare your original, edited, and oref versions side by side. Which workflow produced the best result?