
Nano Banana 2: 10 Practical Prompting Techniques

Nano Banana 2 has become one of the most interesting image models of 2026. The output quality sits in Midjourney’s tier, inference is much faster, and the editing tools (regional prompts, inpainting, character preserve) are the best on the market for iterating on near-finished images.

Most of the early prompting advice for Nano Banana came from the original release and doesn’t account for how the v2 model behaves differently. Below are ten techniques that work specifically with Nano Banana 2, drawn from the workflows of creators who use it in production.

1. Lead with the subject, not the style

Nano Banana 2 weights the start of the prompt heavily. Putting your subject first (“a woman in a leather jacket walking down a rain-slick street”) produces sharper results than starting with style descriptors. Style words still work, but they belong toward the end.

2. Use comma-separated style stacks at the end

After the subject and action, add three to five comma-separated style descriptors. Two-word phrases work better than single words. “Soft golden light, shallow depth of field, 35mm film grain” produces a more coherent look than “cinematic, moody, professional.”
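
As a concrete illustration of techniques 1 and 2 together, here is a tiny Python helper that assembles a prompt in that order. It is plain string formatting written for this article, not part of any Nano Banana tooling:

```python
# Illustrative helper only: subject and action lead, the style stack closes.
def build_prompt(subject: str, action: str, styles: list[str]) -> str:
    # Aim for three to five two-word style descriptors at the end.
    return f"{subject} {action}, " + ", ".join(styles)

prompt = build_prompt(
    "a woman in a leather jacket",
    "walking down a rain-slick street",
    ["soft golden light", "shallow depth of field", "35mm film grain"],
)
print(prompt)
# a woman in a leather jacket walking down a rain-slick street,
# soft golden light, shallow depth of field, 35mm film grain
```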

3. Trust the literal interpretation

Where Midjourney generalizes loosely, Nano Banana 2 interprets prompts more literally. If you write “she’s holding a coffee cup,” she will be holding a coffee cup. Use this. Specific objects, body language, and compositional details all land more reliably than they would in Midjourney.

4. Use the character preserve flag for serial work

Nano Banana 2’s character preserve mode is the differentiator. When you commit a character image and use the preserve flag on subsequent generations, the model holds the face within tolerance across dozens of shots. The full setup is documented in the Nano Banana 2 Prompting Guide.
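
If you script your generations, the serial workflow looks roughly like the sketch below. The field names (character_reference, preserve_character) are placeholders, not the documented API; the real setup is in the Prompting Guide:

```python
# Hypothetical sketch: "character_reference" and "preserve_character" are
# placeholder field names, not the documented Nano Banana 2 API.
reference_image = "characters/heroine_committed.png"  # the committed character

shot_prompts = [
    "she unlocks a rusted service door at night, soft golden light",
    "she studies a map under a streetlamp, shallow depth of field",
    "she runs across a rain-slick intersection, 35mm film grain",
]

requests = [
    {
        "prompt": p,
        "character_reference": reference_image,  # assumed field name
        "preserve_character": True,              # assumed flag name
    }
    for p in shot_prompts
]

for r in requests:
    print(r)  # in production, each request would go to the model in turn
```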

5. Regional prompts for compositional control

When you need different content in different parts of an image (a character in the foreground and a specific scene in the background), use regional prompts. Divide the canvas into regions, write a prompt for each, and Nano Banana 2 will compose the result coherently. This works much better than describing the whole image in one paragraph.
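
A regional request might be structured like the sketch below. The normalized boxes and the regions field are assumptions about the shape of the call, not the official format; the point is one prompt per region plus a short global prompt for cohesion:

```python
# Hypothetical sketch: the normalized (x, y, width, height) boxes and the
# "regions" field are assumptions, not the official Nano Banana 2 format.
regions = [
    {
        "box": (0.0, 0.45, 0.5, 0.55),  # lower-left: the character
        "prompt": "a woman in a leather jacket, facing the camera",
    },
    {
        "box": (0.0, 0.0, 1.0, 0.5),    # upper half: the background scene
        "prompt": "a neon-lit night market, rain-slick pavement, crowds",
    },
]

request = {
    "prompt": "night street scene",  # a short global prompt helps cohesion
    "regions": regions,              # assumed field name
}
print(request)
```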

6. Use inpainting for almost-perfect generations

The biggest workflow time-saver is the inpaint pass. When a generation is 90% right but has one wrong element (an extra finger, a wrong color, an off facial expression), don’t regenerate from scratch. Mask the wrong region, write a small prompt for what should be there instead, and let the inpainter fix just that part. Most production creators run an inpaint pass on three or four images per project.
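
Here is what an inpaint pass can look like when scripted. Building the mask with Pillow is ordinary, runnable Python; the request fields at the end are placeholders for whatever the real inpainting endpoint expects:

```python
# The mask-building below is standard Pillow code. The request fields at the
# end ("image", "mask", "prompt") are placeholders, not a documented schema.
from PIL import Image, ImageDraw

base = Image.open("renders/shot_014.png")

# White = repaint this region, black = leave untouched.
mask = Image.new("L", base.size, 0)
draw = ImageDraw.Draw(mask)
draw.rectangle((820, 540, 980, 700), fill=255)  # box around the bad hand
mask.save("renders/shot_014_mask.png")

inpaint_request = {
    "image": "renders/shot_014.png",
    "mask": "renders/shot_014_mask.png",
    "prompt": "a relaxed hand resting on the table, five fingers",
}
print(inpaint_request)
```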

7. Outpaint to fix framing

Nano Banana 2’s outpaint mode lets you extend an image beyond its original frame. This is the right tool when a generation looks great but the composition is too tight, or when you need a wider or taller version for a different aspect ratio. Outpainted regions blend more naturally than they used to.
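
A common case is widening a square generation to 16:9. In the sketch below the arithmetic is the real part; the request fields are placeholders for the actual outpaint interface:

```python
# The width calculation is ordinary arithmetic; "direction" and "pixels" are
# assumed field names, not the actual Nano Banana 2 outpaint parameters.
src_w, src_h = 1024, 1024             # square render that feels too tight
target_w = round(src_h * 16 / 9)      # width needed for 16:9 at this height
extra = target_w - src_w              # total pixels to add, split across both sides

outpaint_request = {
    "image": "renders/hero_square.png",
    "direction": "horizontal",        # assumed field: extend left and right
    "pixels": extra,
    "prompt": "the street continues into a fog-lit intersection",
}
print(outpaint_request)
```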

8. Negative prompts work, but use them sparingly

Negative prompts (telling the model what NOT to include) work in Nano Banana 2 but are less load-bearing than in Stable Diffusion. Reserve them for things you keep getting and don’t want: extra fingers, wrong hair colors, watermarks. Don’t list 30 things; list three.
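
In request form that discipline is just one short field. negative_prompt is the conventional name in diffusion tooling and an assumption here, not a confirmed Nano Banana 2 parameter:

```python
# "negative_prompt" is the usual field name in diffusion tooling; treat it as
# an assumption for Nano Banana 2 rather than a documented parameter.
request = {
    "prompt": "a chef plating dessert in a busy kitchen, soft golden light",
    "negative_prompt": "extra fingers, watermark, blonde hair",  # three items, not thirty
}
print(request)
```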

9. Aspect ratio matters more than you think

Nano Banana 2 produces noticeably different results at different aspect ratios. The same prompt at 16:9 versus 9:16 versus 1:1 will yield different compositions even with character preserve on. If you have a target aspect ratio in mind, generate at that ratio rather than generating square and cropping.

10. Chain generations for refinement

The model handles iteration well. If a generation is close, take it back as a reference image, write a refinement prompt (“same subject, but with X changed”), and run again. Three or four refinement passes typically converge on something better than any single first generation. This is faster than starting over and writing a longer prompt.
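
Scripted, a refinement chain is a short loop. The generate() function below is a stand-in for the real client call (its name and reference_image parameter are placeholders); the loop structure is the technique:

```python
# generate() is a stand-in for the real client call; its name and the
# "reference_image" parameter are placeholders. The loop is the technique.
def generate(prompt: str, reference_image: str | None = None) -> str:
    # Placeholder body so the sketch runs end to end; swap in the real call.
    print(f"generating: {prompt!r} (reference: {reference_image})")
    return f"renders/{abs(hash(prompt)) % 10_000:04d}.png"

image = generate("a lighthouse keeper repairing the lamp at dusk, 35mm film grain")

refinements = [
    "same subject, but the lamp glass is cracked",
    "same subject, but shot from a lower angle",
    "same subject, but heavier fog outside the window",
]

for change in refinements:            # three or four passes usually converge
    image = generate(change, reference_image=image)
```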

What changed from Nano Banana v1

A few things to recalibrate if you used the original Nano Banana:

  • Hands and limbs are dramatically better. You no longer need to pad prompts with “perfect hands” or lean on hand-related negative prompts. Just describe the action.
  • Text rendering is usable. Short text inside images works much more often. For longer text (paragraphs, posters), Ideogram is still better.
  • Character preserve is new. This didn’t exist in v1; if you skipped Nano Banana before because of character drift, this is the feature to revisit.
  • The model is faster. Generation time has dropped meaningfully, which changes the workflow because iteration is now cheap.

Where Nano Banana 2 still struggles

Two persistent weaknesses to plan around:

  • Very stylized aesthetics. Midjourney still wins for the cinematic AI poster look; Nano Banana tends toward photorealism.
  • Long-form text in images. Use Ideogram or Flux 2 if your image needs a paragraph of readable text.

For everything else (character work, fast iteration, polished output, editing), Nano Banana 2 has earned its place in the 2026 creator stack. The teams who use it well are the ones whose workflows exploit the editing tools rather than treating it as a Midjourney clone with worse aesthetics.

If you’re starting fresh, give the model an afternoon of structured experimentation with the techniques above. The learning curve is short, and the speed you unlock once you internalize the patterns is real.