Roman Peschke @roman.peschke
Free Guide

How to Try UNI-1 by Luma Labs

Luma's new reasoning-based image model. Free to try, no API key needed.

What you need

UNI-1 is a multimodal reasoning model that generates images. Unlike speed-first models like Nano Banana 2, UNI-1 is built around what Luma calls "Unified Intelligence." It understands intention and responds to direction. Feed it reference images and it maintains the subject. Give it a sketch and it turns it into a finished image. It ranks #1 in human preference (Elo score) for overall quality, style & editing, and reference-based generation.

Try it
1. Open the Luma app

Go to app.lumalabs.ai and create a free account. No API key needed.

2. Generate from text

Type a prompt. UNI-1 handles spatial reasoning and common-sense scene completion out of the box. It's culture-aware too: manga, cinematic, and meme aesthetics; multilingual text rendering; even Morse code.

3. Use reference images

This is where UNI-1 stands out. Upload one or more reference images and it maintains the subject across generations. Luma calls this "source-grounded controls." You can combine multiple people into a scene, transfer a style from one image to another, or keep a character consistent across outputs.

4. Try sketch-to-image

Upload a rough sketch. UNI-1 turns it into a finished image while preserving your composition and layout. It works with pencil sketches, wireframes, and rough drawings.

How it compares to Nano Banana 2

Speed: Nano Banana 2 is faster. It's built on Google's Flash architecture for rapid iteration. UNI-1 is slower because it reasons about the image before generating.

Reference images: UNI-1 wins here. It can take multiple reference photos and maintain identity, style, and composition. Nano Banana 2 handles subject consistency (up to 5 characters), but UNI-1's reference-guided controls are more flexible.

Text rendering: Nano Banana 2 is better at precise text in images and even does in-image translation.

Ecosystem: Nano Banana 2 is available in the Gemini app, Google Search, AI Studio, Vertex AI, and Google Ads. UNI-1 is app-only right now.

API: Nano Banana 2 has full API access through AI Studio and Vertex AI. UNI-1's API is waitlist-only.

Before you start

- Free to try in the app. No API key or payment needed to start generating.
- API is waitlist-only. You can't integrate UNI-1 into automated workflows yet. Sign up for access.
- API pricing when it launches: ~$0.09 per text-to-image at 2048px, ~$0.11 for multi-reference with 8 images. Billing is per token; each image counts as 2,000 billing tokens.

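To get a feel for the quoted rates, here's a rough cost sketch. The per-token price is derived from the numbers above (~$0.09 per image at 2,000 billing tokens, so about $0.000045 per token); the exact billing formula, and whether multi-reference jobs bill a flat rate, are assumptions until Luma publishes official API pricing docs:

```python
# Rough UNI-1 API cost estimator based on the pricing quoted above.
# Assumptions: each image bills a flat 2,000 tokens at the rate implied
# by ~$0.09 per 2048px text-to-image, and multi-reference jobs (e.g.
# 8 reference images) bill a flat ~$0.11 per image.
TOKENS_PER_IMAGE = 2_000
PRICE_PER_TOKEN = 0.09 / TOKENS_PER_IMAGE  # ~= $0.000045 per token

def estimate_cost(num_images: int, multi_reference: bool = False) -> float:
    """Estimate total USD cost for a batch of generations."""
    per_image = 0.11 if multi_reference else TOKENS_PER_IMAGE * PRICE_PER_TOKEN
    return num_images * per_image

print(f"100 text-to-image generations: ${estimate_cost(100):.2f}")
print(f"100 multi-reference generations: ${estimate_cost(100, multi_reference=True):.2f}")
```

At these rates, 100 plain generations come to about $9, and 100 multi-reference generations to about $11.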
Start here

Open app.lumalabs.ai, type a prompt, generate. Then try uploading a reference image and generating with it. That's where UNI-1 clicks.

Sources

UNI-1 by Luma Labs
Multimodal reasoning model for image generation. Intelligent scene completion, reference-guided generation, culture-aware output. Ranked #1 in human preference Elo for overall quality.
lumalabs.ai/uni-1

Luma app
Free web app to try UNI-1. Text-to-image, reference-guided generation, sketch-to-image. No API key needed.
app.lumalabs.ai

FAQ

Is UNI-1 free?
Free to try in the Luma app. API pricing when it launches is per-token: ~$0.09 per text-to-image at 2048px, up to ~$0.11 for multi-reference with 8 images.
Can I use UNI-1 via API?
Not yet. The API is waitlist-only. Sign up at lumalabs.ai/uni-1.
What's the difference between UNI-1 and Nano Banana 2?
Nano Banana 2 (Google) is faster with full API access and deep Google ecosystem integration. UNI-1 (Luma Labs) is more controllable with reference-guided generation and spatial reasoning, but slower and API is waitlist-only. Use Nano Banana 2 for speed and text rendering. Use UNI-1 for reference-based creative work.