I gave Google’s new Nano Banana Pro a try, and it immediately took my clothes off. I didn’t ask it to, but the AI model evidently decided my greetings card would look better with more skin.
Nano Banana Pro is, as the name suggests, aimed at professionals. Powered by Gemini 3, it’s effectively an upgrade of the company’s popular image generation and editing tool, which went viral in a social media trend that turned selfies into hyperrealistic 3D figurines. Google says it lets you create higher-quality images that you can print, render legible text onto pictures, and blend multiple images into a single composition. It’s also meant for “people who want to feel like professionals,” Naina Raisinghani, a product manager at Google DeepMind, told The Verge. That sounds good, because I am by no means a professional. For me, the results were glossy but goofy: they looked good, but they felt amateurish.
Using Nano Banana Pro is pretty simple: you go into the Gemini app, select “create images,” and toggle on the “thinking” mode. Just plug in your prompt (and an image, if you’re using one) and go. It’s also free, though there are limits, with quotas expanding for Google AI Plus, Pro, and Ultra subscribers.
Google makes some bold claims, promising “studio-quality designs,” “flawless text rendering,” and a host of nifty and creative edits. To test these, I uploaded a simple photo of myself near The Verge’s office in New York with the Brooklyn Bridge in the background. I asked Gemini to change the lighting from day to night, and it did a pretty good job. The result looks believable. It even handled details that often trip up image generators, like having cars go in the right direction. Adjusting the camera angle was equally easy: I asked Gemini to recreate the shot as if it were taken from a higher angle on the right, and it did.

Images: The Verge; The Verge / Google, Nano Banana Pro
Google also says Nano Banana Pro can create infographics and diagrams to help visualize real-time information like weather or sports. Being British, I asked about the weather for the next four days in Washington, DC, and New York City, where I currently am. Visually, the infographic would’ve been at home on a basic forecast site. The text and numbers appeared normal — a far cry from the garbled nonsense you often see in AI-generated images — and Gemini gave me a list of citations at the end that helped me confirm it was accurate.
The model stumbled a little on more complex tasks. I asked it to summarize a recent Verge story about how Europe is scaling back its AI and privacy laws in a comic book-style format. The images and text were indeed rendered flawlessly in a cartoonish font, but the comic didn’t summarize the story at all, offering a vague overview of the bloc’s AI Act instead. The issue may have been that I gave Gemini a link to the story rather than pasting the text in.

Image: The Verge / Google, Nano Banana Pro
When I pasted the text in, it gave me a passable comic-style summary. It communicated the gist of the actual story, though I don’t think I’d have understood it easily if I hadn’t written the source material myself. It also made up phrases that didn’t appear anywhere in my article.

Image: The Verge / Google
To really feel like a pro designer, I tried my hand at making greetings cards. Christmas is coming up, after all. Considering I only uploaded three selfies, Gemini did a frankly amazing job of creating three full-body versions of myself, each in a different outfit and sporting a different facial expression. It also created a realistic, snowy setting with Christmas trees, like I’d asked it to, and plastered “Merry Christmas!” across the top.
Gemini took liberties when I asked it to change the card’s snowy backdrop to a summery beach for an Australian-style holiday. Those liberties involved my deepfaked clothes: two of my clones were topless. It was weird. There were also some prominent AI-generated feet, and a smiley sandman (being built by my topless lookalike) replaced the snowman from the wintry scene. Other details gave the game away: the sandman was missing a shadow, unlike the other rendered objects in the picture, and the Christmas lights in the palm trees were magically glowing in the bright sun. I tested its precision editing skills by asking it to add some muscle to only one clone, which it did in seconds (if only it were that easy in the real world). Overall, the quality was superb, and the image would’ve been somewhat believable (abs aside) if you didn’t know there was a large tattoo missing from my chest.

It wasn’t all great, though. The model failed to preserve the exact text I’d asked it to keep on my card: instead of “Merry Christmas!” it opted for “Aussie Summer Christmas!” It also seems to struggle with animals: in every version of the card, my sister’s cat sits in exactly the same stilted pose as in the reference image I’d provided (he was given a whimsical Santa hat, though).
All in all, I was impressed. Nano Banana Pro is a clear upgrade on the basic model: I was able to ask for more precise edits, and it actually produces intelligible text, removing a massive roadblock that has kept generative AI tools like this from being usable in the real world. But, alas, these features were not enough to make me a good designer.