Google is introducing a new feature in its Gemini app that makes AI-generated images more personal. Powered by “Personal Intelligence” and Nano Banana 2, it will let users create images using their Google Photos library without uploading pictures or writing long prompts. The update will generate more relevant results by drawing on familiar faces, pets and moments, while also giving users control over how images turn out, with options to refine results and try different variations.
What is Personal Intelligence?
According to Google, Personal Intelligence is a feature that allows Gemini to understand your digital activity across different Google apps and use that information to provide tailored responses. It connects apps like Gmail, Google Photos, YouTube and Search in one place, but only if the user chooses to enable it.
Personal Intelligence with Nano Banana
Personal Intelligence helps Gemini understand your preferences from the start. By working with Nano Banana 2, it can automatically fill in details and create images based on what matters to you. Since it is built into the Gemini app, there is no extra setup if your Google apps are already connected.
This makes things simpler. Instead of writing long, detailed prompts, users can use short ones like “Design my dream house” or “Create a picture of my desert island essentials,” and Gemini will generate images that match their tastes and lifestyle.
Creating images with personal photos
According to Google, Gemini can use images from a user’s Google Photos library to generate customised visuals by recognising people, pets and moments already organised and labelled. This context helps create more personal results, allowing users to include themselves, family and friends in different styles, whether they want realistic images or more imaginative creations.
For example, users can ask Gemini to create an image of themselves and their family in a specific style, such as claymation or a watercolour painting. Since the app already understands labels and faces from Google Photos, it can generate results that feel more connected to real-life moments. This removes the need to search, download and upload images separately.
Google said users will still have control over how their images turn out. If the result isn’t accurate, they can refine it by giving feedback or choosing a different reference image from their library. They can also tap the ‘+’ icon to select another photo from Google Photos and try a new perspective. There is also an option to check which photo was used to generate the image through a “Sources” feature. The company said that this makes it easier to adjust details and try different versions until the output matches what the user wants.
Privacy and availability
Google has highlighted that privacy remains unchanged with this feature. The company said it uses only limited data, such as specific prompts in Gemini and the model’s responses, to improve performance over time. It also added that connecting Google apps to Gemini is optional and can be managed or changed anytime through settings.
The personalised image feature is currently rolling out to select Gemini subscribers in the US, including Google AI Plus, Pro and Ultra users. Google plans to expand availability to more users and platforms, including Chrome desktop, in the future.