Adobe has integrated generative AI capabilities from its Firefly engine into Photoshop as a new tool: Generative Fill. Generative Fill is a new approach to adding, extending or removing content from images rapidly and non-destructively, using simple text prompts. This beta release of Photoshop is Adobe’s first Creative Cloud application to deeply integrate Firefly. The company says more AI-based releases are planned that will substantially change workflows for Creative Cloud, Document Cloud, Experience Cloud and Adobe Express users.
Over the past ten years, Adobe has been developing and releasing intelligent capabilities through its Sensei engine, adding features such as Neural Filters in Photoshop, Content-Aware Fill in After Effects, Customer AI in Adobe Experience Platform and Liquid Mode in Acrobat.
Flying with Firefly
The Firefly integration is somewhat different in that it changes the process of creating content itself, not just working with it. When Adobe launched Firefly in April 2023 as a family of creative generative AI models, it initially focussed on generating images and text effects. Since then, Firefly has been expanded to support vector recolouring and generative fill. Firefly’s first model is trained on Adobe Stock images, openly licensed content and other public domain content without copyright restrictions. As a result, Firefly generates commercially viable, professional-quality content and is designed to be embedded directly into creators’ workflows.
Enterprises will be able to extend Firefly with their own creative collateral in order to generate content featuring the company’s images, vectors and brand language. Following the integration of Firefly across Adobe Experience Cloud applications, for example, marketing organisations will be able to use Firefly to speed up production for their content supply chains.
Creativity and Design
Deeply integrating AI into Photoshop’s core tools is intended to speed up work particularly at the ideation stage, giving users precise creative control over the quality of the final content. Generative Fill automatically matches the perspective, lighting and style of images, replacing the repetitive tasks typically needed ahead of seeing any results. The idea is to expand creative expression and productivity by allowing creators to use natural language and concepts to generate digital content much faster than has been possible until now.
For example, adding, extending or removing content from images can be done with simple text prompts. Because newly generated content is placed on its own layers, users retain a lot of flexibility. You can iterate through a progression of creative possibilities and reverse the effects you are not happy with, without impacting the original image. These steps can be performed as fast as the user can type.
Generative Fill is also available as a new module in the Firefly beta for users interested in testing the new capabilities online.
Generative Fill supports Content Credentials, so that people know whether a piece of content was created by a human, AI-generated or AI-edited. Adobe says that Content Credentials remain associated with content wherever it is used, published or stored, enabling proper attribution and helping consumers make informed decisions about digital content. This system was developed by the Content Authenticity Initiative, which Adobe founded and which recently surpassed 1,000 members.
Adobe also said that they are ‘taking a creator-focused approach to building generative AI in a way that enables users to monetize their talents’, similar to the development of Adobe Stock and Behance. They are developing a compensation model for Adobe Stock contributors, taking steps to prevent artists’ names from being used in Adobe’s generative AI actions and pushing for open industry standards through the CAI, including a universal ‘Do Not Train’ tag. Details will be shared once Firefly is out of beta.
More Updates from Adjustment Presets to Gradients
The other new features in Photoshop are mainly updates to existing tools. For instance, 32 new Adjustment Presets have been added to the Adjustments panel. The Presets are filters that speed up complex tasks by previewing and changing the appearance of images in a few steps. The user can hover over each one to see what the image would look like with each preset applied before selecting it. Once selected, it can be further refined by editing the automatically created adjustment layers in the layers panel. This is a fast way to achieve a distinctive look and feel.
The Remove Tool is an AI-powered brush for replacing unwanted objects: brush over an object and the tool fills in the area, maintaining the integrity of nearby objects and creating an uninterrupted transition even on complex and varied backgrounds. It can save a lot of time when removing larger objects and matching the smooth focus shift across the image. It is also useful when the object to remove is near other objects, has an essential structure behind it, or appears against a background where the focus varies.
The Contextual Task Bar is an on-screen menu that recommends the most relevant next steps in several workflows. It reduces the number of clicks required to complete a project and makes the most common actions more easily accessible. For example, when an object is selected, the Contextual Task Bar appears below the selection and suggests actions for selection refinement that you might want to use next, such as Select and Mask, Feather, Invert, Create Adjustment Layer, Fill Selection, or generating something with the new Generative Fill capabilities.
The Gradients creation feature now includes new on-canvas controls, which give you precise control over many aspects of the gradient in real time. An automatic live preview gives an immediate view of how the changes you make affect your image. Non-destructive edits to gradients are also possible, which means gradients can be edited without permanently altering the original image. www.adobe.com