Adobe is pushing deeper into AI‑driven workflows with the launch of its Firefly AI Assistant, a conversational agent designed to operate across Photoshop, Premiere Pro, Illustrator, Lightroom, Express and more from a single interface. Announced alongside new video, image and collaboration tools, the assistant is Adobe’s clearest attempt yet to reposition Creative Cloud around agentic AI — systems that can take a goal, choose the right tools, and execute multi‑step creative tasks.
At the same time, Adobe is expanding Firefly’s model lineup with new third‑party video generators, rolling out a rethought color workflow in Premiere Pro, and trying to solve distributed media access via Frame.io Drive. Together, the moves are aimed squarely at creative professionals and enterprise teams weighing how far to lean into AI for daily production work.
What the Firefly AI Assistant actually does
The Firefly AI Assistant is intended to change how users interact with Adobe tools. Instead of opening individual apps and manually choosing the right feature for each step, creators describe the end result in natural language. The assistant then determines which tools to call and in what order, and carries out the workflow end to end.
The product is the commercialization of Project Moonlight, a research prototype Adobe first showed at MAX 2025. According to Alexandru Costin, Adobe's VP of AI & Innovation, the company applied lessons from that prototype and a subsequent private beta to build a more ambitious architecture capable of spanning the Creative Cloud suite.
Under the hood, Adobe says the assistant can draw on roughly 100 tools and “skills.” These cover generative outputs (images and video), precision image editing, layout adaptation, and even review workflows via Frame.io. All of this is exposed through a single conversational interface in the Firefly web app, which maintains context across sessions.
Adobe is also prepackaging complex workflows into what it calls Creative Skills: multi‑step templates for tasks like portrait retouching or social asset creation. These can be invoked with a single prompt and customized to match a creator’s style. Over time, the assistant is designed to learn a user’s preferred workflows and aesthetics, and to adapt its decisions based on media type — whether the user is working with stills, video, vectors, or brand assets.
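To make the pattern concrete, here is a minimal Python sketch of how a goal-driven agent might chain prepackaged steps while threading session context through each one. Every name in it (the tool registry, the `CreativeSkill` structure, the step names) is invented for illustration; Adobe has not published the assistant's internal APIs.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical tool registry: each entry is a named, callable step.
# None of these names come from Adobe's API; they are placeholders.
TOOLS: dict[str, Callable[[dict], dict]] = {
    "retouch_portrait": lambda ctx: {**ctx, "retouched": True},
    "resize_layout":    lambda ctx: {**ctx, "sizes": ["1:1", "9:16", "16:9"]},
    "send_for_review":  lambda ctx: {**ctx, "review_url": "frame.io/review/123"},
}

@dataclass
class CreativeSkill:
    """A prepackaged multi-step workflow, invoked by a single prompt."""
    name: str
    steps: list[str]                       # ordered tool names
    defaults: dict = field(default_factory=dict)

# A portrait-retouching skill might chain several tools like this.
PORTRAIT_SKILL = CreativeSkill(
    name="portrait_retouch",
    steps=["retouch_portrait", "resize_layout", "send_for_review"],
)

def run_skill(skill: CreativeSkill, context: dict) -> dict:
    """Execute each step in order, passing the session context along."""
    for step in skill.steps:
        context = TOOLS[step](context)
    return context

print(run_skill(PORTRAIT_SKILL, {"asset": "headshot.psd"}))
```

In a real agentic system the step list would be planned dynamically from the brief rather than hard-coded, which is precisely the difference Adobe draws between Creative Skills and older recorded macros.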
Outputs are delivered in native Adobe formats such as PSD, AI, and PRPROJ. That matters for working professionals who expect to move fluidly between fast AI‑driven edits and detailed manual adjustments. Costin describes this as a “continuum” between conversational and pixel‑perfect editing, where creatives decide how much control to retain at each step. The Firefly AI Assistant is slated to enter public beta in the coming weeks, though Adobe has not set a firm date.
Agentic workflows across Photoshop, Premiere, Illustrator, and more

From a workflow perspective, Firefly AI Assistant is Adobe’s bid to make agentic AI central to production rather than an optional add‑on. Instead of treating generative models as isolated features, the assistant is meant to orchestrate complex, multi‑app sequences on behalf of the user.
In practical terms, this could mean handling tasks that once required bouncing between several tools: generating concept imagery, adapting layouts to different aspect ratios, performing detailed photo touch‑ups, cutting short social videos from longer edits, and even routing work out for stakeholder review in Frame.io. The assistant’s ability to call around 100 discrete tools suggests Adobe is trying to encode many common production pipelines into a single, goal‑driven agent.
Because the system is aware of the content type in play, it can make different choices for a video‑heavy project than for a static brand toolkit. Over time, the learning component — remembering tools, steps, and style preferences — is intended to reduce repetitive setup for recurring work, which is particularly relevant to agencies and in‑house teams producing high volumes of similar content.
Notably, Adobe is framing agentic AI as an evolution of its existing automation features rather than a wholesale replacement of human craft. Costin compares Creative Skills to a “next‑generation” version of Photoshop Actions, long used by power users to script repetitive steps. The difference is that now the “macro” is assembled and adapted on the fly by an AI agent based on a conversational brief, rather than pre‑recorded by the user.
Pricing, generative credits, and what investors are watching
How Adobe monetizes Firefly AI Assistant is under close scrutiny from Wall Street, which has been cautious about the company’s AI revenue story. At launch, access to the assistant requires an active Adobe subscription that covers the relevant apps. For example, to have the agent invoke Photoshop’s cloud‑based capabilities, a user needs a subscription tier that includes Photoshop.
Generative actions triggered via the assistant will draw from the same pool of Firefly generative credits customers already use across Adobe’s platform. Costin notes that users will effectively “consume credits” whenever they leverage generative features through the assistant, consistent with existing Firefly usage.
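As a rough illustration of how such shared metering works, the sketch below debits a single credit pool per generative action. The per-action costs and the fail-closed behavior are assumptions made for the example; Adobe has not published the assistant's exact credit schedule.

```python
# Assumed per-action costs; the article states only that assistant usage
# consumes the same Firefly credits as direct use of generative features.
CREDIT_COSTS = {"generate_image": 1, "generate_video": 20}

class CreditPool:
    def __init__(self, balance: int):
        self.balance = balance

    def charge(self, action: str) -> None:
        cost = CREDIT_COSTS.get(action, 0)  # non-generative steps cost nothing
        if cost > self.balance:
            raise RuntimeError(f"insufficient credits for {action}")
        self.balance -= cost

pool = CreditPool(balance=100)
pool.charge("generate_image")   # 99 remaining
pool.charge("generate_video")   # 79 remaining
print(pool.balance)
```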
Adobe is signaling that this model is not fixed. Costin acknowledges that as Adobe better understands both the value creators derive from the assistant and the cost of running the underlying “brain” and conversation engine, pricing could change. For enterprise buyers and tech decision‑makers, that means the current alignment with existing SKUs and credits may evolve as usage patterns and infrastructure costs become clearer.
The financial stakes are substantial. In its most recent quarter, Adobe reported 10% year‑over‑year revenue growth to $6.4 billion, and said annual recurring revenue from AI standalone and add‑on products had reached $125 million, with CEO Shantanu Narayen projecting that figure would double within nine months. Whether Firefly AI Assistant becomes a meaningful driver of that growth will be an important data point for investors.
Third-party video models and the commercial safety trade-offs

Alongside the agent launch, Adobe is broadening Firefly’s access to external models by adding Kling 3.0 and Kling 3.0 Omni, video generators developed by Chinese tech company Kuaishou. Kling 3.0 emphasizes fast, high‑quality clips with built‑in storyboarding and audio‑visual sync, while the Omni version adds more granular controls over shot duration, camera moves and character motion across multi‑shot sequences.
These models join a roster that now exceeds 30, including Google’s Nano Banana 2 and Veo 3.1, Runway’s Gen‑4.5, Luma AI’s Ray3.14, Black Forest Labs’ FLUX.2[pro], and ElevenLabs’ Multilingual v2. The strategy is to give Firefly users model choice for different tasks and quality profiles, rather than constraining them to Adobe’s first‑party engines.
That choice has implications for commercial safety, particularly as agentic systems begin to choose models autonomously. Adobe maintains a distinction between its own Firefly models — trained on licensed Adobe Stock and public domain content, with associated commercial indemnity — and partner models, which can carry different safety and indemnity profiles.
Costin says customers have asked specifically for access to external models for ideation and other non‑production uses, where they may be more flexible about commercial constraints. For final production work, he notes that many customers want higher assurance from first‑party, commercially safer models.
This creates a nuanced risk profile once Firefly AI Assistant is empowered to pick models on its own. Costin points to Adobe’s Content Credentials system — the metadata and fingerprinting framework developed through the Content Authenticity Initiative — as the primary transparency mechanism. Content generated through different engines can carry distinct credentials, allowing users to see how an asset was created and decide whether it meets their commercial and legal thresholds. For enterprises, the varying indemnity levels between Adobe’s own models and its partners will be an important part of due diligence.
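Teams that want to automate this kind of check can already inspect Content Credentials with the open-source c2patool CLI from the Content Authenticity Initiative. The sketch below shells out to it and applies a hypothetical allow-list policy; the manifest field names shown are typical of current tool output but may vary by version, so treat them as assumptions.

```python
import json
import subprocess

# Hypothetical policy: only assets whose credentials name an approved
# generator pass. The allow-list contents are purely illustrative.
ALLOWED_GENERATORS = ("Adobe Firefly",)

def asset_is_approved(path: str) -> bool:
    # c2patool prints an asset's Content Credentials manifest as JSON.
    out = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if out.returncode != 0:
        return False  # no credentials found: fail closed
    store = json.loads(out.stdout)
    # Key names ("active_manifest", "manifests", "claim_generator") are
    # assumptions based on typical c2patool output.
    active = store.get("active_manifest", "")
    claim = store.get("manifests", {}).get(active, {})
    generator = claim.get("claim_generator", "")
    return any(g in generator for g in ALLOWED_GENERATORS)

print(asset_is_approved("hero_shot.jpg"))
```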
Nvidia, long-running agents, and infrastructure that isn’t here yet
Adobe’s agentic ambitions are also tied to its partnership with Nvidia, which is building an ecosystem for enterprise AI agents. The companies highlighted this collaboration at Nvidia’s GTC conference, but Adobe is clear that Nvidia’s technology has not yet landed in a shipping product.
Costin says Adobe is actively exploring Nvidia’s Nemotron models and tools such as Open Shell and Nemo Claw. These are aimed at enabling long‑running agent workflows in sandboxed environments — the kind of infrastructure needed when a single creative request may trigger dozens of model calls and tool invocations over an extended period.
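To see why that infrastructure matters, consider a bare-bones sketch of a resumable workflow. This is a generic checkpointing pattern, not Nvidia's or Adobe's implementation: each completed step is persisted so a restarted agent can skip work, and credits, it has already spent.

```python
import json
from pathlib import Path

CHECKPOINT = Path("agent_state.json")  # illustrative location

def execute_step(step: str) -> None:
    print(f"running {step}")  # placeholder for a model or tool call

def run_workflow(steps: list[str]) -> None:
    # Reload completed steps if a previous run was interrupted.
    done: list[str] = (
        json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else []
    )
    for step in steps:
        if step in done:
            continue                      # already completed before a restart
        execute_step(step)
        done.append(step)
        CHECKPOINT.write_text(json.dumps(done))  # persist after each step

run_workflow(["storyboard", "generate_clips", "assemble_edit", "review"])
```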
The implication for creative teams is that today’s Firefly AI Assistant represents only the first step. As workflows become longer and more autonomous, efficiency, cost management, and isolation of these agents will become more critical. Nvidia’s stack could ultimately underpin a future version of the assistant that can manage more complex, multi‑stage projects securely and at scale, but Adobe is explicit that this is still in the exploration phase rather than production.
New tools shipping now: Premiere Color Mode, Firefly Video Editor, and After Effects updates
While the agent is entering beta, Adobe is simultaneously shipping or beta‑releasing a range of more conventional feature upgrades aimed at working editors and designers.
In Premiere Pro, a new Color Mode is entering public beta. Adobe describes it as a first‑of‑its‑kind grading environment tailored specifically to editors, not dedicated colorists. Developed with input from hundreds of working editors in a private beta, the mode is intended to make color work more approachable — testers told Adobe they “actually enjoy color grading,” suggesting the company may be lowering one of post‑production’s more specialized barriers. General availability is expected later in 2026.
Firefly Video Editor is gaining several notable capabilities: Enhance Speech, migrated from Premiere and Adobe Podcast; direct Adobe Stock integration with access to over 800 million licensed assets; and streamlined color adjustment with slider‑based controls and one‑click looks.
On the still‑image side, Adobe is introducing Precision Flow, which generates semantic variations from a single prompt and lets users browse them via a slider. Costin characterizes it as combining “the best slider‑based control” with deep understanding of both the existing scene and plausible alternatives. AI Markup complements this by allowing users to mark up images directly — drawing where and how edits should apply.
For motion graphics, After Effects 26.2 adds Object Matte, an AI‑powered tool that speeds rotoscoping and masking. Users can create accurate mattes of moving subjects with a hover and click, refine with a Quick Selection brush, and polish edges using a Refine Edge tool, potentially saving substantial time in complex compositions.
Frame.io Drive and the push to make cloud media feel local

Beyond AI, Adobe is targeting a persistent bottleneck in distributed video production: media movement. Frame.io Drive, a new desktop application, is designed to make cloud‑hosted media behave like local files, reducing dependence on downloads, sync processes, or physically shipped hard drives.
Drive mounts Frame.io projects directly to the user’s operating system, so assets appear in Finder or Explorer. Underneath, Frame.io Mounted Storage streams media on demand as applications request it, while local caching supports smooth playback and editing. The technology builds on streaming capabilities from Suite Studios, and real‑time file access is included with every Frame.io account. Adobe emphasizes that content remains within Frame.io and is not shared with third parties.
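Adobe has not detailed Drive's internals, but the underlying pattern, serving reads from a local cache and fetching byte ranges from the cloud on a miss, is well established. A simplified read-through cache might look like the sketch below, with all names and the chunk size invented for the example.

```python
from pathlib import Path

CHUNK = 4 * 1024 * 1024            # assumed 4 MiB chunks
CACHE_DIR = Path("/tmp/frameio_cache")  # illustrative cache location

def fetch_range_from_cloud(asset_id: str, start: int, length: int) -> bytes:
    # Placeholder for an authenticated ranged HTTP request.
    return b"\x00" * length

def read_chunk(asset_id: str, index: int) -> bytes:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / f"{asset_id}.{index}"
    if cached.exists():
        return cached.read_bytes()    # cache hit: behaves like a local file
    data = fetch_range_from_cloud(asset_id, index * CHUNK, CHUNK)
    cached.write_bytes(data)          # populate the cache for later reads
    return data

print(len(read_chunk("asset-123", 0)))
```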
Strategically, this repositions Frame.io from a review‑and‑approval layer at the end of the pipeline to a central media hub from initial capture through delivery. If adoption is strong, this could deepen Adobe’s integration with professional video workflows by making Frame.io the de facto source of truth for distributed teams. Frame.io Drive and Mounted Storage are rolling out in phases, starting with Enterprise customers, with other tiers following and a waitlist available.
Trust, competition, and what this means for creative teams
All of these launches arrive at a complicated moment for Adobe. Firefly was first introduced in March 2023 as a family of generative models focused on image and text effects, with a strong emphasis on commercial safety via licensed training data. Since then, Adobe has moved quickly into video generation, multi‑model access, and now agentic workflows, mirroring a broader industry shift from standalone AI features to AI‑native systems.
But competition is intense. Video‑first AI players such as Runway and Pika, design platforms like Canva, and foundation model providers including OpenAI, Google and Anthropic (whose models Adobe says it plans to make available through the Firefly AI Assistant) are all targeting the same budgets and mindshare. At the corporate level, Adobe is also navigating the impending departure of CEO Shantanu Narayen, an actively exploited Acrobat Reader zero‑day (CVE‑2026‑34621) that was only recently patched, a U.K. antitrust investigation over cancellation fees, and a $75 million lawsuit settlement.
Adobe’s response is to lean heavily on its existing moat: deep integration of AI into mature, professional‑grade tools that are already entrenched in creative and post‑production pipelines. Costin frames the agentic shift as a way to elevate human roles from doing every step to directing outcomes — likening creators to creative directors guiding the assistant rather than executing every operation themselves.
For creative professionals and tech decision‑makers, the key questions are less about whether Adobe can build these systems, and more about trust, control, and fit with existing workflows. The Firefly AI Assistant promises speed and orchestration, but it also asks teams to let an AI broker between them and the tools they’ve spent years mastering. How much autonomy to grant the agent, which models to allow for which stages, and how to interpret Content Credentials metadata will become practical policy decisions, not theoretical debates.
Ultimately, the trajectory of Firefly AI Assistant will help determine not only Adobe’s competitive position, but also how comfortable the industry becomes with creative work executed by agents that understand both tools and context — and that increasingly act on behalf of, rather than purely at the direction of, human creators.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.




