Abacus Studio is your all-in-one AI creative powerhouse. You can generate, edit, and enhance images and videos in one sleek chat-style workspace. It brings together cutting-edge models from OpenAI, Google DeepMind, xAI, and Abacus.AI, so you don’t have to jump between tools to create pro-level visuals.
Auto Mode is your easiest, fastest path. Just type what you want, and Abacus Studio automatically picks the best model, resolution, and settings for the job. Static scene? It routes to image generation. Motion-heavy prompt? It routes to video generation. Use Auto Mode when you want great results without manual setup. Switch to Image or Video mode when you want tighter control.
Here’s the quick breakdown:
You get a stacked lineup of state-of-the-art models across image, video, and speech:
Yes, you can. Hit the microphone button in the prompt bar and speak in your preferred language. Abacus Studio supports multilingual speech recognition and turns your voice into a prompt. For the best non-English accuracy, align your browser language settings with the language you’re speaking.
Open My Generations and pick the output you want to revisit. You’ll see the original prompt there. Copy it into the prompt bar, tweak it, and regenerate. It’s a super quick way to iterate on ideas that already worked.
You can upload PNG, JPG/JPEG, and WebP for image-to-image and image-to-video workflows. Add files through the “+” attachment menu, paste from your clipboard, or import from Google Drive. For Motion Control video references, MP4 is the recommended format.
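If you're scripting your asset prep before uploading, a quick local check against the accepted formats can save a failed upload. This is an illustrative sketch only (not an Abacus Studio API); the function name and constants are assumptions based on the formats listed above.

```python
# Illustrative sketch, not an Abacus Studio API: check a local file's
# extension against the image formats the upload flow accepts.
from pathlib import Path

SUPPORTED_IMAGE_FORMATS = {".png", ".jpg", ".jpeg", ".webp"}  # per the docs above
RECOMMENDED_VIDEO_FORMAT = ".mp4"  # recommended for Motion Control references

def is_supported_image(filename: str) -> bool:
    """Return True if the file extension is accepted for image workflows."""
    return Path(filename).suffix.lower() in SUPPORTED_IMAGE_FORMATS

print(is_supported_image("reference.PNG"))  # extension check is case-insensitive
print(is_supported_image("reference.gif"))  # GIF is not in the supported list
```

The check is purely by extension; the platform itself validates the actual file contents on upload.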
You’ve got a powerhouse image stack: GPT Image 1.5, GPT Image 2, Nano Banana 2, Nano Banana Pro, Seedream 4.5, Midjourney, Grok Imagine Image, FLUX.2 [Pro], Hunyuan Image 3.0, Wan 2.7, Imagen 4, Recraft SVG, and Ideogram 3.0. Each model shines in different areas like photorealism, artistic style, text rendering, or vector output.
Abacus Studio gives you multiple creation lanes:
Recraft SVG is purpose-built for vector creation. Instead of pixel-based images, it outputs true SVG files that scale infinitely without losing quality. That makes it ideal for logos, icons, illustrations, and reusable design assets. You also get style control with options like Engraving, Line Art, Linocut, and Stamp.
It depends on the model you choose. Most support core resolutions like 1024x1024 (1:1), plus common landscape and portrait formats such as 1536x1024 (3:2) and 1024x1536 (2:3). Some models also offer 16:9 and 9:16 for widescreen and vertical content. The settings panel shows the exact options available once you select a model.
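The resolution-to-ratio pairings above are simple arithmetic: divide both dimensions by their greatest common divisor. A quick sketch (not tied to any Abacus Studio API) to verify any resolution you're given:

```python
# Reduce a pixel resolution to its aspect ratio, matching the pairings
# listed above (1024x1024 -> 1:1, 1536x1024 -> 3:2, and so on).
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1024, 1024))  # 1:1
print(aspect_ratio(1536, 1024))  # 3:2
print(aspect_ratio(1024, 1536))  # 2:3
```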
When you enable Rewrite Prompt, AI optimizes your input before generation. It turns rough ideas into richer, model-friendly prompts with better detail on lighting, composition, style, and scene clarity. Bottom line: you usually get stronger outputs with less manual prompt engineering.
Absolutely. Run the same prompt multiple times and you’ll get unique variations because AI generation includes natural randomness. It’s one of the best ways to explore different creative directions fast.
Vector generation supports these specialized styles:
Upload an existing image, describe your edits, and let the model handle the heavy lifting. Abacus Studio supports GPT Image 1.5 [Edit], GPT Image 2 [Edit], and Qwen Image Edit for this flow. You can change backgrounds, tweak colors, add or remove objects, or shift visual style while preserving untouched regions.
Magnific Upscaler includes nine style presets so you can control how details are enhanced:
You can upscale at 2x, 4x, 8x, or 16x. The maximum final resolution depends on your source image size and the factor you choose.
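The output size is just the source dimensions multiplied by the chosen factor. A back-of-envelope sketch (illustrative only, not an Abacus Studio API):

```python
# Compute the output resolution after upscaling: source dimensions
# multiplied by the chosen factor (2x, 4x, 8x, or 16x).
def upscaled_size(width: int, height: int, factor: int) -> tuple[int, int]:
    if factor not in (2, 4, 8, 16):
        raise ValueError("unsupported upscale factor")
    return width * factor, height * factor

print(upscaled_size(1024, 1024, 4))  # (4096, 4096)
print(upscaled_size(512, 768, 2))    # (1024, 1536)
```

This is also why starting from a low-res source with a modest factor works well: a 512px image at 16x would target 8192px, which asks the model to reconstruct far more detail than a 2x or 4x pass.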
Yes. You can upscale any image in Abacus Studio, whether it was generated inside the platform or uploaded from elsewhere.
Use this winning combo: pick the style preset that matches your content, start with a lower upscale factor if your source is low-res, and include a clear descriptive prompt to guide reconstruction.
You get a top-tier video roster: Sora 2, Seedance 2.0, Wan 2.5, Seedance 1.5 Pro, Hailuo 2, Kling AI v3, Kling AI O3, Kling v2.6 Motion Control, Luma Labs, Veo 3.1, Veo 3.1 Lite, and Grok Imagine Video. Different models excel at different things, including cinematic quality, physics realism, photorealism, and controlled motion.
You can choose from multiple generation styles:
Not sure which one to use? Auto Mode has your back and picks for you.
Duration options are model-dependent. Common choices include 4, 5, 10, and 15 seconds.
Resolution support varies by model. Most offer 720p and 1080p, and select models support 4K. Resolution choices are tied to aspect ratio selection.
These models are trained to simulate real-world physics, so motion looks believable. You’ll see more realistic gravity, momentum, collisions, fluid behavior, and material response.
Yes, all three workflows are supported:
Supported options include:
Motion Control (powered by Kling v2.6 Motion Control) lets you steer movement with a reference image or video. Upload a reference showing the motion path, camera behavior, or movement vibe you want, and the model applies that motion profile to your new output.
Reference videos for Motion Control can be up to 30 seconds long.
Upload a portrait image or video, then type text or pick a built-in voice profile. Abacus Studio generates the speech and naturally syncs lip movement, jaw motion, and facial expressions to it. Speech generation is powered by ElevenLabs, OpenAI, and Hume models.
You get 10 distinct voice profiles across ElevenLabs, OpenAI, and Hume. You can preview voices before generation so you can quickly choose the best fit.
The 10 built-in voice profiles have fixed tone and speaking style characteristics.
Topaz Upscaler supports 2x and 4x video upscaling.
Yes. In the video upscaling flow, you can set a target FPS during processing.
The “+” menu is your upload hub for reference assets. You can:
Yes. Abacus Studio integrates directly with Google Drive, so you can import image and video assets into your workflow quickly.
Chaining is native to the conversational experience. Just run your workflow step by step:
With Rewrite Prompt turned on, an AI optimization layer expands your prompt before it reaches the generation model. It adds sharper detail around style, composition, lighting, and scene intent so output fidelity improves.
Abacus Studio is offered in two tiers:
Credit usage depends on the model, resolution, and output type. You’ll see exact credit cost in the model settings panel before you generate.
Yes. Higher resolutions require more compute, so they consume more credits.
Yes. Video credit usage scales with duration.
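To budget ahead of time, you can sketch the scaling described above. The rates and multipliers below are made-up placeholders for illustration only; always check the model settings panel for the real credit cost before generating.

```python
# Hypothetical credit estimator -- the base rate and resolution multipliers
# here are assumed values, NOT real Abacus Studio pricing. It only models
# the documented behavior: credits scale with duration and resolution.
RESOLUTION_MULTIPLIER = {"720p": 1.0, "1080p": 1.5, "4k": 3.0}  # assumed

def estimate_video_credits(base_per_second: float, seconds: int, resolution: str) -> float:
    """Linear scaling with duration, multiplied by a resolution factor."""
    return base_per_second * seconds * RESOLUTION_MULTIPLIER[resolution.lower()]

# e.g. an assumed 10-credit/second model, 5 seconds at 1080p:
print(estimate_video_credits(10, 5, "1080p"))  # 75.0
```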
Upscaling is its own generation step, so it uses additional credits.
Yes. You can track real-time credit balance and usage history in the Abacus.AI billing dashboard.
If your balance hits zero, you won’t be able to start new generations until you add more credits.
Yes. Abacus Studio has Basic and Pro tiers as listed above, and it’s also available through ChatLLM Teams subscription plans.
Commercial rights depend on each model’s license terms. In general, content from Abacus Studio can be used commercially, but certain model-specific restrictions may apply.