Abacus Studio - FAQs

Platform Overview & Getting Started

What is Abacus Studio?

Abacus Studio is your all-in-one AI creative powerhouse. You can generate, edit, and enhance images and videos in one sleek chat-style workspace. It brings together cutting-edge models from OpenAI, Google DeepMind, xAI, and Abacus.AI, so you don’t have to jump between tools to create pro-level visuals.

How does Auto Mode work and when should I use it?

Auto Mode is your easiest, fastest path. Just type what you want, and Abacus Studio automatically picks the best model, resolution, and settings for the job. Static scene? It routes to image generation. Motion-heavy prompt? It routes to video generation. Use Auto Mode when you want great results without manual setup. Switch to Image or Video mode when you want tighter control.

What’s the difference between the Featured, Images, Videos, and My Generations tabs?

Here’s the quick breakdown:

  • Featured — A curated highlight reel of standout community creations. Great for inspiration.
  • Images — A gallery of all publicly shared image generations.
  • Videos — A gallery of all publicly shared video generations.
  • My Generations — Your private vault of everything you’ve created. It’s the fastest place to find, reuse, and iterate on your own work.

What models are available in Abacus Studio?

You get a stacked lineup of state-of-the-art models across image, video, and speech:

  • Image Models: GPT Image 1.5, GPT Image 2, Nano Banana 2, Nano Banana Pro, Seedream 4.5, Midjourney, Grok Imagine Image, FLUX.2 [Pro], Hunyuan Image 3.0, Wan 2.7, Imagen 4, Recraft SVG, Ideogram 3.0, Magnific Upscaler, GPT Image 1.5 [Edit], GPT Image 2 [Edit], and Qwen Image Edit.
  • Video Models: Sora 2, Seedance 2.0, Wan 2.5, Seedance 1.5 Pro, Hailuo 2, Kling AI v3, Kling AI O3, Kling v2.6 Motion Control, Luma Labs, Veo 3.1, Veo 3.1 Lite, Grok Imagine Video, and Topaz Upscaler.
  • Speech Models: ElevenLabs, OpenAI, and Hume (supporting Text-to-Speech, Speech-to-Text, and Speech-to-Speech workflows).

Can I use voice input in languages other than English?

Yes, you can. Hit the microphone button in the prompt bar and speak in your preferred language. Abacus Studio supports multilingual speech recognition and turns your voice into a prompt. For the best non-English accuracy, align your browser language settings with the language you’re speaking.

How do I reuse prompts from previous generations?

Open My Generations and pick the output you want to revisit. You’ll see the original prompt there. Copy it into the prompt bar, tweak it, and regenerate. It’s a super quick way to iterate on ideas that already worked.

What file formats can I upload for image-to-image or image-to-video?

You can upload PNG, JPG/JPEG, and WebP for image-to-image and image-to-video workflows. Add files through the “+” attachment menu, paste from your clipboard, or import from Google Drive. For Motion Control video references, MP4 is the recommended format.
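
As a rough client-side sanity check (a hypothetical helper, not part of Abacus Studio), you can verify a file against the formats listed above before uploading:

```python
# Illustrative stub only — Abacus Studio performs its own validation on upload.
IMAGE_FORMATS = {".png", ".jpg", ".jpeg", ".webp"}   # accepted image uploads
MOTION_REFERENCE_FORMATS = {".mp4"}                  # recommended for Motion Control references

def is_supported_upload(filename: str, motion_control: bool = False) -> bool:
    """Check a filename's extension against the formats listed in the FAQ."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    allowed = MOTION_REFERENCE_FORMATS if motion_control else IMAGE_FORMATS
    return ext in allowed

print(is_supported_upload("scene.PNG"))                      # True
print(is_supported_upload("ref.mp4", motion_control=True))   # True
print(is_supported_upload("vector.svg"))                     # False
```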

Image Generation

What image generation models are available?

You’ve got a powerhouse image stack: GPT Image 1.5, GPT Image 2, Nano Banana 2, Nano Banana Pro, Seedream 4.5, Midjourney, Grok Imagine Image, FLUX.2 [Pro], Hunyuan Image 3.0, Wan 2.7, Imagen 4, Recraft SVG, and Ideogram 3.0. Each model shines in different areas like photorealism, artistic style, text rendering, or vector output.

What are the different image generation options available?

Abacus Studio gives you multiple creation lanes:

  • Advanced photorealistic generation — Best for realistic scenes, readable text, complex compositions, and precise instruction-following.
  • Fast creative generation — Built for speed and strong creative output. Perfect for rapid ideation and iteration.
  • High-fidelity artistic generation — Dialed in for rich textures, fine details, and premium-quality artistic renders.

How does vector graphics generation work?

Recraft SVG is purpose-built for vector creation. Instead of pixel-based images, it outputs true SVG files that scale infinitely without losing quality. That makes it ideal for logos, icons, illustrations, and reusable design assets. You also get style control with options like Engraving, Line Art, Linocut, and Stamp.

What resolution options are available for each image model?

It depends on the model you choose. Most support core resolutions like 1024x1024 (1:1), plus common landscape and portrait formats such as 1536x1024 (3:2) and 1024x1536 (2:3). Some models also offer 16:9 and 9:16 for widescreen and vertical content. You’ll see exact options dynamically in the settings panel when a model is selected.
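
The presets above map to simple aspect ratios. A quick sketch (plain Python, unrelated to any Abacus API) shows how a width × height pair reduces to the ratio label shown in the settings panel:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a width x height resolution to its simplest aspect ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1024, 1024))  # 1:1
print(aspect_ratio(1536, 1024))  # 3:2
print(aspect_ratio(1024, 1536))  # 2:3
```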

How does the Rewrite Prompt feature improve my results?

When you enable Rewrite Prompt, AI optimizes your input before generation. It turns rough ideas into richer, model-friendly prompts with better detail on lighting, composition, style, and scene clarity. Bottom line: you usually get stronger outputs with less manual prompt engineering.

Can I generate multiple variations of the same prompt?

Absolutely. Run the same prompt multiple times and you’ll get unique variations because AI generation includes natural randomness. It’s one of the best ways to explore different creative directions fast.

What styles does vector graphics generation support (Engraving, Line Art, etc.)?

Vector generation supports these specialized styles:

  • Engraving — Crosshatch-heavy, vintage line shading like currency art or classic prints.
  • Line Art — Clean line-driven drawings with minimal fill and modern simplicity.
  • Linocut — Bold, high-contrast block-print look with thick strokes.
  • Stamp — Simplified stamp-like visuals with clear outlines and flat color areas.

Image Editing & Enhancement

How does Image-to-Image editing work?

Upload an existing image, describe your edits, and let the model handle the heavy lifting. Abacus Studio supports GPT Image 1.5 [Edit], GPT Image 2 [Edit], and Qwen Image Edit for this flow. You can change backgrounds, tweak colors, add or remove objects, or shift visual style while preserving untouched regions.

What are the 9 style presets in our upscaling technology?

Magnific Upscaler includes nine style presets so you can control how details are enhanced:

  • Standard — Balanced enhancement for general use.
  • Portraits — Tuned for facial features and skin detail.
  • Art & Illustrations — Preserves stylization and brush/line character.
  • Videogame — Optimized for game assets and CG scenes.
  • Science Fiction & Horror — Pushes dramatic, atmospheric detail.
  • 3D Renders — Sharpens geometry, lighting, and material detail.
  • Nature & Landscapes — Enhances terrain, foliage, and environmental texture.
  • Film & Photography — Adds cinematic crispness and photographic sharpness.
  • Books & Comics — Improves line work, lettering, and panel clarity.

What’s the maximum upscale factor and resolution?

You can upscale at 2x, 4x, 8x, or 16x. The final maximum resolution depends on your original input size and the upscale factor you pick.
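
Assuming each factor scales both dimensions linearly (the usual meaning of 2x/4x upscaling), the output size is easy to estimate:

```python
def upscaled_size(width: int, height: int, factor: int) -> tuple[int, int]:
    """Estimate output dimensions for a given upscale factor.
    Assumes the factor applies per side, e.g. 4x turns 1024x1024 into 4096x4096."""
    if factor not in (2, 4, 8, 16):
        raise ValueError("supported factors are 2x, 4x, 8x, and 16x")
    return width * factor, height * factor

print(upscaled_size(1024, 1024, 8))   # (8192, 8192)
print(upscaled_size(1536, 1024, 4))   # (6144, 4096)
```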

Can I upscale images generated by any model?

Yes. You can upscale any image in Abacus Studio, whether it was generated inside the platform or uploaded from elsewhere.

How do I preserve details when upscaling?

Use this winning combo: pick the style preset that matches your content, start with a lower upscale factor if your source is low-res, and include a clear descriptive prompt to guide reconstruction.

Video Generation

What video generation models are available?

You get a top-tier video roster: Sora 2, Seedance 2.0, Wan 2.5, Seedance 1.5 Pro, Hailuo 2, Kling AI v3, Kling AI O3, Kling v2.6 Motion Control, Luma Labs, Veo 3.1, Veo 3.1 Lite, and Grok Imagine Video. Different models excel at different things, including cinematic quality, physics realism, photorealism, and controlled motion.

What video generation options are available?

You can choose from multiple generation styles:

  • Cinematic generation — Great for storytelling, multi-shot coherence, and creative concepts.
  • Physics-accurate generation — Best for realistic motion behavior, including products and mechanical movement.
  • Photorealistic generation — Focused on high realism and visual clarity.

Not sure which one to use? Auto Mode has your back and picks for you.

What duration options are available (4s, 10s, 15s)?

Duration options are model-dependent. Common choices include 4, 5, 10, and 15 seconds.

What resolution options does each video model support?

Resolution support varies by model. Most offer 720p and 1080p, and select models support 4K. Resolution choices are tied to aspect ratio selection.

How does physics-accurate motion work?

These models are trained to simulate real-world physics, so motion looks believable. You’ll see more realistic gravity, momentum, collisions, fluid behavior, and material response.

Can I generate videos from text, images, or both?

Yes, all three workflows are supported:

  • Text-to-Video — Type your scene and generate from scratch.
  • Image-to-Video — Upload a still image and animate it.
  • Image + Text — Upload an image, then guide the motion/style/action with a text prompt.

What aspect ratios are supported for video?

Supported options include:

  • 16:9 — Standard widescreen.
  • 9:16 — Vertical format.
  • 1:1 — Square format.
  • 3:2 / 2:3 — Available on select models for photo-style framing.

Video Editing & Enhancement

How does Motion Control work?

Motion Control (powered by Kling v2.6 Motion Control) lets you steer movement with a reference image or video. Upload a reference showing the motion path, camera behavior, or movement vibe you want, and the model applies that motion profile to your new output.

What’s the difference between using a reference video vs reference image for motion?

  • Reference Video — Gives frame-by-frame motion guidance.
  • Reference Image — Gives a starting visual anchor for generation.

How long can reference videos be?

Reference videos for Motion Control can be up to 30 seconds long.

How does lip-sync technology work?

Upload a portrait image or video, then type text or pick a built-in voice profile. Abacus Studio generates speech and syncs lips, jaw movement, and facial expressions to match naturally. Speech generation is powered by ElevenLabs, OpenAI, and Hume models.

What voice profiles are available?

You get 10 distinct voice profiles across ElevenLabs, OpenAI, and Hume. You can preview voices before generation so you can quickly choose the best fit.

Can I customize voice speed or tone?

No. The 10 built-in voice profiles come with fixed tone and speaking-style characteristics, so speed and tone can’t be adjusted individually.

What upscale factors does video upscaling support?

Topaz Upscaler supports 2x and 4x video upscaling.

Can I adjust target FPS when upscaling videos?

Yes. In the video upscaling flow, you can set a target FPS during processing.

Advanced Features

How does the “+” Attachment menu work?

The “+” menu is your upload hub for reference assets. You can:

  • Upload from your device — Pull in images or videos directly from your computer.
  • Paste from clipboard — Drop in screenshots and copied images instantly.
  • Import from Google Drive — Bring in Drive files without downloading them first.

Can I import files from Google Drive?

Yes. Abacus Studio integrates directly with Google Drive, so you can import image and video assets into your workflow quickly.

How do I chain workflows (e.g. generate image → upscale → image-to-video)?

Chaining is native to the conversational experience. Just run your workflow step by step:

  • Step 1: Generate an image with any image model.
  • Step 2: Send it to Magnific Upscaler for higher resolution and added detail.
  • Step 3: Use that upscaled image as input for an image-to-video model.
  • Step 4: Optionally upscale the final video with Topaz Upscaler.
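
The chain above is just data flowing from step to step. This sketch uses placeholder functions (illustrative stubs, not a real Abacus Studio API) to show the order of operations:

```python
# Placeholder functions standing in for chat actions in Abacus Studio;
# these are illustrative stubs, not a real API.
def generate_image(prompt: str) -> dict:
    return {"type": "image", "prompt": prompt, "steps": ["generate"]}

def upscale_image(image: dict, factor: int) -> dict:
    image["steps"].append(f"upscale_{factor}x")
    return image

def image_to_video(image: dict, motion_prompt: str) -> dict:
    return {"type": "video", "steps": image["steps"] + ["image_to_video"]}

def upscale_video(video: dict, factor: int) -> dict:
    video["steps"].append(f"video_upscale_{factor}x")
    return video

# Image → upscale → image-to-video → video upscale, in one chain.
result = upscale_video(
    image_to_video(
        upscale_image(generate_image("a misty harbor at dawn"), 4),
        "slow pan across the water",
    ),
    2,
)
print(result["steps"])  # ['generate', 'upscale_4x', 'image_to_video', 'video_upscale_2x']
```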

What’s the difference between Auto, Image, and Video modes?

  • Auto Mode — Abacus Studio picks the best model and output type based on your prompt.
  • Image Mode — Forces image output.
  • Video Mode — Forces video output.
How does the Rewrite Prompt toggle optimize my inputs?

With Rewrite Prompt turned on, an AI optimization layer expands your prompt before it reaches the generation model. It adds sharper detail around style, composition, lighting, and scene intent so output fidelity improves.

Pricing & Credits

How much does Abacus Studio cost?

Abacus Studio is offered in two tiers:

  • Basic Tier — $10/month: Limited to a maximum of 3 video conversations and a cap of 2,500 credits per conversation. Exceeding either limit requires an upgrade.
  • Pro Tier — an additional $10/month: Unlocks unrestricted access to both Abacus AI Agent and Abacus Studio, as long as your account has active credits.
How many credits does each model consume?

Credit usage depends on the model, resolution, and output type. You’ll see exact credit cost in the model settings panel before you generate.

Do higher resolutions consume more credits?

Yes. Higher resolutions require more compute, so they consume more credits.

Do longer videos (15s vs 4s) consume more credits?

Yes. Video credit usage scales with duration.

How does upscaling affect credit consumption?

Upscaling is its own generation step, so it uses additional credits.

Can I see my credit usage history?

Yes. You can track real-time credit balance and usage history in the Abacus.AI billing dashboard.

What happens if I run out of credits?

If your balance hits zero, you won’t be able to start new generations until you add more credits.

Are there different pricing tiers?

Yes. Abacus Studio has Basic and Pro tiers as listed above, and it’s also available through ChatLLM Teams subscription plans.

Do I have commercial usage rights for my generations?

Commercial rights depend on each model’s license terms. In general, content from Abacus Studio can be used commercially, but certain model-specific restrictions may apply.

I have more questions / feedback on the platform!

Love that — we’d be happy to hear from you. If you’ve got questions or feedback, contact us at support@abacus.ai.
Copyright © 2026 Abacus.AI. All Rights Reserved