🎬 ComfyUI Practical Workflows

Step-by-step guides for your creative projects

🎯 RealSense D435i → Game Character

⭐⭐⭐ Intermediate ⏱️ 15-30 minutes

Transform real-world 3D scans into stylized game characters using depth maps and AI

📦 Nodes & Models Required:

  • Load Image - Input depth map
  • ControlNet Depth - Depth guidance
  • SDXL Checkpoint - Base model
  • CLIP Text Encode - Prompts
  • KSampler - Generation
  • VAE Decode - Final image
Step 1: Capture Depth Map with RealSense

Use your RealSense D435i camera to capture a depth map of your subject:

```bash
# Install RealSense SDK if not already installed
sudo apt-get install librealsense2-utils

# Capture depth image interactively
realsense-viewer
```

Or use Python (requires pyrealsense2, numpy, and opencv-python):

```python
import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    # Get depth frame
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_image = np.asanyarray(depth_frame.get_data())

    # Raw z16 values grow with distance; normalize to 8-bit and invert
    # so near = white, the convention ControlNet depth models expect
    depth_8bit = cv2.normalize(depth_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    depth_8bit = 255 - depth_8bit

    # Save depth map
    cv2.imwrite('/data/comfyui/input/depth_scan.png', depth_8bit)
finally:
    pipeline.stop()
```
💡 Tip: Position subject 1-3 meters from camera for best depth data. Good lighting helps even though it's a depth sensor.
Step 2: Load Depth Map in ComfyUI

In ComfyUI workspace:

  • Add Load Image node
  • Select your depth map: /data/comfyui/input/depth_scan.png
  • Connect to ControlNet Apply node
⚠️ Note: Depth maps are grayscale: white = close, black = far. ControlNet depth models expect this convention, so invert the image if your sensor output gets brighter with distance.
Step 3: Set Up ControlNet Depth

Configure the ControlNet node:

  • Model: control_v11f1p_sd15_depth.pth (for SD1.5) OR controlnet-depth-sdxl-1.0.safetensors (for SDXL)
  • Strength: 0.7-1.0 (higher = more depth accuracy, lower = more creative freedom)
  • Start/End: 0.0 to 1.0 (full influence)
💡 Recommended Settings: Start with strength 0.8 for first tries. You can lower it if the output is too literal.
Step 4: Write Your Character Prompt

Add CLIP Text Encode (Prompt) nodes for positive and negative:

Positive Prompt: "fantasy warrior character, detailed armor, heroic pose, high quality, professional 3D game asset, Unreal Engine style, dramatic lighting, 8k"

Negative Prompt: "blurry, low quality, distorted, malformed, ugly, bad anatomy, duplicate, watermark"
💡 Style Tip: Add art style keywords like "cel-shaded", "realistic", "anime", "low-poly" to match your game's aesthetic.
Step 5: Configure Generation Settings

In the KSampler node:

  • Steps: 25-30 (higher = better quality but slower)
  • CFG Scale: 7-9 (how closely to follow prompt)
  • Sampler: DPM++ 2M Karras (good quality/speed balance)
  • Scheduler: Karras
  • Denoise: 1.0 (full generation from depth)
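Beyond the GUI, these same steps can be queued programmatically: ComfyUI serves an HTTP API that accepts workflows as JSON at `/prompt`. A minimal sketch, assuming a default local install and a graph exported via the menu's "Save (API Format)"; the filename and node id shown are placeholders:

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default ComfyUI address

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow dict the way /prompt expects it."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = COMFYUI_URL) -> dict:
    """POST the workflow to ComfyUI; the response includes a prompt_id."""
    req = urllib.request.Request(
        server + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # "depth_character_workflow.json" is a placeholder: export your own
    # graph with "Save (API Format)" in the ComfyUI menu
    with open("depth_character_workflow.json") as f:
        wf = json.load(f)
    # Node ids depend on your graph; adjust KSampler inputs by id, e.g.:
    # wf["3"]["inputs"]["steps"] = 28
    print(queue_prompt(wf))
```

Handy for batch-generating variations of the same character overnight.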
Step 6: Generate and Refine

Click Queue Prompt to generate!

✅ What to Expect:

  • Generation takes 30-90 seconds on P5000
  • Character will maintain depth/pose from scan
  • Style will match your prompt
  • May need 2-3 attempts to get perfect result
💡 Pro Tip: Save good results, then use img2img with lower denoise (0.3-0.6) to make variations while keeping the same character.
Step 7: Post-Process & Export

Enhance final output:

  • Add Upscale Image (using Model) node
  • Select RealESRGAN_x4plus.pth
  • Output goes to: /data/comfyui/output/
  • Ready to import into Unity/Unreal/Godot!

🎨 Concept Sketch → Polished Game Asset

⭐⭐ Beginner-Friendly ⏱️ 10-20 minutes

Transform rough sketches into production-ready game assets

📦 Nodes & Models Required:

  • Load Image - Your sketch
  • Canny Edge Detector - Extract lines
  • ControlNet Canny - Line guidance
  • SDXL Checkpoint - Generation
  • Real-ESRGAN - Upscale
Step 1: Prepare Your Sketch

Draw your concept in any tool (Krita, Photoshop, even paper + photo):

💡 Best Practices:
  • Clear, bold lines work best
  • Simple shapes → detailed results
  • Resolution: 512x512 minimum, 1024x1024 ideal
  • Black lines on white background

Save sketch to: /data/comfyui/input/sketch.png

Step 2: Run Edge Detection

In ComfyUI:

  • Add Load Image → select your sketch
  • Add Canny Edge Preprocessor
  • Set low_threshold: 100, high_threshold: 200
  • Connect to ControlNet Apply (Canny)
⚠️ Threshold Tips: Lower thresholds = more edges detected. Start with 100/200, adjust if too many/few lines are captured.
Step 3: Set Up ControlNet Canny

Configure ControlNet node:

  • Model: control_v11p_sd15_canny.pth
  • Strength: 0.5-0.8 (balance between sketch and creativity)
Step 4: Describe Your Asset

Example Prompts:

  • Weapon: "legendary sword, ornate design, glowing blue runes, game asset, white background, professional render, highly detailed, 4k"
  • Character: "fantasy elf archer, detailed costume, game character design, Unreal Engine style, full body, white background"
  • Environment: "medieval castle exterior, stone walls, game environment asset, architectural detail, 8k textures"
Step 5: Generate & Iterate

KSampler settings:

  • Steps: 25-30
  • CFG: 7-8
  • Denoise: 1.0

✅ Expected Results:

Your sketch will transform into a polished, colored, detailed asset while maintaining the original structure and composition!

Step 6: Upscale for Production

Add Real-ESRGAN upscaler:

  • 512x512 → 2048x2048 (4x)
  • Perfect for Unity/Unreal texture import
  • Maintains sharp details

🎥 Still Image → Animated Video

⭐⭐ Intermediate ⏱️ 5-15 minutes per clip

Bring static images to life with SVD or create character animations with AnimateDiff

📦 Two Methods Available:

  • Method 1: SVD - Image-to-video (photorealistic)
  • Method 2: AnimateDiff - Character animation (stylized)
Step 1: Choose Your Approach

Use SVD (Stable Video Diffusion) when:

  • You have a high-quality still image
  • Want photorealistic motion
  • Creating cutscenes, product demos, environment flythroughs
  • Need 14-25 frame clips

Use AnimateDiff when:

  • Creating character animations
  • Want stylized motion (walk cycles, actions)
  • Making 2D game sprites or anime-style clips
  • Need more control over motion
Step 2: Method 1 (SVD Workflow)

For photorealistic image-to-video:

  • Add Load Image - your source image (1024x576 recommended)
  • Add SVD_img2vid_Conditioning node
  • Add VideoLinearCFGGuidance node
  • Add KSampler with SVD checkpoint
  • Settings: 20-25 steps, CFG 2.5, motion_bucket_id: 127
💡 SVD Tips:
  • Start image matters! Clear subject, uncluttered background
  • Lower motion_bucket_id = subtle motion (60-100)
  • Higher motion_bucket_id = dramatic motion (150-200)
  • Sweet spot: 127 for balanced movement
Step 3: Method 2 (AnimateDiff Workflow)

For character/stylized animation:

  • Use SD 1.5 checkpoint (AnimateDiff doesn't support SDXL yet)
  • Add AnimateDiff Loader - select mm_sd_v15_v2.ckpt
  • Write action prompt: "character walking, side view, smooth animation"
  • Set frame count: 16 frames
  • Generate 16-frame animation sequence
💡 AnimateDiff Tips:
  • Include motion words: "walking", "running", "waving"
  • Specify view: "side view", "front view", "3/4 view"
  • Can combine with LoRAs for consistent character style
  • Export as image sequence, compile in Kdenlive
Step 4: Post-Process in Kdenlive

Enhance your generated video:

```bash
# Open Kdenlive
kdenlive

# Import generated frames/video
# Add effects:
#   - Color correction
#   - Speed ramping (if needed)
#   - Transitions
#   - Audio
# Export at game-ready settings:
#   - Format: MP4 (H.264)
#   - Resolution: 1920x1080 or 1280x720
#   - Framerate: 30fps or 60fps
```
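As a scriptable alternative to the Kdenlive steps, an AnimateDiff image sequence can be compiled with ffmpeg at the same game-ready settings. A sketch assuming ffmpeg is installed; the frame pattern and paths are placeholders:

```python
import subprocess

def ffmpeg_cmd(pattern: str, out: str, fps: int = 30, size: str = "1280x720") -> list:
    """Build an ffmpeg command for game-ready H.264 output."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),   # 30fps, or 60 for smoother motion
        "-i", pattern,            # e.g. frame_%05d.png
        "-c:v", "libx264",        # H.264, as recommended above
        "-pix_fmt", "yuv420p",    # broad player compatibility
        "-s", size,
        out,
    ]

if __name__ == "__main__":
    # The frame pattern is an assumption: point it at your ComfyUI output
    subprocess.run(
        ffmpeg_cmd("/data/comfyui/output/frame_%05d.png", "animation.mp4"),
        check=True,
    )
```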

✅ Use Cases:

  • Game Cutscenes: SVD for dramatic camera movements
  • Character Intro: SVD on hero portrait
  • UI Animations: SVD on icons/buttons for subtle life
  • 2D Sprites: AnimateDiff for walk cycles, attacks
  • Trailer Shots: SVD on key art

🔄 Character Turnaround Sheet

⭐⭐⭐ Advanced ⏱️ 30-60 minutes

Generate consistent character views (front, side, back, 3/4) for game development

Step 1: Create Base Character

Generate your character's front view first:

Prompt: "character design sheet, front view, full body, [your character description], white background, reference sheet, professional, consistent lighting"
Step 2: Use OpenPose for Consistency

Extract pose from front view, modify for other angles:

  • Extract OpenPose from front view
  • Manually adjust skeleton for side view
  • Generate with same prompt + "side view"
  • Repeat for back, 3/4 views
💡 Consistency Trick: Use the same seed for all views to maintain character features. Only change the view angle in prompt.
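The fixed-seed trick can be scripted against an API-format workflow export: clone the graph once per view, changing only the view keyword. A sketch with illustrative node ids ("6" for the prompt, "3" for the KSampler); look up the real ids in your own export:

```python
import copy

VIEWS = ["front view", "side view", "back view", "3/4 view"]
FIXED_SEED = 123456  # any value works: what matters is reusing it per view

def make_view_workflows(base: dict, prompt_node: str, sampler_node: str) -> list:
    """Clone the workflow per view, changing only the view keyword and
    pinning the seed so character features stay consistent."""
    out = []
    for view in VIEWS:
        wf = copy.deepcopy(base)
        text = wf[prompt_node]["inputs"]["text"]
        wf[prompt_node]["inputs"]["text"] = text.replace("front view", view)
        wf[sampler_node]["inputs"]["seed"] = FIXED_SEED
        out.append(wf)
    return out
```

Each resulting workflow can then be queued in ComfyUI one after another.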
Step 3: Compile Turnaround Sheet

Combine all views into single reference sheet in GIMP/Krita

🔍 Production Quality Upscaling

⭐ Easy ⏱️ 2-5 minutes

Enhance any generated image to production resolution

Step 1: Add Upscale Node

After any generation:

  • Add Upscale Image (using Model)
  • Select RealESRGAN_x4plus.pth
  • 512x512 → 2048x2048
  • 1024x1024 → 4096x4096
💡 When to Upscale: Always upscale as final step before exporting to game engine!