Search Program
Organizations
Contributors
Presentations
Birds of a Feather
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Graphics Systems Architecture
Hardware
Image Processing
Rendering
Full Conference
Experience
DescriptionAs SIGGRAPH attendees grapple with aging infrastructure (much of which predates the pandemic), they face a critical technological inflection point: AI and GPU rendering are advancing rapidly. Should they invest solely in GPU technology, stick with more flexible CPU farms, or strike a hybrid approach? Whether they’re buying new infrastructure or investing in cloud resources, the questions are equally valid. Our session will help attendees weigh the pros and cons of each approach and the economic considerations, understand where we are on the GPU adoption curve, and hear insights from those who have already made the migration.
Production Session
Arts & Design
Gaming & Interactive
Production & Animation
Not Livestreamed
Not Recorded
Animation
Games
Real-Time
Full Conference
Virtual Access
Wednesday
DescriptionThis talk breaks down the animation process in South of Midnight across gameplay and cutscenes, and shows how a mix of art and tech brings its stop-motion, Southern Gothic world and characters to life.
Poster
Full Conference
Experience
DescriptionWe propose a grid optimization method with regional control that uses attention mechanisms to prioritize visually significant areas and employs an attention flow mechanism to optimize resource allocation for structural consistency, thereby enhancing mesh reconstruction precision and capturing finer local geometry details.
Poster
Full Conference
Experience
DescriptionDynamic skinning is a new method that extends upon normal linear blend skinning and allows for time delay effects with a general framework allowing for oscillatory secondary motion and extension to linear blend skinning itself with great artistic control, all of which is compatible with existing standard rigged characters.
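The poster extends linear blend skinning, so a minimal reference implementation of plain LBS helps fix the idea; the dynamic extension described above would evaluate the bone transforms with a time delay to produce secondary motion. This sketch is illustrative only (names and array shapes are assumptions, not the authors' code):

```python
import numpy as np

def linear_blend_skin(verts, weights, rotations, translations):
    """Standard linear blend skinning: each vertex is a weighted sum of
    its position transformed by every bone.
    verts: (V, 3), weights: (V, B), rotations: (B, 3, 3), translations: (B, 3)."""
    # Transform every vertex by every bone: result is (V, B, 3).
    transformed = np.einsum('bij,vj->vbi', rotations, verts) + translations[None, :, :]
    # Blend per-bone results with the skinning weights: (V, 3).
    return np.einsum('vb,vbi->vi', weights, transformed)
```

A dynamic variant in the spirit of the poster would feed time-delayed or oscillatory `rotations`/`translations` into the same blend, which is why it stays compatible with standard rigged characters.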
Poster
Full Conference
Experience
DescriptionAnts exhibit a unique ability to self-assemble into animate, living structures that display both fluid- and solid-like properties. We present an interactive constraint-based approach for simulating the collective dynamics of ant swarms in various 3D settings. Our method closely imitates real-world behaviors with compelling physical realism.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
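The abstract does not give the exact constraint formulation, but interactive constraint-based simulation typically builds on position-based dynamics. Below is a hedged sketch of the core PBD distance projection such a swarm method might use between linked ants (function name and equal-mass assumption are illustrative):

```python
import numpy as np

def project_distance(p1, p2, rest, stiffness=1.0):
    """Project two equal-mass particles onto the distance constraint
    |p1 - p2| = rest, the basic building block of position-based dynamics."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:          # coincident points: nothing sensible to do
        return p1, p2
    # Move each particle half the violation along the constraint direction.
    corr = stiffness * 0.5 * (dist - rest) * d / dist
    return p1 + corr, p2 - corr
```

A full swarm solver would iterate such projections over all ant-to-ant links each frame, which is what gives the aggregate its mixed fluid/solid response.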
Poster
Full Conference
Experience
DescriptionThis project explores the role of the designer in digital fabrication workflows as digitization leads to higher levels of design automation. As digital technologies are adopted to streamline design to manufacturing workflows, elements of the creative process can become standardized to improve production efficiency at the cost of designer autonomy and product customization. In order to ensure designers’ agency and increase product variation, the Carrara project presents a collaborative tool utilizing agent-based modeling (ABM) to represent designers, fabrication machines, and algorithms as active co-participants in the design process. This co-participatory workflow enables a generative, scalable product line that takes advantage of digital efficiencies while providing the designer with autonomy and control in the creative process.
Poster
Full Conference
Experience
Description"Digitizing Devotion" utilizes advanced oblique photography and AI to create immersive virtual reconstructions of sacred spaces, preserving traditional worship practices for the global diaspora while ensuring cultural continuity across generations and geographical boundaries.
Poster
Full Conference
Experience
DescriptionThis paper introduces Dust in Time, an embodied and tangible interactive installation that transforms physical gestures into audio-visual responses through hourglasses and projected particles, offering a reflective exploration of time and human presence.
Poster
Full Conference
Experience
DescriptionThe animated short film Sensual explores a novel workflow for hand-painted watercolour animation, blending traditional artistic methods with AI-based frame interpolation techniques. By combining compositing with the Real-Time Intermediate Flow Estimation (RIFE) image interpolation network, we significantly reduced production time while maintaining the unique hand-painted aesthetic.
Poster
Full Conference
Experience
DescriptionFoliager is a generative AI-powered pipeline that transforms natural language into biologically plausible 3D forest ecosystems, combining ecological simulation with procedural graphics to support scientific visualization, storytelling, and environmental design.
Poster
Full Conference
Experience
DescriptionExtending Giada Peterle’s concept of auto-cartography, this paper explores Tasmania’s complex and dynamic island identity through an interactive installation powered by a customised generative AI model. By collecting human experience as a training dataset, it reimagines mapping as an embodied, affective process that engages participants to reflect on their relations to place.
Poster
Full Conference
Experience
DescriptionIn this study, we propose an experience inspired by the Anywhere Door concept, in which users transition between multiple life-sized projected virtual spaces by opening, closing, and passing through a physical door.
Poster
Full Conference
Experience
DescriptionA complete traditional puppetry performance requires diverse control interfaces to support a broad range of manipulation techniques. To address this, our work integrates three distinct immersive puppeteering experiences (VR-HMD, MR-HMD, and CAVE systems) to enable asymmetric interaction, opening up new possibilities for the future of digital puppetry theater.
Poster
Full Conference
Experience
DescriptionThis study presents a method for designing balancing toys. Through interactive modeling techniques that optimize both shape and mass distribution, this research achieves the challenging feat of locating the center of mass outside the body. The designed balancing toys were successfully fabricated using an FDM 3D printer.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe present DreamCraft, a VR system that demonstrates the potential of combining panorama with 3D generation techniques to provide users with an intuitive and feature-rich platform for creating interactive 3D scenes. Our pilot study shows that even users with no prior experience can use the system effectively.
Poster
Full Conference
Experience
DescriptionThis paper presents EARSIM, a new approach to auditory localization training through Virtual Reality, utilizing a configurable multi-sensory cue system to enable adaptive and personalized difficulty levels. The proposed system addresses the limitations of conventional localization techniques and demonstrates potential as a flexible platform for future clinical applications in auditory rehabilitation.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe present a marker-based VR system that simulates real-time water surface flow by tracking ArUco markers on physical water. The system generates FlowMaps from tracked motion to drive fluid effects in Unity. A circular pool with water-jet units enables controllable flow, enabling sensor-driven, immersive fluid simulation.
Poster
Full Conference
Experience
DescriptionThis paper presents two novel teleportation methods for VR environments that address limitations of conventional parabola-based approaches when navigating varying heights. The SphereBackcast and Penetration methods utilize straight-line specification for intuitive movement to elevated locations. Experiments with 22 participants showed our methods significantly outperformed parabola-based teleportation for height differences above 2m, while maintaining comparable performance on flat terrain. NASA-TLX and SUS evaluations confirmed improved usability and reduced cognitive load, indicating these methods can be readily integrated into existing VR applications.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionIn this poster we introduce the INT-ACT project, which aims to investigate the use of immersive mobile eXtended Reality (XR) environments for presenting the emotional, experiential and environmental dimensions of Intangible Cultural Heritage (ICH) associated with tangible cultural heritage sites. We also present a museum exhibition that we have developed as part of INT-ACT that focuses on the ICH related to a prehistoric megalithic site in the Alentejo region of Portugal. Visitors to the exhibition can interact with the physical environment using a mobile immersive XR app to access different audio-visual media content presenting the ICH of the selected site.
Poster
Full Conference
Experience
DescriptionCheerleading stunts are group gymnastics performed by multiple people. As the skills involved become more challenging, it is necessary to devise better practice methods. Thus, in this paper, we propose a pretraining support system for cheerleading stunts using Virtual Reality (VR) technology. This system allows the users to experience successfully performing a stunt in the virtual space by adopting the viewpoints of the cheerleaders performing various types of stunts. Our system has the potential to meaningfully augment the established training method of previsualization of stunts.
Poster
Full Conference
Experience
DescriptionThis paper introduces SugART, an MR project that enables users to learn and recreate traditional sugar painting at home. Combining hand tracking, virtual guidance, and real-time feedback, our project supports creative expression and cultural education, lowering barriers to participation in intangible cultural heritage through accessible and interactive digital experiences.
Poster
Full Conference
Experience
DescriptionThe Gesture Lives On transforms traditional Taiwanese glove puppetry into an immersive digital performance through real-time VR gesture tracking and virtual puppet co-performance, offering a novel model for integrating cultural heritage with contemporary performance technologies.
Poster
Full Conference
Experience
DescriptionThis study explores the effective range of a weight illusion induced by AR visual effects. Results show that AR visual effects on the arm, creating a “strong” impression, can make 100–500 g weights feel lighter when lifted with the visually augmented arm.
Poster
Full Conference
Experience
DescriptionYou Can Grow Here is an immersive virtual reality experience that combines theatrical storytelling, improvisational design methods, and evidence-based wellness techniques to guide users through emotional regulation and anxiety relief—successfully exhibited in UIC’s CAVE2 and aligned with the United Nations Sustainable Development Goal of Good Health and Well-Being.
Poster
Full Conference
Experience
DescriptionWe propose an interactive camerawork authoring system for free-viewpoint 3D dance contents that synthesizes and edits the camerawork by retrieving optimal sequences from a database based on user queries of music and pose similarity, and demonstrate its effectiveness quantitatively and qualitatively.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThe uNEEDXR™ system achieves 60,000-nit brightness in micro-OLEDs on silicon, featuring high pixel density, low power consumption, high contrast ratio, high color saturation, and tunable energy distribution (including viewing angle, wavelength, and bandwidth).
Poster
Full Conference
Experience
DescriptionWe propose a large-étendue direct-view holographic display using dynamic optical steering with a high-pixel-resolution amplitude-only SLM.
The system expands the eye box in both lateral and depth directions by translating two lenses.
We further extend SGD-based hologram optimization to support dual light sources and amplitude-only SLM, achieving stereoscopic image delivery.
Poster
Full Conference
Experience
DescriptionThis study proposes a novel Maxwellian optical system that combines a transmissive mirror device (TMD) with spherical multi-pinholes. Its effectiveness was verified through 2D and 3D simulations, demonstrating a significantly wider viewing angle than that of conventional systems.
Poster
Full Conference
Experience
DescriptionWe propose a naked-eye stereoscopic display with an ultra-wide viewing zone by applying the display principle of general LCDs. By replacing the polarizer of an LCD with a reflective polarizer and arranging them three-dimensionally, this technology refracts light rays freely and enables an expansion of the viewing zone.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionAn "infinity mirror" is an optical novelty that uses facing mirrors to create the appearance of a tunnel of copies of a scene receding into the distance. This poster shows how to use view-dependent appearance to make all copies of the scene appear non-reflected, allowing for "speed tunnel" effects.
Poster
Full Conference
Experience
DescriptionA real-time algorithm for driving multispectral LED lights in a spherical lighting reproduction stage to achieve accurate color rendition for a dynamic scene. This technique drives several thousand multispectral LED lights at video framerate by pre-computing a LUT of the NNLS solutions across the full range of input RGB values.
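The abstract outlines the whole algorithm: precompute a lookup table (LUT) of non-negative least squares (NNLS) solutions over the RGB input range, then index it at video rate instead of solving per frame. A small illustrative sketch follows; the 3x5 spectral matrix and the coarse grid resolution are invented placeholders, not the stage's actual calibration:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3x5 matrix mapping 5 LED channel weights to an RGB response
# (rows: R, G, B contribution of each LED spectrum; numbers are illustrative).
A = np.array([
    [0.9, 0.1, 0.0, 0.3, 0.2],
    [0.1, 0.8, 0.2, 0.4, 0.3],
    [0.0, 0.1, 0.9, 0.1, 0.5],
])

def build_lut(levels=8):
    """Precompute NNLS LED weights over a coarse RGB grid; at runtime each
    pixel's RGB indexes the table instead of running an NNLS solve."""
    grid = np.linspace(0.0, 1.0, levels)
    lut = np.zeros((levels, levels, levels, A.shape[1]))
    for i, r in enumerate(grid):
        for j, g in enumerate(grid):
            for k, b in enumerate(grid):
                lut[i, j, k], _ = nnls(A, np.array([r, g, b]))
    return lut

def lookup(lut, rgb, levels=8):
    """Nearest-neighbour LUT fetch; a production system would interpolate."""
    idx = np.clip(np.round(np.asarray(rgb) * (levels - 1)).astype(int), 0, levels - 1)
    return lut[tuple(idx)]
```

The real system would use the stage's measured LED spectra for `A`, thousands of lights, and a finer (likely interpolated) table, but the precompute-then-index structure is the same.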
Poster
Full Conference
Experience
DescriptionWe propose a smartphone-based wide field-of-view HMD using inexpensive mirrors and lenticular lenses. Lenticular lenses on both display edges create multi-view images, which are then redirected by strategically placed mirrors to expand the peripheral field of view, effectively enlarging the display area without increasing the screen size.
Poster
Full Conference
Experience
DescriptionA novel method for automatic colorization of anime line drawings achieves improved accuracy over state-of-the-art segment matching-based approaches by leveraging semantic segmentation and color shuffling processes without relying on flow estimation, effectively addressing challenges posed by large motion gaps and small regions.
Poster
Full Conference
Experience
DescriptionWe evaluate four models using INR and VAE structures for compressing phase-only holograms. Our findings show that the pretrained VAE struggles with this task, while SIREN achieves 40% compression with high-quality 3D images (PSNR = 34.54 dB), highlighting the effectiveness of INRs and VAE limitations.
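For readers unfamiliar with SIREN, an implicit neural representation (INR) with sine activations looks like the following untrained forward pass; compressing a phase-only hologram would mean fitting such a network to the phase map and storing only the weights. Layer widths and scales below are toy assumptions, not the evaluated models:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(x, w, b, omega0=30.0):
    """One SIREN layer: affine map followed by a frequency-scaled sine."""
    return np.sin(omega0 * (x @ w + b))

# Tiny INR: map 1-D coordinates to a value through two sine layers + linear out.
w1 = rng.uniform(-1.0, 1.0, (1, 16))                     # first-layer init
b1 = rng.uniform(-1.0, 1.0, 16)
w2 = rng.uniform(-np.sqrt(6 / 16), np.sqrt(6 / 16), (16, 16)) / 30.0
b2 = rng.uniform(-1.0, 1.0, 16)
w3 = rng.normal(0.0, 0.1, (16, 1))                       # linear output head

def inr(coords):
    """Evaluate the implicit representation at coordinates in [-1, 1]."""
    h = siren_layer(coords, w1, b1)
    h = siren_layer(h, w2, b2)
    return h @ w3
```

The compression ratio reported in the abstract comes from the weight count being far smaller than the hologram's pixel count once the network is trained to reproduce the phase values.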
Poster
Full Conference
Experience
DescriptionWe present the first open-source system for automatic interpretation of Ancient Egyptian texts, combining OCR, transliteration, and translation into a unified pipeline that supports diverse writing styles and improves accessibility for learners and researchers.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Generative AI
Geometry
Modeling
Pipeline Tools and Work
Full Conference
Experience
Descriptionbenjamin.anderson@sfasu.edu
This discussion is for educators facilitating coursework in a rapidly changing digital world, where software updates, generative artificial intelligence (AI), and online resources are shaping how learning outcomes are defined for students in higher education.
Ensuring that students in higher education are equipped with sound methods, procedures, and technical understanding is essential for efficient and creative work. This talk presents a pedagogical approach to sequential and non-sequential curricular activities for new and growing 3D animation programs.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Industry Insight
Virtual Reality
Full Conference
Experience
DescriptionJoin The 3D Artist Community for a panel discussion with 3D artists who’ve successfully transitioned from entertainment into industries like fashion, architecture, product design, and more. Hear firsthand how they adapted their skills, what challenges they faced, and what surprised them most along the way. Whether you're curious about switching industries or just exploring new possibilities, this is your chance to ask questions, get advice, and connect with artists who’ve been there. Learn how the world of 3D is expanding beyond film and games—and where you might fit in.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionGiven a 3D object representing the source content and a reference style image, our method performs 3D stylization with a large pre-trained reconstruction model. This is achieved in a zero-shot manner, with no training or test time optimization required, while delivering superior visual fidelity and efficiency compared to existing approaches.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description3D-Fixup enables realistic 3D-aware photo editing by leveraging 3D priors and a novel data pipeline that extracts training pairs from real-world videos. Its feed-forward architecture supports efficient, high-quality edits involving complex 3D transformations while preserving object identity, outperforming prior methods in both edit accuracy and user control.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present 3DGH, a generative model that creates realistic 3D human heads with composable hair and face components. By modeling both the separation and correlation between hair and face in a generative paradigm, it enables high-quality, full-head synthesis and flexible 3D hairstyle editing with strong visual consistency and realism.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper introduces a nearly second-order convergent training algorithm for 3D Gaussian Splatting that exploits independent kernel attributes and sparse coupling across images. By constructing and solving small Newton systems for parameter groups, it trains roughly an order of magnitude faster while maintaining or exceeding SGD-based reconstruction quality.
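The key idea, solving a small Newton system per parameter group instead of taking SGD steps, can be illustrated on a toy quadratic loss, where one solve of the tiny system reaches the group's optimum exactly. This is a conceptual sketch under that assumption, not the paper's algorithm:

```python
import numpy as np

def newton_step_group(A, b, x):
    """One Newton step for the quadratic loss f(x) = 0.5 * ||A x - b||^2
    of a single small parameter group (e.g. one kernel's attributes).
    For a quadratic, x - H^{-1} g is the exact minimizer."""
    g = A.T @ (A @ x - b)   # gradient of the group's loss
    H = A.T @ A             # Hessian: small and cheap to factor per group
    return x - np.linalg.solve(H, g)
```

The paper's setting is not globally quadratic, which is why it reports "nearly" second-order convergence: the small per-group systems are rebuilt and re-solved as the reconstruction evolves.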
Poster
Full Conference
Experience
DescriptionWe introduce a novel user interface, Manu-grid, for estimating the projection function corresponding to the drawing method used in a background image, in order to obtain a geometrically consistent composite of 3D model images within the background scene.
Poster
Full Conference
Experience
DescriptionThis study proposes a region-wise confidence estimation method for anime-style line drawing colorization. By comparing local patches in the colorized image with training images using normalized cross-correlation, the method highlights uncertain regions. It improves usability by aiding artists in identifying colorization errors efficiently and reliably.
Poster
Full Conference
Experience
DescriptionA 30-parameter, physics-based model transforms digital images into authentically scanned film colour. Trained on a single roll of colour-positive film, it matches LUT accuracy without artefacts and exposes interpretable parameters, offering filmmakers a data-light and production-ready solution to revive and preserve classic film aesthetics.
Poster
Full Conference
Experience
DescriptionWe introduce a modular, open-source pipeline that combines multiple custom-trained LoRA and ControlNet models to disentangle style and identity, enabling fast, visually and narratively consistent AI-generated short films, validated through two award-winning multi-scene productions.
Poster
Full Conference
Experience
DescriptionWe present a compact, handheld holographic video camera that captures full-color holograms in real time under natural lighting, making laser-free holography possible. By integrating advanced optical components and AI-driven super-resolution, it enables high-quality holographic content capture, paving the way for portable, next-generation immersive media and real-world applications of holography.
Poster
Full Conference
Experience
DescriptionTo generate previews with near-final render quality in VFX and enable faster iteration, we propose G-FED, G-Buffer Guided Frame Extrapolation in Video Diffusion Models. G-FED denoises 1spp frames, guided by G-buffer data, to infill masked forward projections and generate high-quality images that are spatially and temporally coherent.
Poster
Full Conference
Experience
DescriptionWe propose a geometry- and illumination-aware 2D-graphic compositing pipeline. We use meshes generated by off-the-shelf monocular depth estimation methods to warp the 2D graphic according to the surface geometry. Using intrinsic decomposition, we composite the warped graphic onto the albedo and reconstruct the final result by combining all intrinsic components.
Poster
Full Conference
Experience
DescriptionWe introduce the novel task of predicting flat colors for unintended small regions left unpainted by flood-fill operations—common in anime-style illustrations—and present a U-Net-based method that achieves 62.5% exact-match accuracy on professional data, outperforming naïve baselines and establishing a promising foundation for supporting anime-style colorization workflows.
Poster
Full Conference
Experience
DescriptionQRBTF is an AI QR code generator trained with ControlNet, which can generate scannable QR codes hidden within images based on prompt input.
Poster
Full Conference
Experience
DescriptionThis work presents a pipeline that converts rasterized graphic design posters into multi-layered, editable assets. It decomposes elements, addresses layer ordering using a novel Z-index strategy, and shows high accuracy through evaluations of over 24,000 posters. User feedback confirms its ability to accurately reconstruct posters with excellent fidelity.
Poster
Full Conference
Experience
DescriptionSAWNA tackles layout-sensitive text-to-image generation by treating user-specified empty regions as first-class constraints. Bounding-box masks are blurred and injected as mean-shifted, inert noise into the frozen Stable Diffusion latent, suppressing synthesis inside reserved areas while preserving diversity and quality elsewhere. This simple training-free modification supports workflows that require precise layout fidelity, including advertising (e.g., space for logos or headlines), UI design (e.g., button placement), and animation pre-production (e.g., speech bubbles, subtitles, or motion overlays).
Experiments show that SAWNA outperforms layout-aware baselines like GLIGEN and in-painting pipelines, both of which struggle to maintain truly empty regions without introducing artifacts or incoherence. In contrast, SAWNA yields clean, editable space while producing semantically rich images across the remaining canvas.
This makes it especially suitable for design-critical applications where reserved regions are integral to downstream compositing or storytelling.
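The mechanism described above (blurring bounding-box masks and injecting mean-shifted, inert noise into the initial latent) can be sketched in a few lines of NumPy. The blur kernel, shift value, and blend rule here are illustrative assumptions, not SAWNA's exact implementation:

```python
import numpy as np

def box_blur(mask, k=5):
    """Separable box blur to soften the reserved-region mask edges."""
    kern = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, mask)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, blurred)

def inject_inert_noise(latent, boxes, shift=-1.0, k=5):
    """Blend mean-shifted noise into the initial latent inside blurred
    bounding-box masks, biasing the diffusion model to keep those
    regions empty while leaving the rest of the canvas untouched."""
    mask = np.zeros(latent.shape[:2])
    for (y0, y1, x0, x1) in boxes:
        mask[y0:y1, x0:x1] = 1.0
    m = box_blur(mask, k)[..., None]
    inert = np.random.default_rng(0).normal(shift, 0.1, latent.shape)
    return (1 - m) * latent + m * inert
```

Because this edits only the starting noise, it is training-free and slots in front of a frozen Stable Diffusion sampler, which is the property the poster emphasizes.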
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present 4D Gaussian Video (4DGV) for high-quality, low-storage volumetric video reconstruction and real-time streaming. Our method effectively handles complex motion and enables effective motion compression, achieving superior performance in both reconstruction quality and storage efficiency.
Poster
Full Conference
Experience
DescriptionMinecraft to 3D automatically converts any Minecraft world into an engine-ready, fully textured 3D scene—recognising default structures, smoothing voxel terrain, and swapping structures for high-quality models—so educators, indie developers, and artists can transform their in-game prototypes into production environments in minutes.
Poster
Full Conference
Experience
DescriptionWe propose a finetuned conditional latent diffusion model for generating a motion field from user-provided sketches; the field is subsequently integrated into a latent video diffusion model via a motion adapter to precisely control the fluid movement.
Poster
Full Conference
Experience
DescriptionWe present a pipeline for designing and detecting subtle code-conveying patterns that can be printed on transparent sticker paper and applied to real-world surfaces, rendering the modifications imperceptible to the human eye yet robustly detectable by our model, with specific emphasis placed on allowing for human error in sticker placement.
Poster
Full Conference
Experience
DescriptionStructInbet is a skeleton-based inbetweening system that achieves controllable, structure-aware interpolation with improved pose clarity and motion alignment to user intent, surpassing prior point-based methods in reducing ambiguity.
Poster
Full Conference
Experience
DescriptionWe introduce an architecture-agnostic super-resolution framework that uses human visual sensitivity to allocate computational resources efficiently, delivering substantial reductions in computational demand without perceptible quality loss, as validated by user studies—offering significant advantages for applications like VR and AR.
Poster
Full Conference
Experience
DescriptionOur training-free method enables photorealistic facade editing by combining hierarchical procedural structure control with diffusion models. Starting from a facade image, we reconstruct, edit, and guide generation to produce high-fidelity, photorealistic variations. The method ensures structural consistency and appearance preservation, demonstrating the power of symbolic modeling for controllable image synthesis.
Poster
55. Train Once, Generate Anywhere: Discretization Agnostic Neural Cellular Automata Using SPH Method
9:00am - 5:30pm PDT Sunday, 10 August 2025, West Building, Level 2, Outside Rooms 211-214
Full Conference
Experience
DescriptionWe introduce SPH-NCA, a discretization-agnostic neural cellular automaton that uses a differentiable SPH method for perception and a stable training scheme, allowing image and texture synthesis on any grid, resolution, or 3D surface while being trained on a fixed-resolution 2D image.
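The "perception" step that makes the automaton discretization-agnostic can be sketched as an SPH-weighted neighborhood average: because the kernel depends only on pairwise distances, the same rule applies to any grid or point layout. The kernel choice and support radius below are assumptions for illustration, not the poster's exact scheme:

```python
import numpy as np

def poly6(r, h):
    """Poly6 SPH kernel (Mueller et al. 2003); zero beyond support radius h."""
    w = np.zeros_like(r)
    inside = r < h
    w[inside] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[inside]**2) ** 3
    return w

def sph_perceive(positions, states, h=0.3):
    """Replace each cell state with an SPH-weighted average of its
    neighbours; the rule never references grid axes, so it works on
    irregular point sets and surfaces alike."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    w = poly6(d, h)
    w /= w.sum(axis=1, keepdims=True)   # normalize so constants are preserved
    return w @ states
```

In a full NCA, this perception output would feed a small learned update network; training on one 2D resolution then transfers because nothing in the rule is tied to the sampling.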
Poster
Full Conference
Experience
DescriptionWe propose a two-stage sketch-guided smoke illustration generation framework using stream function. The input sketch is converted into the stream function through a latent diffusion model, which subsequently drives the velocity field generation. The velocity field serves as a guidance force to drive the smoke simulation.
Poster
Full Conference
Experience
DescriptionWe propose a new game engine module, Capsule, that lets multiple players efficiently share one engine instance. We implemented Capsule in O3DE in an application-agnostic way. Our experience with four applications shows that Capsule increases datacenter resource utilization by accommodating up to 2.25x more players without degrading the player gaming experience.
Poster
Full Conference
Experience
DescriptionThis study examines distance management in combat sports training with haptic feedback. Results show that haptic feedback reduced punch distances and movement, while no significant difference was found in step count or average distance to the opponent. Haptic feedback aids better distance management with less movement.
Poster
Full Conference
Experience
DescriptionThis study presents a gaze entropy–based framework to identify cognitive failures and predict accident risk before a Take-Over Request (TOR) in conditionally autonomous driving. Using a Random Forest model, it enables early risk detection and offers practical insights for driver monitoring.
Poster
Full Conference
Experience
DescriptionWe achieve physically plausible 3D fragment reassembly by framing it as path-verified spectral packing, using FFT correlation and alignment-maximizing ICP refinement against a known target boundary for high-fidelity, collision-free reconstruction.
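Scoring candidate placements with a single FFT cross-correlation is the classical trick the abstract names: the correlation peak between the free-space mask and the fragment mask identifies the shift with maximal overlap, and a full-overlap score means a collision-free fit. A minimal 2D sketch (binary masks, circular shifts), illustrative rather than the authors' 3D pipeline:

```python
import numpy as np

def best_placement(free, frag):
    """Score every circular shift of a fragment mask against the free-space
    mask with one FFT cross-correlation. score[s] = sum_u free[u+s]*frag[u],
    so the argmax is the placement with maximal overlap; a value equal to
    frag.sum() means the fragment fits entirely in free space."""
    score = np.fft.ifft2(np.fft.fft2(free) * np.conj(np.fft.fft2(frag))).real
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return (dy, dx), score[dy, dx]
```

The paper's method would run such spectral scoring in 3D and then refine the winning pose with ICP against the known target boundary; the FFT step is what makes exhaustively scoring all translations affordable.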
Poster
Full Conference
Experience
DescriptionThis study demonstrates that proposed interactive posters and trailers breaking the "Fourth Wall" significantly boost movie anticipation and viewing intentions, offering an effective promotional strategy for mobile-based video streaming platforms.
Poster
Full Conference
Experience
DescriptionThis paper proposes PAAP (Performer-Aware Automatic Panning System), the first system to automatically track performer(s) and generate spatial audio panning data integrated with a Digital Audio Workstation (DAW). Real-time processing of PAAP via Open Sound Control (OSC) confirms its readiness for deployment in professional music production.
Poster
Full Conference
Experience
DescriptionThis study presents a multimodal framework integrating human factors (workload, situation awareness), biometrics (heart rate variability, eye-tracking), and spatial complexity to predict Level 2 autonomous driving accidents, achieving 73.7% accuracy via logistic regression, with age and workload as key predictors and elevated cognitive load in complex environments informing real-time adaptive safety interventions.
Poster
Full Conference
Experience
DescriptionSEE-2-SOUND is a zero-shot approach that generates spatial audio for visual content. It decomposes the task into four steps: identifying visual regions of interest, locating them in 3D space, generating mono-audio for each, and integrating them into spatial audio. Our approach can generate realistic spatial-audio from images or videos.
Poster
64. Skylight: Real-Time Projection Mapping for Surgical Navigation Leveraging Skin-Adhered Fiducials
9:00am - 5:30pm PDT Sunday, 10 August 2025, West Building, Level 2, Outside Rooms 211-214
Full Conference
Experience
DescriptionSkylight is a surgical navigation system that uses skin-mounted fiducials and real-time projection mapping to display high-accuracy, CT-registered guidance directly onto the patient’s body -- eliminating the need for bone-mounted trackers and enhancing surgical precision and usability.
Poster
Full Conference
Experience
DescriptionStroke Imprint is a knitted wearable that simulates affective strokes to comfort young women experiencing anxiety, using pressure sensing and SMA-based actuation. Paired with a digital interface, the glove allows users to record personalized tactile sensations. Through user interviews, design iterations, and user testing, the study demonstrates its therapeutic potential as an anxiety-tracking wearable within a closed biofeedback loop.
Poster
Full Conference
Experience
DescriptionPlay with Earth introduces a novel project addressing the preservation and innovation of ICH, focusing on traditional mud toys from China's Yellow River. Based on comprehensive documentation comprising 15,686 photographs of mud toys and interviews with inheritors, our project delivers an interactive platform combining traditional craftsmanship with AI-assisted creativity.
Poster
Full Conference
Experience
DescriptionRay tracing is a widely used technique for modeling optical systems, involving sequential surface-by-surface computations that can be computationally intensive. We propose Ray2Ray, a novel method that leverages implicit neural representations to model ray tracing through optical systems with greater efficiency and performance, eliminating the need for surface-by-surface computations via a single-pass, end-to-end model. Ray2Ray learns the mapping between rays emitted from a given source and their corresponding rays after passing through a given optical system in a physically accurate manner.
Poster
Full Conference
Experience
DescriptionWe present a comparative analysis of skin tone rendering in MetaHuman avatars using real-world and reference-based color inputs, revealing systematic differences across the Monk Skin Tone scale and highlighting key limitations in current real-time rendering pipelines for darker and intermediate skin tones.
Poster
Full Conference
Experience
DescriptionGBake introduces a raytracing-based technique for creating reflection probes in Gaussian-splatted environments that overcomes inherent EWA splatting errors at cubemap seams, enabling realistic integration of traditional mesh objects into scenes comprised of 3D Gaussians.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe propose the first framework to enable spatial adaptivity with the closest point method, which provides a more efficient spatial discretization suitable for recent applications of the closest point method in computer graphics, such as fluid simulation [Morgenroth et al. 2020] and geometry processing [King et al. 2024].
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThis paper presents a high-fidelity steganography method for 3D Gaussian splatting that requires no additional training. We propose a bit-level embedding technique that leverages the lower bits of the 32-bit floating-point representation. We further introduce an embedding strategy that considers each Gaussian's opacity and incorporates RSA encryption.
Poster
Full Conference
Experience
DescriptionHyperParamBRDF uses hypernetworks conditioned on physical parameters to predict nanostructure BRDFs with high fidelity, accelerating appearance evaluation by orders of magnitude compared to simulation and enabling real-time exploration.
Poster
Full Conference
Experience
DescriptionWe develop an object insertion pipeline and interface that enables iterative editing of illumination-aware composite images. Our pipeline leverages off-the-shelf computer vision methods and differentiable rendering to reconstruct a 3D representation of a given scene. Users can add 3D objects and render them with physically accurate lighting effects.
Poster
Full Conference
Experience
DescriptionWe present a high-performance, VFX-inspired workflow that transforms unstructured CFD data into scalable, high-fidelity visualizations using parallel voxelization, OpenVDB export, and CyclesPhi rendering, supporting both batch processing and interactive frame exploration for scientific analysis and visual communication.
Poster
Full Conference
Experience
DescriptionSurfelPlus introduces a real-time global illumination renderer optimized for low-end hardware, achieving dynamic indirect lighting through unified surfel generation, adaptive surfel management, and advanced spatial-temporal filtering, significantly improving performance and visual fidelity without expensive precomputations.
Poster
Full Conference
Experience
DescriptionOur GPU-resident pipeline based on Unreal Engine 5 unifies scene generation, rendering, and processing entirely on the GPU, eliminating CPU–GPU transfers and disk I/O to achieve near-constant per-sample latency, up to 12× speedups, and sustained high-throughput training with effectively infinite synthetic data.
Poster
Full Conference
Experience
DescriptionThis paper proposes a polarization path tracing method that incorporates multiple microfacet reflections and introduces an approximation potentially enabling computationally efficient rendering.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThis paper introduces innovative methods for generating gestures, facial expressions, and exaggerated emotional expressions for non-photorealistic characters using comics-extracted expression data and dialogue-specific semantic gestures for conversational AI, achieving significantly enhanced user satisfaction compared to a state-of-the-art photorealistic method.
Poster
Full Conference
Experience
DescriptionSpeech-driven 3D face animation from disentangled phoneme and prosody features, enabling fine-grained and intuitive control over visemes and expressions: a convolutional autoencoder learns a relative motion prior, and a transformer maps these interpretable audio features into latent deformations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionGeometry processing often requires the solution of PDEs with boundary conditions on the manifold’s interior. However, input manifolds can take many forms, each requiring specialized discretizations. Instead, we develop a unified framework for general manifold representations by extending the closest point method to handle interior boundary conditions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionA novel deep learning system enhances realistic virtual oculoplastic surgery simulations.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Augmented Reality
Education
Games
Haptics
Hardware
Robotics
Virtual Reality
Full Conference
Experience
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a divide-and-conquer approach for orienting large-scale, non-watertight point clouds. The scene is first segmented into blocks, and normal orientations are estimated independently within each block. These local orientations are then globally unified through a graph-based formulation, solved via 0-1 integer optimization. Experiments demonstrate the robustness of our method.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces a novel median filtering algorithm, using hierarchical tiling to reduce redundant computations and achieve better complexity than prior sorting-based methods. The paper discusses two implementations, for both small and larger kernel sizes, that outperform the state of the art by up to 5x on modern GPUs.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a material model for diffuse fluorescence that is compatible with RGB and spectral rendering. This model builds on an analytical, integrable Gaussian-based model of the spectral reradiation that is efficient enough to permit real-time rendering and editing of such appearance.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis work presents a statistical wave-scattering model for surfaces with nanoscale mixtures in geometry and material. It predicts average appearance (BRDF) and draws realistic speckles directly from surface statistics, without explicit definitions. The proposed model demonstrates various applications including corrosion (natural), particle deposition (man-made) and height-correlated mixture (artistic).
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present an innovative hybrid near-wall model for the multi-resolution lattice Boltzmann solver to effectively enable simulations of high Reynolds number turbulent boundary layer flows. For the first time, it strikes an excellent balance between the precision demanded by industrial computational design and the efficiency required for various visual animations.
Talk
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Augmented Reality
Capture/Scanning
Digital Twins
Games
Modeling
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionWe propose a novel mobile scanning solution that allows end-users to reconstruct 3D hairstyles with actual per-strand curves from just a phone scan. Using a mixture of deep learning and optimization, we make hair scanning fast and accessible while delivering quality assets ready to be used in any 3D software.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a general spectral-domain simulation framework for optical heterodyne detection (OHD), extending path integral rendering to capture power spectral density of OHD. Unlike existing domain-specific tools, our approach supports diverse scenes and applications. We validate it against real-world data from FMCW lidar, blood flow velocimetry, and wind Doppler lidar.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose Neural PLS, a neural particle level-set method for tracking and evolving dynamic neural representations. Oriented particles serve as interface trackers and sampling seeders, enabling efficient evolution on a multi-resolution grid-hash structure. Our approach integrates traditional PLS and implicit neural representations, achieving superior performance in benchmarks and physical simulations.
Spatial Storytelling
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Augmented Reality
Capture/Scanning
Computer Vision
Ethics and Society
Industry Insight
Performance
Spatial Computing
Full Conference
Experience
DescriptionA case study of the storytelling and technical developments of THE TENT, an AR tabletop narrative built with volumetric video and photogrammetry that premiered at SXSW 2024, toured the world including the Immersive Pavilion at SIGGRAPH 2024, and was lauded for its use of cinematic and theatrical techniques.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a platform for creating believable, conversational digital characters that combine conversational AI, speech, animation, memory, personality, and emotions. Demonstrated through Digital Einstein, our system enables interactive, story-driven experiences and generalizes to any character, making immersive, AI-powered character experiences more accessible than ever.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionSpheres that are disjoint from a given union of spheres can be computed by solving a convex hull problem. This can be exploited for contouring discretely sampled signed distance functions.
Educator's Forum
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Education
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Experience
Tuesday
Description"A Sparrow’s Song" is a CG-animated short, following a widowed air raid warden who finds a dying sparrow in WWII. Created as a diploma project at Filmakademie Baden-Württemberg, it blends traditional workflows with modern production technologies. We explore creative problem-solving, sustainability, and innovative techniques that brought the film to life.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionInspired by a true story, a widowed air raid warden in the midst of World War II struggles to overcome grief and rediscover joy in her life—until she finds a dying sparrow she hopes to save.
Talk
Production & Animation
Livestreamed
Recorded
Rendering
Full Conference
Virtual Access
Sunday
DescriptionDisney Animation makes heavy use of Ptex, which required a texture streaming pipeline. The goal was to create a scalable system that provides a real-time experience even as the number of Ptex textures expands into the thousands. We cap the maximum size of the GPU cache and employ an LRU eviction scheme.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present an implicitly-integrated, quaternion-based constrained Rigid Body Dynamics (RBD) that guarantees satisfaction of kinematic constraints, unifying the solution strategy for complex mechanical systems with arbitrary kinematic structures, by navigating subspaces spanned by constraint forces and torques for systems with redundant constraints, over actuation, and passive degrees of freedom.
Course
Research & Education
Livestreamed
Recorded
Animation
Education
Modeling
Rendering
Full Conference
Virtual Access
Experience
Sunday
DescriptionFor a beginner, walking into a SIGGRAPH conference is an intimidating experience. There is much to see and much to do, and everyone seems to be speaking an unfamiliar language where they ooh and ahh over things that they appreciate but you don’t. This course is for them!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper proposes a scalable framework using Bayesian Neural Networks and a novel 2mD acquisition function to efficiently discover gamut boundaries in performance space. Combining NSGA-II's diversity and Bayesian Optimization's efficiency, the method enables large-batch, parallel optimization, outperforming traditional approaches in real-world engineering and fabrication tasks.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Digital Twins
Industry Insight
Modeling
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionHNTB, a leading infrastructure engineering firm, collaborates with Cesium to enhance AEC workflows by integrating 3D geospatial context into runtime engines. This partnership reduces modeling time for large-scale projects and introduces tools for efficient editing. Cesium's platform optimizes and streams 3D data, improving design, visualization, and analysis for infrastructure projects.
ACM SIGGRAPH Award Talk
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Tuesday
DescriptionEach year, ACM SIGGRAPH presents nine awards recognizing exceptional achievements in computer graphics and interactive techniques at the ACM SIGGRAPH Conference.
For a list of the awardees, visit: https://www.siggraph.org/awards/
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Scientific Visualization
Full Conference
Experience
DescriptionThe ACM SIGGRAPH Cartographic Visualization (Carto) session explores how viewpoints and techniques from the computer graphics community can be effectively applied to cartographic and spatial data sets. Speakers demonstrate their latest tools and application efforts.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe ACM SIGGRAPH History Archive is an initiative created by individuals who believe in the free exchange of information as a means of improving society. Knowledge is only possible when information is obtainable. The SIGGRAPH community gathers together each year at the annual conference to share knowledge, network with people, and learn from each other. The work included in the SIGGRAPH History online Archive is a testament to the fact that open access to information fuels innovation, creativity, and achievement. This session focuses on major improvements to the archive that have happened over the past year, including adding 12,000 new entries, programming new features, developing new pipelines, optimizing the infrastructure, adding new design features, and scanning hundreds of documents. The large team of volunteers and interns works daily to research and enter new entries, fix bugs, and add new functionality.
The physical archive is currently housed at Bowling Green State University (home of the 2nd SIGGRAPH conference) and will be expanding its footprint. Recent acquisition of the Jim Blinn collection of SIGGRAPH artifacts necessitated a redesign of the space. The SIGGRAPH Archive is also involved in a major consortium of new media art archives from around the world and helps lead an initiative to globally connect archives. This session also will solicit audience input regarding the future of the SIGGRAPH History Archive, possible enhancements, integration of new technologies, and its long-term sustainability.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe kickoff of the 2025 SIGGRAPH Educators Program will include an overview of activities and opportunities with, and sponsored by, the ACM SIGGRAPH Education Committee. Additionally, the winning entries from the SpaceTime competition will be shown along with a screening of the show reel for the 2025 double-curated Faculty Submitted Student Work (FSSW) exhibit. Designed as a way for educators to share their project ideas across schools and disciplines, FSSW is an online archive of assignment and project briefs as well as a curated video emphasizing the variety of student work and schools submitted. The annual exhibition video is a taste of the content available on the Education Committee website and this event showcases and celebrates to the greater SIGGRAPH community the best examples of student work done for projects and assignments for the 2024-2025 school year.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Full Conference
Experience
DescriptionAcroType is an interactive installation where visitors see themselves composed from numerous cut-outs on a video screen. Thousands of programmed, animated ants carry bits and pieces of the live video image and constantly recompose this video feed bit by bit. Visitors watch themselves appearing and disappearing as the laborious ants steadily assemble their portraits. Like leaf-cutter ants, these artificial animals never tire of creating and recomposing the human portraits.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionParth delivers adaptive fill-reducing ordering to accelerate Cholesky solvers in simulations with dynamic sparsity patterns, such as contact modelling, achieving up to 255× ordering speedups. With seamless, three-line integration into popular solvers like MKL and Accelerate, Parth ensures reliable, high-performance computations for applications in computer graphics and scientific computing.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present an algorithm for simulating large-scale, violently turbulent two-phase flows—such as breaking ocean waves, tsunamis, and asteroid impacts—at extreme resolutions of the coupled water-air velocity field. This is achieved by integrating a new multiphase FLIP variant with highly efficient dual particle–grid adaptivity and a novel adaptive Poisson solver.
Talk
Research & Education
Livestreamed
Recorded
Rendering
Full Conference
Virtual Access
Sunday
DescriptionWe advocate the use of tetrahedral grids constructed via the longest-edge bisection algorithm for rendering volumetric data with path tracing. Our GPU implementation outperforms regular grids by a speed-up factor of up to 30 and allows production assets to be rendered in real time.
Talk
Production & Animation
Livestreamed
Recorded
Dynamics
Geometry
Industry Insight
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionIn this paper, we discuss some of the solutions we have found to common problems of non-procedural groom workflows. We discuss these solutions and how they were used to create, animate, and simulate high-fidelity, photo-realistic character grooms for Mufasa: The Lion King.
Educator's Day Session
Production & Animation
Research & Education
Not Livestreamed
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Hardware
Modeling
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionThis technical talk presents recent innovations in hardware and post-processing workflows from Corbel3D (Pixel Light Effects), aimed at advancing mobile photogrammetry and high-fidelity 4D scanning. Key developments include portable head scanners integrating white and UV light capture, multi-mode stacked acquisition methods, and high-speed SLR burst synchronization at 120fps. Improvements in power source buffering significantly reduce on-set energy demands, enabling rapid deployment of full-body mobile scanning systems. We will also explore emerging cross-industry applications in sports performance analysis and ergonomic movement research, highlighting the broader impact of high-resolution volumetric data capture.
Course
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Games
Geometry
Industry Insight
Lighting
Performance
Real-Time
Rendering
Full Conference
Virtual Access
DescriptionModern video games employ a variety of sophisticated algorithms to produce groundbreaking 3D rendering pushing the visual boundaries and interactive experience of rich environments. This course brings state-of-the-art and production-proven rendering techniques for fast, interactive rendering of complex and engaging virtual worlds of video games.
This is the course to attend if you are in the game development industry or want to learn the latest and greatest techniques in the real-time rendering domain!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a GPU-optimized IPC framework achieving up to 10× speedup across soft, stiff, and hybrid simulations. Key innovations include a connectivity-enhanced MAS preconditioner, a parallel-friendly inexact strain limiting energy, and a hash-based two-level reduction strategy for fast Hessian assembly and efficient affine-deformable coupling.
Talk
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThis presentation will provide an in-depth exploration of RodeoFX's implementation of Houdini Solaris as a cornerstone of its VFX pipeline for House of the Dragon Season 2. Aimed at the Houdini community and CG artists, the session will highlight the challenges, solutions, and benefits of leveraging Solaris for large-scale productions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the first scene-update aerial path planning algorithm specifically designed for detecting and updating change areas in urban environments, which paves the way for efficient, scalable, and adaptive UAV-based scene updates in complex urban environments.
Art Paper
Arts & Design
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Experience
Monday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Educator's Forum
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Education
Ethics and Society
Generative AI
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis paper introduces a case study of an AI & Filmmaking course designed as a sandbox for generative AI experimentation in computer graphics education. It explores students’ collaborative creation of a silent documentary, analyzing learning outcomes, technical and ethical concerns, and AI’s role in reenacting historical events in documentary storytelling.
Technical Workshop
Artificial Intelligence/Machine Learning
DescriptionGenerative AI is transforming how we create, edit, and understand visual content, yet a gap remains between researchers building these tools and artists using them. The 7th CVEU workshop at SIGGRAPH 2025 invites researchers, artists, and industry practitioners to bridge this gap and shape the future of creative workflows with GenAI. We will explore generative models for image and video creation, interactive editing, and personalized content generation while addressing practical challenges of latency and scalability. Through keynotes, artist-researcher discussions, and an art gallery, we will highlight emerging tools that empower creators.
More details: https://cveu.github.io/
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Spatial Computing
Full Conference
Experience
DescriptionCalling all designers, engineers, and creative innovators at SIGGRAPH! Participate in a dynamic Birds of a Feather session, "AI for Smart Work Strategies." This meetup will feature a round table discussion and collaborative brainstorming for interactive workshops. Together, we'll delve into how AI and cutting-edge tech are revolutionizing design and engineering, exchanging insights and generating ideas for hands-on explorations of AI-powered workflows and groundbreaking creation.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Full Conference
Experience
DescriptionThe Encephalartos woodii is a cycad believed to be extinct in the wild. Only one specimen was discovered, and it has since been propagated in botanical gardens. However, existing specimens are clones originating from this single plant, and all are male. With the female specimen undiscovered, 'AI in the Sky' partakes in the search for the female using drone technology and artificial intelligence (AI).
Educator's Forum
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Education
Generative AI
Rendering
Full Conference
Virtual Access
Experience
Wednesday
DescriptionGenerative AI is transforming architectural design by assisting creative decision-making. The talk presents an architectural design course where students incorporated AI as a co-pilot—a means to break free from creative stagnation, explore different design personas within themselves, and push the boundaries of their architectural thinking while navigating real-world constraints.
Talk
Gaming & Interactive
New Technologies
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Dynamics
Games
Generative AI
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionAI-Powered Real-Time VFX for Mobile explores the fusion of Generative Adversarial Networks (GANs) and GPU particle systems to achieve cinematic-quality fire and water effects on mobile devices. The technology optimizes real-time rendering across a wide range of hardware, enhancing mobile gaming and social media experiences while maintaining performance efficiency.
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Image Processing
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionUse AI to intuitively create 3D objects in reality. The user arranges rough primitives, gives the AI a prompt, and takes a photo from an angle using the AI3D Easel, which iteratively helps refine the image until it is ready for the image-to-3D process. We also introduce AI3D Render.
Art Paper
Arts & Design
Gaming & Interactive
Livestreamed
Recorded
Art
Games
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAlgorithmic Miner uses VR to reveal the hidden labor behind AI systems. By immersing participants in data annotation tasks, it critically reflects on exploitation, automation, and techno-capitalism, prompting new discussions on ethical, human-centered design in interactive systems.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionAlignTex is a novel framework for generating high-quality textures from 3D meshes and multi-view artwork. It improves texture generation by ensuring both appearance detail and geometric consistency, outpacing traditional methods in quality and efficiency, making it a valuable tool for 3D asset creation in gaming and film production.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA group of pigs is raised peacefully in a monastery until, one day, one of them discovers the truth behind their existence and decides to free his friends.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPresenting AMOR, a policy conditioned on context and a linear combination of reward weights, trained using multi-objective reinforcement learning. Once trained, AMOR allows for on-the-fly adjustments of reward weights, unlocking new possibilities in physics-based and robotic character control.
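The core idea — a policy conditioned on a linear combination of reward weights, so the weights can be changed after training — can be illustrated in a few lines. The objective names and observation layout below are hypothetical stand-ins, not AMOR's actual architecture.

```python
def scalarized_reward(reward_terms, weights):
    """Linear scalarization of a multi-objective reward.

    The keys ("tracking", "energy", "smoothness") are illustrative,
    not AMOR's real reward terms.
    """
    return sum(weights[k] * reward_terms[k] for k in reward_terms)

def policy_input(state, weights):
    """Concatenate the weight vector onto the observation, so one trained
    network can be re-weighted on the fly at inference time."""
    return list(state) + [weights[k] for k in sorted(weights)]

terms = {"tracking": 0.8, "energy": -0.2, "smoothness": 0.5}
w = {"tracking": 1.0, "energy": 0.3, "smoothness": 0.5}
r = scalarized_reward(terms, w)   # ≈ 0.99
obs = policy_input([0.0] * 4, w)  # 4 state dims + 3 weight dims
```

Because the weights are part of the policy's input rather than baked into a fixed scalar reward, adjusting them at inference time steers behavior without retraining.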
Course
Gaming & Interactive
Livestreamed
Recorded
Generative AI
Hardware
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionThis course teaches the fundamentals of neural shading, wherein traditional graphics algorithms are replaced with simple neural networks. Both theory and practical implementation are covered, along with hardware acceleration and production deployment. Follow along with the instructors using interactive samples written in Python & Slang!
Course
New Technologies
Research & Education
Livestreamed
Recorded
Games
Graphics Systems Architecture
Image Processing
Math Foundations and Theory
Rendering
Scientific Visualization
Full Conference
Virtual Access
Monday
DescriptionQuantum computing is a radically new and exciting approach to programming. By exploiting the unusual behavior of quantum objects, this new technology invites us to re-imagine the computer graphics methods we know and love in revolutionary new ways. This course is math-free and requires no technical background.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Generative AI
Full Conference
Virtual Access
Thursday
DescriptionWe propose AniDepth, a novel anime in-betweening method using a video diffusion model enhanced by converting anime illustrations into depth maps. Guided by line-arts, our approach interpolates depth maps and colors to boost fidelity, temporal smoothness, and performance while seamlessly integrating into production pipelines.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionView a sneak peek of the upcoming Netflix feature animation release! In Your Dreams is a comedy adventure about Stevie and her brother Elliot, who journey into the landscape of their own dreams. If the siblings can withstand a snarky stuffed giraffe, zombie breakfast foods, and the queen of nightmares, the Sandman will grant them their ultimate dream: the perfect family.
Join us for a live in-person conversation with Director Alex Woo of Kuklu Studios, Sony Pictures Imageworks Visual Effects Supervisor Nicola Lavender, and Head of Character Animation Sacha Kapijimpanga, followed by a Q&A and poster signing!
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA screening and in-person conversation! Director Amanda Strong of Spotted Fawn Productions and Eloi Champagne, Head of Technical Direction and Production Technologies at the NFB, share how CG was used within stop motion to create this meaningful work, in which Dove, a gender-shifting warrior, uses their Indigenous medicine (Inkwo) to protect their community from an unburied swarm of terrifying creatures. See the short film, participate in the Q&A, and meet the filmmakers!
Two lifetimes from now the world hangs in the balance. Dove, a young, enigmatic, gender-shifting warrior, discovers the gifts and burdens of their Inkwo (medicine) to defend against an army of hungry, ferocious monsters. Dove’s courage, resilience and alliance with the Earth culminates in a battle against these flesh-consuming creatures, who become stronger with each body and soul they devour. Inkwo for When the Starving Return is a call to action to fight and protect against the forces of greed around us.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe derive a nonlinear elastic rod energy, starting from a general 3D volumetric isotropic material. Validated against FEM, we accurately capture rod stretching, bending and twisting, under finite deformations. We also propose how to separately control linear/nonlinear stretchability/bendability/twistability, supporting rod material design for application in computer graphics.
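For context, rod energies in this family generically take the form of an arc-length integral that penalizes the three strain modes separately. The sketch below is the standard quadratic form, not the paper's nonlinear constitutive model, and the stiffness symbols are illustrative:

```latex
E[\mathbf{r}] = \int_0^L \left( \tfrac{k_s}{2}\,\varepsilon(s)^2
  + \tfrac{k_b}{2}\,\kappa(s)^2
  + \tfrac{k_t}{2}\,\tau(s)^2 \right) \mathrm{d}s
```

Here \(\varepsilon\), \(\kappa\), and \(\tau\) are the stretching strain, curvature, and twist along the centerline, with stiffnesses \(k_s\), \(k_b\), \(k_t\). The paper derives nonlinear replacements for these quadratic terms from a 3D volumetric isotropic material, which is what permits separate control of linear and nonlinear stretchability, bendability, and twistability.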
Real-Time Live!
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Animation
Games
Generative AI
Real-Time
Full Conference
Virtual Access
Tuesday
DescriptionGetting any character moving can be done in under 5 minutes. Uthana uses the power of AI to create text-to-motion animation, real time character control, rig-agnostic auto-retargeting, and motion stitching to make this possible. Uthana works for any rig, any movement, and any level of animation experience.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the Anymate Dataset, a large-scale dataset of 230K 3D assets paired with expert-crafted rigging and skinning information---70 times larger than existing datasets. Using this dataset, we propose a learning-based auto-rigging framework with three sequential modules for joint, connectivity, and skinning weight prediction.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionAnyTop generates motion for diverse character skeletons using only skeletal structure as input. This diffusion model incorporates topology information and textual joint descriptions to learn semantic correspondences across different skeletons. It generalizes with minimal training examples and supports joint correspondence, temporal segmentation, and motion editing tasks.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces an appearance-aware adaptive sampling method using deep reinforcement learning to optimize the reconstruction of spatially-varying BRDFs from minimal images. By modeling the sampling as a sequential decision-making problem, the method identifies the next best view-lighting pair, outperforming heuristic sampling strategies for heterogeneous materials.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel volumetric representation for the aggregated appearance of complex scenes and a pipeline for level-of-detail generation and rendering. Our representation preserves accurate far-field appearance and spatial correlation from scene geometry. Our method faithfully reproduces appearance and achieves higher quality than existing scene filtering methods.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionVenturing through a distant desert, we wander the terrain of our intimate desires and fears, seeking answers. Yet, life never fails to deliver a surprise. When the opportunity arises, do you free yourself?
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionArenite is a novel, physics-based simulation method for generating realistic sandstone structures. It combines fabric interlocking, multi-factor erosion, and particle-based deposition. Our GPU-based implementation produces detailed 3D shapes such as arches, alcoves, hoodoos, and buttes in minutes and provides real-time control.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn a frostbitten frontier world, a legendary mech pilot learns his latest mission might hold the key to the demons that have haunted him for decades.
Art Gallery
Art Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA welcome to the Art Gallery and Art Papers contributors from the Chairs, announcement of the Best in Show awards for those programs, acknowledgement of the Distinguished Artist Award winner, and a celebration of the wider Art community, both at the SIGGRAPH Conferences and year-round.
Talk
Gaming & Interactive
Production & Animation
Not Livestreamed
Not Recorded
Animation
Dynamics
Games
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionApplying noise fields to create procedural wind on curves has been an attractive method for its speed, variability, and timeframe independence, but the motion looks artificial. We present techniques to art-direct as well as enhance the realism of procedural curve wind with the addition of collisions, shielding, gusts, and recovery.
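The baseline the talk improves on — displacing curve points with a time-varying noise field — can be sketched as follows. The noise function and falloff here are illustrative stand-ins, not the presenters' production tools.

```python
import math

def value_noise1d(x):
    """Cheap deterministic pseudo-noise in [-1, 1] (illustrative only)."""
    return math.sin(12.9898 * x) * 0.5 + math.sin(4.1414 * x + 1.3) * 0.5

def wind_offset(points, t, amplitude=0.1, frequency=2.0):
    """Displace 2D curve points with a noise field sampled in (arc-param, time).

    Timeframe-independent: the offset depends only on (u, t), so any frame
    can be evaluated in isolation — the speed and variability benefits the
    talk mentions, before adding collisions, shielding, gusts, and recovery.
    """
    n = len(points)
    out = []
    for i, (x, y) in enumerate(points):
        u = i / max(n - 1, 1)
        # Root of the curve (u == 0) stays pinned; the tip sways the most.
        d = amplitude * u * value_noise1d(frequency * u + t)
        out.append((x + d, y))
    return out

curve = [(0.0, float(i)) for i in range(5)]
swayed = wind_offset(curve, t=1.5)
```

Because each point is displaced independently from a closed-form field, nothing prevents the curve from intersecting scene geometry — which is exactly the artificiality that collisions and shielding address.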
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Augmented Reality
Education
Full Conference
Experience
DescriptionSponsored by the ACM SIGGRAPH Digital Arts Community, the SIGGRAPH 2025 Art Gallery, and Art Papers, this event brings together artists, designers, and creatives for an evening of connection and inspiration. Attendees will also have the opportunity to experience nearby AR-enhanced murals created by local mural artist Priscilla Yu.
Event Location: Bentall Center, 110 – 1055 DUNSMUIR STREET, VANCOUVER, BC V7X 1L3
Time & Date: Monday, Aug. 11th, 5:30 pm - 9:30 pm
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionAssetDropper is a novel framework for extracting standardized assets from reference images, addressing challenges such as occlusion and distortion. Leveraging both synthetic and real-world datasets, along with a reward-driven feedback mechanism, it achieves state-of-the-art performance in asset extraction and provides designers with a versatile open-world asset palette.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper introduces a novel asymptotic directional stiffness (ADS) metric to analyze the contribution of middle surface geometry on the stiffness of shell lattice metamaterials, focusing on Triply Periodic Minimal Surfaces (TPMS). It provides a theoretical framework and optimization techniques, advancing the understanding of TPMS shell lattices.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Virtual Reality
Full Conference
Experience
DescriptionA deep time meditation on the changing elemental composition of the atmosphere of the planet Earth. Atmos Sphaerae was commissioned by Christiane Paul for the DiMoDA 4.0 virtual reality exhibit "Dis/Location," which premiered at Gazelli Art House and was shown at the ZKM Karlsruhe and the Onassis Foundation's ONX Studio NY. The "flat" video version of Atmos Sphaerae has been presented by Gazelli Art House at ART SG in Singapore and developed for multi-screen immersive spaces at ONX Studio NY.
Immersive Pavilion
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Generative AI
Spatial Computing
Full Conference
Experience
DescriptionOur system revolutionizes assembly verification by combining CAD-trained detection, AR guidance, and vision-language models. The system eliminates extensive training data needs while providing natural language feedback. This enables workers of all skill levels to perform complex assemblies accurately, addressing workforce challenges through rapid skill development and reduced reliance on experts.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe extend the Vertex Block Descent method for fast and unconditionally stable physics-based simulation using an Augmented Lagrangian formulation to enable simulating hard constraints with infinite stiffness and systems with high stiffness ratios. This allows simulating complex contact scenarios involving rigid bodies with stacking and friction, and articulated joint constraints.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present AutoKeyframe, a novel framework that simultaneously accepts dense and sparse control signals for motion generation by generating keyframes directly. Our method reduces manual efforts for keyframing while maintaining precise controllability, using an autoregressive diffusion model and a new skeleton-based gradient guidance method for flexible spatial constraints.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces an automated scheduling framework to optimize cloth and deformable simulations across heterogeneous computing devices. Using an enhanced HEFT algorithm and asynchronous iteration methods, our approach minimizes communication delays and maximizes parallelism. Our experiments demonstrate superior frame rates over single-device solutions in real-time and resource-constrained environments.
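For readers unfamiliar with HEFT, the upward-rank heuristic at its core can be sketched as below — a textbook version, not the paper's enhanced variant; the tiny DAG and costs are made up for illustration.

```python
def upward_rank(tasks, avg_cost, avg_comm, succ):
    """Classic HEFT upward rank:
        rank(t) = cost(t) + max over successors s of (comm(t, s) + rank(s)).
    Tasks with higher rank lie on longer critical paths and are scheduled first.
    """
    rank = {}
    def r(t):
        if t not in rank:
            rank[t] = avg_cost[t] + max(
                (avg_comm.get((t, s), 0) + r(s) for s in succ.get(t, [])),
                default=0,
            )
        return rank[t]
    for t in tasks:
        r(t)
    return rank

# Tiny DAG: a -> b, a -> c, b -> d, c -> d
cost = {"a": 3, "b": 2, "c": 4, "d": 1}
comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 1}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
ranks = upward_rank(list(cost), cost, comm, succ)
order = sorted(cost, key=lambda t: -ranks[t])  # a, c, b, d
```

In full HEFT, each task in this priority order is then placed on whichever processor gives it the earliest finish time, which is where heterogeneous CPU/GPU costs enter.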
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Real-Time Live!
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Games
Full Conference
Virtual Access
Tuesday
DescriptionDeveloped in partnership with NVIDIA and Inworld AI, Streamlabs’ intelligent streaming assistant is an AI-powered co-host, producer and technical assistant for digital creators. Ashray Urs, Head of Streamlabs, will walk audiences through the streaming assistant’s evolving capabilities, with a focus on customizability and enhanced audience and streamer interactions.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis scientific visualization depicts a bacterial molecular landscape and explores the speed of diffusion and biochemical reactions that power life at the molecular scale.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionCelebrating the enduring legacy of Mac Miller, Hornet produced a 24-minute animated film in collaboration with the posthumous official release of the album “Balloonerism".
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionBANG introduces Generative Exploded Dynamics, a novel method that dynamically decomposes 3D objects into meaningful, volumetric parts through smooth, controllable exploded views. Bridging intuitive human understanding and generative AI, it enables precise part-level manipulation, semantic comprehension, and versatile applications in 3D creation, visualization, and printing workflows.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionText-to-image diffusion models struggle with multi-subject generation due to subject leakage. Prior methods impose external layouts that conflict with the model’s prior, harming alignment and natural composition. We introduce a method that leverages the layout encoded in the initial noise, promoting alignment and natural compositions while preserving the model’s diversity.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Art
Generative AI
Geometry
Full Conference
Virtual Access
Experience
Monday
Description"Becoming Space" is an installation that explores the agency of AI, discourse, and material intersections through AI-generated forms and 3D printing. Inspired by Ovid's Metamorphoses, it examines human-animal transformations using CLIP-guided diffusion models and stereolithography. The installation reveals limitations in AI's interpretation of physical form---dominated as it is by discourse---while demonstrating "intra-action" between language, algorithms, machines, and materials. The work opens a discussion of authorship and material agency through the entanglement of matter.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe derive vertex position and irradiance bounds for each triangle tuple, introducing a bounding property of rational functions on the Bernstein basis, to significantly reduce the search domain when systematically simulating specular light transport.
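The bounding property being exploited — a function in Bernstein form lies within the convex hull of its coefficients, so coefficient ranges can prune the search domain — is easy to verify numerically. This sketch uses a plain polynomial rather than the paper's rational/irradiance setting, with made-up coefficients.

```python
from math import comb

def bernstein_eval(coeffs, u):
    """Evaluate a degree-n polynomial given in Bernstein form at u in [0, 1]."""
    n = len(coeffs) - 1
    return sum(c * comb(n, i) * u**i * (1 - u)**(n - i)
               for i, c in enumerate(coeffs))

# Convex-hull property: min(coeffs) <= p(u) <= max(coeffs) on [0, 1].
# A search can therefore discard any subdomain whose coefficient bounds
# already violate the constraint being tested.
coeffs = [0.0, 2.0, -1.0, 3.0]
samples = [bernstein_eval(coeffs, k / 100) for k in range(101)]
assert min(coeffs) <= min(samples) and max(samples) <= max(coeffs)
```

Subdividing the domain (e.g. by de Casteljau's algorithm) tightens these coefficient bounds, which is what makes branch-and-bound style pruning converge.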
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis session will feature a selection of notable research papers from the Eurographics 2025 conference, offering attendees insights into current trends and developments in Computer Graphics and related fields in the European region and beyond. We will also provide an update on Eurographics' activities and opportunities.
Immersive Pavilion
Gaming & Interactive
New Technologies
Art
Games
Virtual Reality
Full Conference
Experience
DescriptionBetween Feathers & Footprints is a VR experience where players seamlessly shift between human and bird forms, exploring the world through physics-driven flight and adaptive perception. Through mini-games, players navigate challenges using both grounded interaction and aerial movement, showcasing how different forms shape problem-solving, exploration, and immersive multi-perspective gameplay.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Animation
Art
Capture/Scanning
Digital Twins
Education
Games
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Wednesday
DescriptionWe explore 3D Gaussian Splatting for cultural heritage visualization, integrating game engines to create immersive experiences. Using a historical Hakka mansion in Hong Kong as a case study, we examine 3DGS’s limitations and potential, demonstrating how emerging workflows can enhance digital heritage storytelling through interactive, cinematic, and real-time 3D representations.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Image Processing
Pipeline Tools and Work
Full Conference
Virtual Access
Tuesday
DescriptionThis session breaks down the visual and technical work behind two full-CG environments in The Sandman Season 2. From stylized depth and scale in the Underworld to intricate lens-matching at the Edge of the Dream World, we share how custom tools shaped the show’s distinctive visual signature.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Digital Twins
Education
Games
Geometry
Image Processing
Lighting
Modeling
Pipeline Tools and Work
Scientific Visualization
Virtual Reality
Full Conference
Experience
DescriptionBlender Foundation will present an overview of the past year's results from the Blender open source project and its plans for the coming year. Everyone is welcome to give feedback and share experiences.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Display
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionIn this talk, Netflix Animation Studios explores the challenges of implementing HDR technology in animation production workflows through the case study of the internal short film "Sole Mates", highlighting approaches to overcome software and hardware constraints in artist workflows.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Art
Audio
Dynamics
Fabrication
Haptics
Lighting
Modeling
Scientific Visualization
Full Conference
Experience
DescriptionThis study presents the Blooming Resonant Tea system, integrating cymatics (vibrations that create liquid surface patterns), music, and projections to enhance both flavor and ingredient immersion. Users customize their tea experience by selecting herbal teas, cymatics patterns, and flavor-associated music, creating a unique, immersive, multi-sensory ritual for future tea drinking.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Art
Generative AI
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionIn this demo, we present Timeless Blossoms, a VR-AI system that reimagines Traditional Chinese Flower Arrangement, allowing users to create 3D floral compositions in a culturally enriched virtual space while generative AI converts their designs into real-time Xieyi paintings, merging heritage artistry with technology to spark cross-cultural and temporal dialogue.
Talk
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Digital Twins
Geometry
Full Conference
Virtual Access
Thursday
DescriptionIn the realm of high-end visual effects, achieving lifelike character deformations is both an art and a technical challenge. BodyOpt is the latest evolution in WetaFX’s character deformation pipeline, integrating advanced simulation techniques with artist-friendly workflows to enable efficient processing of complex deformations across numerous shots and characters.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a novel algorithm for efficient and accurate Boolean operations on B-Rep models by mapping them bijectively to controllable-error triangle meshes. Using conservative intersection detection on the mesh to locate all surface intersection curves and carefully handling degeneration and topology errors ensure that the results are watertight and correct.
Spatial Storytelling
Gaming & Interactive
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Games
Pipeline Tools and Work
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionILM developed Marvel's “What If...? – An Immersive Story” for Apple Vision Pro, overcoming many technical and design challenges. For telling stories in both mixed and virtual reality scenes, the team combined Unreal Engine with Apple's RealityKit and built many custom technical solutions for materials, skinning, particles, and gesture tracking.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present BrepDiff, a simple, single-stage diffusion model for generating Boundary Representations (B-reps). Our approach generates B-reps by denoising point-based face samples with a dedicated noise schedule. Unlike multi-stage methods, BrepDiff enables intuitive, editable geometry creation, including completion, merging, and interpolation, while achieving competitive performance on unconditional generation.
Immersive Pavilion
Bridging Physical and Virtual Realms in Mixed Reality: The Co-Presence Experience in LIAN: Re:Vision
10:30am - 5:00pm PDT Monday, 11 August 2025 West Building, Exhibit Hall B
Arts & Design
Gaming & Interactive
New Technologies
Art
Augmented Reality
Games
Full Conference
Experience
DescriptionLIAN: Re:Vision is a Mixed Reality installation that explores love and distance through a co-located, two-player experience. Players navigate narrative-driven challenges, with their avatars' proximity crucial for survival, reflecting on relationship dynamics. Innovative use of pass-through and occlusion features enhances co-presence, while external displays bridge participants and spectators.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Fabrication
Full Conference
Experience
DescriptionThis event will provide an opportunity for those working on fabrication to meet others and share their experiences. We encourage them to bring physical objects (fabrication results) to make it a fun event. We plan to invite people working on fabrication widely using our own connections, but also welcome anybody interested in this topic.
This will be the 4th "Bring Your Bunny (or Something)" BoF event with previous events taking place at SIGGRAPH in 2023 and 2024, and SIGGRAPH Asia in 2024.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Virtual Access
Wednesday
DescriptionTo convey a vivacious, protopian space station in Pixar's "Elio", a small team of environment artists amplified the traditional pipeline with procedural techniques, used in unique ways to develop a wide variety of alien terrain biomes and architectures pulsing with their own internal energy.
Production Session
Arts & Design
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Education
Image Processing
Industry Insight
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Full Conference
Virtual Access
Sunday
DescriptionBringing NASA Data to Life: The Power of Visualizing Science
This production session presents how NASA Earth science reaches global audiences through compelling data-driven visualizations. As scientific data grows increasingly complex and voluminous, the challenge lies in transforming data into meaningful, accessible knowledge. NASA team members bridge this gap by working closely with scientists and mission teams to create innovative visualizations that make intricate Earth phenomena universally understandable. These visualizations transform complex datasets into captivating narratives that advance both science and public understanding.
This production session showcases the making of public facing NASA exhibits and new works in development including data dashboards that visualize near real-time extreme events as they happen, such as wildfires and disasters to name a few. In addition, the session reveals the processes of creating visualizations of atmospheric phenomena using state-of-the-art models that in turn are used as high-quality training data to fuel AI innovation.
A multidisciplinary team of artists, engineers, and data visualization experts demonstrates their process for creating large-scale data-driven media that engages diverse audiences. Building on their expertise working with Earth science data, the team reveals both technical challenges and creative breakthroughs encountered when transforming complex scientific datasets into compelling visual narratives—from initial concept development through final production. Attendees will discover cutting-edge approaches to artistic direction, scientific models, computational techniques, and robust pipeline development. The presentation explores powerful storytelling strategies, technical implementation methods, and emerging research opportunities that advance multi-dimensional storytelling that conveys scientific insights and inspires wonder.
https://svs.gsfc.nasa.gov/
https://earth.gov/
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThe Wild Robot's environment is rich in variety, dense in layout, painterly in look, and a character in itself, alive with motion. This talk describes the techniques, tools, and pipeline used to breathe life into this wild environment, along with the challenges faced while dealing with the environment’s complexity.
Talk
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Games
Geometry
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores how OpenUSD enables a scalable auto-rigging pipeline for user-generated content (UGC) in an avatar ecosystem. We discuss how OpenUSD supports an efficient procedural workflow, standardizes diverse asset representations through schemas, and optimizes pipeline execution and iteration via a structured task graph.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose BuildingBlock, a hybrid approach integrating generative models, PCG, and LLMs for diverse and structured 3D building generation. A Transformer-based diffusion model generates layouts, which LLMs refine into hierarchical designs. PCG then constructs high-quality buildings, achieving state-of-the-art results and enabling scalable architectural workflows.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionC-tubes are 3D tubular structures made of developable strips. We introduce an algorithm to construct C-tubes while guaranteeing exact surface developability and an optimization method for design exploration. Applications span architecture, engineering, and product design. We present prototypes showcasing cost-effective fabrication of complex geometries using different materials.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose a fast, single-threaded continuous collision detection (CCD) algorithm for convex shapes under affine motion. By combining conservative advancement with a cone-casting approach, it avoids primitive-level overhead and enables efficient integration into intersection-free simulation methods such as ABD.
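As background for readers unfamiliar with conservative advancement, here is a minimal one-dimensional sketch of the idea (a toy illustration, not the paper's algorithm): time is advanced by the current separation divided by a bound on the relative speed, so the first contact can never be overshot.

```python
def conservative_advancement(dist, speed_bound, t_end=1.0, tol=1e-6, max_iters=64):
    """Earliest time of contact in [0, t_end] by conservative advancement.

    Each step advances time by dist/speed_bound; since the shapes can close
    at most speed_bound per unit time, this step cannot skip the first contact.
    """
    t = 0.0
    for _ in range(max_iters):
        d = dist(t)                 # closest distance between the shapes at time t
        if d <= tol:
            return t                # contact (within tolerance)
        t = t + d / speed_bound     # safe, non-overshooting step
        if t >= t_end:
            return None             # no contact within the interval
    return t

# Toy scene: a point at x = 2t approaching a wall at x = 1 (relative speed 2),
# so the true time of contact is t = 0.5.
toc = conservative_advancement(lambda t: max(0.0, 1.0 - 2.0 * t), 2.0)
```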
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a framework for learning on in-the-wild meshes containing non-manifold elements, multiple components, and interior structures. Our approach uses cages and generalized barycentric coordinates to parametrize and learn volumetric functions, demonstrated by segmentation and skinning weights, achieving state-of-the-art results on wild meshes.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionCandice, a young girl, is bullied by the other children. One day she finds a dead cat that asks her to fix it. She then sets out to find every dead animal she can in order to create new friends.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Industry Insight
Lighting
Full Conference
Virtual Access
Tuesday
DescriptionDiscover how Image Engine simulated geometric crystals erupting from the floor in Avatar: The Last Airbender to capture the hero. Using custom Houdini tools, collision-aware systems, and physically inspired lighting, the team was able to integrate the plate and characters into beautiful final renders to relay the director's vision.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Games
Generative AI
Physical AI
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionCase Study: In year two of AI education and infrastructure at SCAD the institution has set in place a robust and accelerated series of new initiatives centred around open-source development and providing the necessary resources for students to develop and launch AI-driven products, tools and IP.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce CAST, an innovative method for reconstructing high-quality 3D scenes from a single RGB image. Supporting open-vocabulary reconstruction, CAST excels in managing occlusions, aligning objects accurately, and ensuring physical consistency with the input, unlocking new possibilities in virtual content creation and robotics.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Art
Games
Virtual Reality
Full Conference
Experience
DescriptionCathex is an immersive artistic VR experience that unfolds like a virtual poem, guiding players on a philosophical journey of emotional transformation, catharsis, and self-discovery. By engaging in emotion-regulatory movements, players uncover a long-forgotten memory of the universe and their own self.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Generative AI
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionKick off your SIGGRAPH week in style! Join the ACM SIGGRAPH Chapters Committee from 9pm-2am Monday, August 11th at Mansion Nightclub for a night of fun with the world’s greatest pixel wranglers. Your conference badge is your entry ticket, all registration levels welcome. The club is a 14 minute walk from the Vancouver Convention Center at 1161 W Georgia St, Vancouver, BC V6E 0C6. Ask for more info at the ACM SIGGRAPH Village!
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Rendering
Robotics
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionCome to this casual gathering to learn about how to join or start an ACM SIGGRAPH professional or student chapter and to meet and socialize with current chapters.
More information about Chapters is available at https://siggraph.org/chapters
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Generative AI
Graphics Systems Architecture
Haptics
Hardware
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Full Conference
Experience
DescriptionHosted by the professional chapter of Bogotá, Colombia, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community. This event is especially for conference attendees who have traveled to Vancouver from other continents!
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionHosted by chapters on the East Coast of North America, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Generative AI
Geometry
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionHosted by chapters on the West Coast of North America, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Fabrication
Robotics
Full Conference
Experience
DescriptionChoreoFin is a kinetic dress inspired by aquatic creatures and mythological beings such as mermaids and amphibian humanoids. Its fins, made from nano-metal-coated fabric, shimmer with multicolored textures and are animated by fifteen shape-memory alloy actuators. Each actuator bends in three directions using BioMetal fibers, enabling smooth, expressive motion. The fins respond to nearby viewers—freezing or trembling if approached suddenly, or gently swaying as if breathing beneath the surface. ChoreoFin reflects both biological function and ornamental beauty, subtly merging elements of living forms and wearable design.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionIn Disney’s Moana 2, crafting the intricate hair and cloth motion to support and enhance the complex character performances required a new strategic approach. This involved performance categorization, continuity through visual planning, and iterative refinement, enabling Technical Animation to achieve the highly art-directed shots with consistency, efficiency, and effectiveness.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionA 3D-aware and controllable text-to-video generation method allows users to manipulate objects and camera jointly in 3D space for high-quality cinematic video creation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce an adaptive octree-based GPU simulator for large-scale fluid simulation. Our hybrid particle-grid flow map advection scheme effectively preserves vortex details, enabling high-resolution and high-quality results. The source code has been made publicly available at: https://wang-mengdi.github.io/proj/25-cirrus/.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a compact, C2-continuous kernel for MPM that reduces numerical diffusion and improves efficiency—without sacrificing stability. Built on a dual-grid framework and compatible with APIC and MLS, our method enables high-fidelity, large-scale simulations, further pushing the limits of MPM.
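For context, the kernel this work improves upon in many MPM codes is the standard C2-continuous cubic B-spline; a minimal sketch of that baseline kernel (textbook background, not the paper's new compact kernel):

```python
def cubic_bspline(x):
    """The standard C2 cubic B-spline kernel commonly used in MPM.

    Support radius is 2 grid cells; the shifted copies on the integer
    grid form a partition of unity.
    """
    x = abs(x)
    if x < 1.0:
        return 0.5 * x**3 - x**2 + 2.0 / 3.0
    if x < 2.0:
        return (2.0 - x)**3 / 6.0
    return 0.0

# Partition of unity: particle-to-grid weights sum to 1 for any position.
xp = 0.37
w = sum(cubic_bspline(xp - i) for i in range(-2, 4))
```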
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a novel scannable 2D code where the payload is stored in the topology of nested color regions, abandoning traditional matrix-based approaches (e.g., QRCodes). Claycodes can be largely deformed, styled, and animated. We present a mapping between bits and topologies, shape-constrained rendering, and a robust real-time decoding pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a Clebsch PFM fluid solver that accurately transports wave functions using particle flow maps. Key innovations include a new gauge transformation, improved velocity reconstruction on coarse grids, and better fine-scale structure preservation. Benchmarks show superior performance over impulse- or vortex-based methods, especially for small-scale flow features.
Technical Paper
Closed-form Generalized Winding Numbers of Rational Parametric Curves for Robust Containment Queries
10:55am - 11:05am PDT Tuesday, 12 August 2025 West Building, Rooms 211-214
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe derive closed-form expressions for GWNs of rational parametric curves for robust containment queries.
Our closed-form expression enables efficient computation of GWN, even if the query points are located on the rational curve. We also derive the derivatives of GWN for other applications.
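For intuition, the generalized winding number of a closed polyline (the discrete analogue of the rational curves treated here) is the total signed angle subtended at the query point, divided by 2π. A minimal sketch of that discrete version (background only, not the paper's closed form):

```python
import math

def winding_number(poly, qx, qy):
    """Winding number of a closed polyline around query point (qx, qy),
    computed as the sum of signed turn angles over all edges."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i][0] - qx, poly[i][1] - qy
        x1, y1 = poly[(i + 1) % n][0] - qx, poly[(i + 1) % n][1] - qy
        # atan2(cross, dot) gives the signed angle between the two edge endpoints
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return total / (2.0 * math.pi)

# Unit square, counterclockwise: winding is ~1 inside and ~0 outside.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```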
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCLR-Wire is a unified generative framework for 3D curve-based wireframes, jointly modeling geometry and topology in a continuous latent space. Using attention-driven VAEs and flow matching, it enables high-quality, diverse generation from noise, images, or point clouds—advancing CAD design, shape reconstruction, and 3D content creation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCMD revolutionizes 3D generation by enabling flexible local editing of 3D models from a single rendering, as well as progressive, interactive creation of complex 3D scenes. At its core, CMD leverages a conditional multiview diffusion model to seamlessly modify/add new components—enhancing control, quality, and efficiency in 3D content creation.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionAn office worker unravels as he struggles to deal with bullying, psychological violence and harassment at work. At work, psychological health can sometimes be so fragile that it "hangs by a thread." The campaign aims to spark open conversations and encourage proactive solutions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionCobra is a novel efficient long-context fine-grained ID preservation framework for line art colorization, achieving high precision, efficiency, and flexible usability for comic colorization. By effectively integrating extensive contextual references, it transforms black-and-white line art into vibrant illustrations.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Geometry
Modeling
Rendering
Full Conference
Virtual Access
Thursday
DescriptionIn Pixar's Elio, the Universal Users Manual is a sentient alien book “character” created as a unique collaboration between characters and FX, consisting of a stack of pages animated along rigged shaped paths and then processed in Houdini to create the look of individual pages with particle and volume FX.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a collaborative metalens array comprising over 100 million nanopillars for broadband imaging. The proposed array camera is only a few millimeters thick and employs a non-generative reconstruction method, which performs favorably and without hallucinations, irrespective of the scene illumination spectrum.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a practical method for dental layer biomimicry and multi-spot shade matching using multi-material 3D printing. It integrates seamlessly into workflows combining dental CAD tools and industrial multi-material slicers.
We validated it by printing multiple dentures and teeth with varying inner structures and translucencies to match VITA classical shades.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose ColorSurge, a lightweight dual-branch network for end-to-end video colorization. It delivers vivid, accurate, and real-time results from grayscale input, and is easily extensible for high-quality performance at low computational cost.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe reveal that existing online reconstruction of dynamic scenes with 3D Gaussian Splatting produces temporally inconsistent results, led by inevitable noise in real-world recordings. To address this, we decompose the rendered images into the ideal signal and the errors during optimization, achieving temporally consistent results across various baselines.
Course
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Fabrication
Full Conference
Virtual Access
Wednesday
DescriptionThis course will introduce attendees to foundations in computational craft. Computational craft integrates computational fabrication (the use of computer programming to develop models and machine instructions for digital fabrication) with established craft materials and techniques to fabricate functional and decorative craft artifacts.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGothic microarchitecture—a prevalent feature of late medieval art—comprises sculptural works that replicate monumental Gothic forms, though its original construction techniques remain historically undocumented. Leveraging insights from 15th-century Basel goldsmith drawings, we present an interactive framework for reconstructing these intricate designs from 2D projections into accurate 3D forms.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Games
Modeling
Rendering
Full Conference
Experience
DescriptionFounded in 2019, the Computer Graphics & Animation Research Projects initiative develops research projects in computer graphics and animation, then recruits undergraduate students to staff its research project teams. If you are a student who would like to learn more about our projects, or a faculty member looking for opportunities for your students, join us for a lively discussion of what it means to engage in research with other undergraduate students worldwide. For more info, visit our website (cgarp.net).
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a novel Monte Carlo approach to solve boundary integral equations with Dirichlet boundary conditions in two dimensions. While Walk-on-Spheres uses largest empty circles, which touch the boundary in only one point, we utilize semicircles and circle sectors that share one or two boundary edges resulting in shorter walks.
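For context, the classic Walk-on-Spheres baseline that this work shortens can be sketched in a few lines (standard textbook background, not the semicircle/circle-sector variant proposed here):

```python
import math
import random

def walk_on_spheres(x, y, dist_to_boundary, g, eps=1e-4, max_steps=1000):
    """One Walk-on-Spheres sample of the harmonic function u(x, y)
    with Dirichlet boundary data g, in 2D."""
    for _ in range(max_steps):
        r = dist_to_boundary(x, y)       # radius of the largest empty circle
        if r < eps:                      # close enough: read off the boundary value
            return g(x, y)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)         # jump uniformly to a point on the circle
        y += r * math.sin(theta)
    return g(x, y)                       # fallback after max_steps

# Unit disk with boundary data g(x, y) = x: the harmonic extension is u = x,
# so averaging walks started at (0.3, 0) should approach 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x
random.seed(0)
estimate = sum(walk_on_spheres(0.3, 0.0, dist, g) for _ in range(2000)) / 2000
```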
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a physics-based method for simulating intricate freezing dynamics on thin films. Our novel Phase Map method integrated with MELP particles reproduces Marangoni freezing dynamics and the "Snow-Globe Effect". The framework captures soap bubble freezing dynamics while ensuring stability in complex scenarios and enabling precise pattern control.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a tracking-based video frame interpolation method, optionally guided by user inputs. It utilizes sparse point tracks, first estimated using existing point tracking methods and then optionally refined by the user. Without any user input, it already achieves state-of-the-art results, with further significant improvements possible through user interactions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCora is a novel diffusion-based image editing method that achieves complex edits, such as object insertion, background changes, and non-rigid transformations, in only four diffusion steps. By leveraging pixel-wise semantic correspondences between source and target, it preserves key elements of the original image’s structure and appearance while introducing new content.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionOne dancer, one body, one phone. In a time of collective alienation and technological mass control, one woman rediscovers her soul and reclaims her mind.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMultiple importance sampling (MIS) is vital to most rendering algorithms. MIS computes a weighted sum of samples from different techniques to handle diverse scene types and lighting effects.
We propose a practical weight correction scheme that yields better equal-time performance on bidirectional algorithms and resampled importance sampling for direct illumination.
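As background, the standard MIS combination the abstract refers to weights each sample by the balance heuristic. A minimal sketch of that uncorrected baseline (not the paper's weight-correction scheme):

```python
import math
import random

def balance_heuristic(pdf_this, pdf_other):
    """Balance-heuristic MIS weight for a sample drawn from one technique
    when a second technique could also have produced it."""
    return pdf_this / (pdf_this + pdf_other)

def mis_estimate(sample_a, sample_b, pdf_a, pdf_b, f):
    """One sample from each of two techniques, combined with MIS weights."""
    xa, xb = sample_a(), sample_b()
    wa = balance_heuristic(pdf_a(xa), pdf_b(xa))
    wb = balance_heuristic(pdf_b(xb), pdf_a(xb))
    return wa * f(xa) / pdf_a(xa) + wb * f(xb) / pdf_b(xb)

# Toy check: integrate f(x) = 2x over [0, 1] (true value 1) with a uniform
# sampler and a linear sampler (pdf 2x, drawn as sqrt of a uniform).
random.seed(1)
est = sum(mis_estimate(random.random,
                       lambda: math.sqrt(random.random()),
                       lambda x: 1.0,
                       lambda x: 2.0 * x,
                       lambda x: 2.0 * x)
          for _ in range(5000)) / 5000
```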
Birds of a Feather
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Dynamics
Ethics and Society
Full Conference
Experience
DescriptionThis BOF is for attendees interested in discussing issues of racial bias embedded in computer graphics research. It is a follow-on from similarly titled BOFs at SIGGRAPH 2021, 2022, and 2024.
We will celebrate progress over the last four years, discuss setbacks, and brainstorm paths forward. This will be a friendly, collaborative space for mutual, authentic engagement across difference. Attendees can be at any stage, including learning about, taking steps toward, or enacting change in computer graphics research.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWhen Anna, an Olympic athlete, finds herself behind in the race, she redoubles her efforts, aiming to win the competition and never disappoint again. But as she pushes herself beyond her limits, she burns out...
Art Paper
Arts & Design
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Experience
Monday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Modeling
Full Conference
Virtual Access
Tuesday
DescriptionThis talk examines the challenges and innovations in designing and rigging non-humanoid alien characters with unconventional anatomies for Pixar’s Elio. The team explored creatures with limbless, multi-segmented, and fluid bodies, requiring novel animation and rigging solutions.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Dynamics
Geometry
Lighting
Modeling
Performance
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionWe explore Wētā FX's tools and techniques for bringing characters to life through the creation of Malgosha, touching on costumes, shaders, simulation, the motion capture process, and animators’ techniques for creating realistic body and cloth performance.
Real-Time Live!
Gaming & Interactive
Research & Education
Livestreamed
Recorded
Animation
Dynamics
Games
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionWe present the fastest physics solver for games and interactive applications. Based on the new Augmented Vertex Block Descent method, it can simulate complex interactions of millions of objects in real time. It is numerically stable and computationally efficient, and it can properly handle frictional contacts, stacking, and articulated chains.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionThe Wild Robot used a painterly style that utilized transparency, soft edges, and smearing of assets. Traditional flat data channels were not able to capture non-binary transparency, making compositing difficult. We present Crypto-Deep data, an extension to Cryptomatte that stores layered geometric data needed to address compositing artifacts with transparency.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Audio
Augmented Reality
Games
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionStudio Syro will cover the unique pipeline that they use to produce immersive animated films, games and experiences directly in VR using the VR painting software Quill. The session will cover their unique artist-first pipeline which blends traditional painting and animation techniques with VR giving each piece a handmade feel.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a novel local-domain fluid-solid interaction simulator grounded in a lattice Boltzmann solver. By leveraging an MPC-based domain-tracking approach and an improved convective boundary condition, it offers enhanced stability and efficiency for deriving control policies of virtual agents, holding great promise for applications in both computer animation and robotics.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
DescriptionCollisions are a key problem in generating complex crowds animation. The mudskippers in Walt Disney Animation Studios' "Moana 2" presented a particularly challenging scenario as they pack tightly together to form a towering pile. Our solution introduces an additional simulation step to deform the skinned character meshes to resolve contact.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Full Conference
Experience
DescriptionCurious about how to get involved in SIGGRAPH’s creative community? This session introduces the ACM SIGGRAPH Digital Arts Community (DAC), a group dedicated to connecting artists, technologists, and researchers working in digital and computational media arts. You’ll learn how DAC creates opportunities for creative exchange and collaboration across disciplines through programs that are open, inclusive, and designed to spark new ideas.
Come hear about DAC’s signature initiatives like SPARKS Lightning Talks, online exhibitions, and the student digital art competition, plus updates from international partners like ISEA and Expanded Animation. Whether you're a student, first-time attendee, or longtime SIGGRAPH participant, this session is your chance to explore how you can engage, share your work, and join a global network of creative thinkers.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Generative AI
Real-Time
Robotics
Full Conference
Experience
DescriptionCryoScapes began during Jiabao Li’s Arctic Circle Artist Residency in Svalbard, inspired by water’s diverse forms—vapor, snow, waves, glaciers, and sea ice. The team developed a 3D ice printing system to create intricate sculptures that evolve with temperature changes. Roaming water droplets freeze on hydrophobic or hydrophilic treated surfaces, forming landscapes that blur the sense of scale, from lunar terrains to microscopic crystalline patterns. A macro camera captures the evolving formations in real time, while AI searches for nature-like patterns and composes them into Haiku-inspired poems. As CryoScapes travels to different cities, it co-creates with local conditions—temperature, humidity, and water mineral content—blurring the line between the artificial and the natural.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Full Conference
Experience
DescriptionManfred Mohr is a recipient of the ACM SIGGRAPH Distinguished Artist Award for Lifetime Achievement in Digital Art.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCueTip is an interactive and explainable automated coaching assistant for a variant of pool/billiards. CueTip has a natural-language interface, the ability to perform contextual, physics-aware reasoning, and its explanations are rooted in a set of predetermined guidelines developed by domain experts. CueTip matches SOTA performance, with grounded and reliable explanations.
Art Paper
Arts & Design
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a method for the automatic placement of knit singularities based on curl quantization. Our method generates knit graphs that maintain all structural manufacturing constraints as well as any additional user constraints. This approach allows for simulation-free previews of rendered knits and also extends to the popular cut-and-sew setting.
Educator's Forum
Production & Animation
Research & Education
Livestreamed
Recorded
Education
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis session presents academic experiences from multiple institutions and courses that share approaches to presentation, implementation and evaluation of virtual production concepts and techniques, including design considerations and technological implementations, that provide students with the opportunity to learn and experience virtual production within a studio classroom environment.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Games
Full Conference
Experience
DescriptionThis session offers a preview of SIGGRAPH 2025’s creative programming. Chairs and Directors from various conference programs will share highlights of their featured projects, upcoming events, and curatorial visions. Attendees will get an early look at how this year’s conference explores new directions in digital art, encourages cross-disciplinary collaboration, and invites creative experimentation across media and technology.
The session also introduces the ACM SIGGRAPH Digital Arts Community (DAC), which supports global dialogue among artists, designers, and technologists. Through programs like SPARKS Lightning Talks, digital exhibitions, and student competitions, DAC fosters connections between art, computer graphics, and interactive media. Learn more at dac.siggraph.org.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionIn this work, we propose DAM-VSR, an appearance and motion disentanglement framework for video super-resolution. Appearance enhancement is achieved through reference image super-resolution, while motion control is achieved through video ControlNet. Additionally, we propose a motion-aligned bidirectional sampling strategy to support the generation of long videos.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces DAMO, a Deep solver for Arbitrary Marker configuration in Optical motion capture. DAMO directly infers the relationship between each raw marker point and 3D model joint, without using predefined marker labels and configuration information.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionTo prove himself to his gangster father, Alessandro decides to rob a bar. What he doesn't expect is to meet another side of his father: Lady Victoria, the drag queen.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a method for discovering novel microscale TPMS structures with high-energy dissipation. By combining a parametric design space, empirical testing, and uncertainty-aware deep ensembles with Bayesian optimization, we efficiently explore and discover structures with extreme energy dissipation capabilities.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA baby sea turtle needs to reach the ocean. To achieve that, she has to overcome lots of obstacles and predators on her way.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose DC-VSR, a novel video super-resolution approach based on a video diffusion prior. DC-VSR leverages Spatial and Temporal Attention Propagation (SAP and TAP) to ensure spatio-temporally consistent results and Detail-Suppression Self-Attention Guidance (DSSAG) to enhance high-frequency details. DC-VSR restores videos with realistic textures while maintaining spatial and temporal coherence.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe proposed neural network, DeepMill, can efficiently predict inaccessible and occlusion regions in subtractive manufacturing. By utilizing a cutter-aware dual-head octree-based convolutional architecture, it overcomes the computational inefficiency of traditional geometric methods and is capable of real-time prediction of inaccessible and occlusion regions during the 3D shape design phase.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDeFillet, the reverse of CAD filleting, is vital for CAE and redesign but challenging with polygon CAD models. Our algorithm uses Voronoi vertices as rolling-ball center candidates to efficiently identify fillets. Sharp features are then reconstructed via quadratic optimization, validated on diverse models.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDeformable Beta Splatting (DBS) is a novel approach for real-time radiance field rendering that leverages deformable Beta Kernels with adaptive frequency control for both geometry and color encoding. DBS captures complex geometries and lighting with state-of-the-art fidelity, while using 45% fewer parameters and rendering 1.5x faster than 3DGS-MCMC.
Emerging Technologies
New Technologies
Research & Education
Full Conference
Experience
DescriptionWe introduce "Multi-Layered Inflatables," a novel class of asymmetric-shaped inflatable structures that are easy to fabricate. These structures consist of multiple pairs of interconnected planar sheets, leveraging multi-layered inflation to create complex curved surfaces while maintaining structural integrity. In this hands-on session, attendees can design and fabricate various types of Multi-Layered Inflatables.
Course
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Lighting
Rendering
Full Conference
DescriptionThis course explores the role of randomness in generative AI, drawing from statistical physics, stochastic differential equations, and computer graphics. It provides a deep understanding of how noise affects generative modeling and introduces advanced techniques and applications in AI.
Real-Time Live!
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Computer Vision
Digital Twins
Display
Education
Ethics and Society
Geometry
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionDescendant pioneers a heritage conservation workflow through cross-platform real-time tool integration. This hybrid project uses a contemplative VR experience to sample and simulate Teochew embroidery and folk rituals, exploring how digitization reshapes cultural memory practices. By innovating in cultural memory digitization, it interrogates digital environments' impact on ritual preservation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionOur method proposes a novel computational design framework for designing anisotropic tensor fields. It enables flexible control over scalings without requiring users to specify orientations explicitly. We apply these anisotropic tensor fields to various applications, such as anisotropic meshing, structural mechanics, and fabrication.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Augmented Reality
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionThe Oscar tablet-based interfaces provide intuitive access to Industrial Light & Magic's StageCraft technology platform, offering filmmakers extensive creative control in a real-time LED virtual production volume. Using an empathetic iterative design approach, the resulting user interfaces led to a successful realization of the filmmaker's vision.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a pin-pression gripper featuring parallel-jaw fingers with 2D arrays of independently extendable pins, allowing instant shape adaptation to target object geometry and dynamic in-hand re-orientation for enhanced grasp stability. Reinforcement learning with curriculum-based training enables flexible, robust grasping and grasp-while-lift mode, validated by sim-to-real experiments with superior performance.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionDesignManager is an AI-powered design support system that functions as an interactive copilot throughout the creative workflow. With node-based visualization of design evolution and conversational interaction modes, it helps designers track, modify, and branch their processes while providing context-aware assistance through an innovative agent framework.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionDetlev has no problem. Why should he have a problem? He's obviously doing great.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Industry Insight
Lighting
Rendering
Full Conference
Virtual Access
Monday
DescriptionThe Wild Robot follows the journey of Roz, a high-tech robot stranded on a remote island. In the film, we accentuate this juxtaposition of nature and technology stylistically–a futuristic machined precision amongst a deconstructed, painterly world. The style of the film serves the story–Roz does not initially belong on the island, stylistically. To that goal, we reimagined every workflow to retain the immediacy and fluidity of an artist's hand.
This production session features the development and technical challenges required to accomplish the film's unique look. Ultimately, our goal throughout every department was to incorporate the endearing qualities of traditional painting and 2D animation, while maintaining the richness and sophistication of a 3D space. These hand-crafted elements give the audience an immersive, imaginative experience and support the emotional intent of the story.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionAt software companies, engineers typically make up 40-60% of staff; at film and animation production companies, the figure is closer to 5-10%. Yet production companies still need to maintain a wide range of DCC plugins and operating systems, and multiple versions of in-house tools across multiple projects. The ratio of build-matrix size to number of engineers is far larger at production companies than at software companies, because their business model is about pictures, not software. To face such a ratio, production companies need to maximize their use of DevOps technologies. In this BOF we'd like to share production-proven knowledge, tips, and tricks to improve the delivery and robustness of technologies for production.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe battle against Hatred has only just begun.
Journey into the new region of Nahantu in search of Neyrelle, who is both suffering the fate of her choice to imprison the Prime Evil Mephisto, and seeking a means to destroy him.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionIntroducing differentiable path tracing for geometric acoustics with an efficient gradient algorithm based on path replay backpropagation. The system computes derivatives of output spectrograms with respect to arbitrary scene parameters (materials, geometry, emitters, microphones) within the framework of acoustic ray tracing, with applications demonstrated in various geometric scenarios.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMeet Diffuse-CLoC—a powerful unification of intuitive steering in kinematic motion generation and physics-based character control. By guiding diffusion over joint state-action spaces, it enables agile, steerable, and physically realistic motions across diverse downstream tasks—from obstacle avoidance to task-space control and motion in-betweening—all from a single model, with no fine-tuning required.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionDiffusing Winding Gradients (DWG) efficiently reconstructs watertight 3D surfaces from unoriented point clouds. Unlike conventional methods, DWG avoids solving linear systems or optimizing objective functions, enabling simple implementation and parallel execution. Our CUDA implementation on an NVIDIA RTX 4090 GPU runs 30–120x faster than iPSR on large-scale models (10–20 million points).
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionDiffusion as Shader (DaS) is a unified approach for controlled video generation that uses 3D tracking videos to enable versatile editing, including animating mesh-to-video, camera control, motion transfer, and object manipulation, while improving temporal consistency.
Course
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Generative AI
Image Processing
Full Conference
DescriptionThis tutorial focuses on diffusion models, cutting-edge tools for image and video generation. Designed for graphics researchers and practitioners, it offers insights into the theory, practical applications, and real-world use cases. Participants will learn how to effectively leverage diffusion models for creative projects in computer graphics.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionPowder-snow avalanches are natural phenomena that result from an instability in the snow cover on a mountain relief. This paper introduces a physically-based framework to simulate powder-snow avalanches under complex terrains, allowing us to animate the turbulent snow cloud dynamics within the avalanche in a visually realistic way.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Computer Vision
Ethics and Society
Generative AI
Haptics
Hardware
Image Processing
Performance
Physical AI
Robotics
Virtual Reality
Full Conference
Experience
DescriptionThis session will present the highlights from four SPARKS (Short Presentations for the Kindred Spirit) sessions. "Sensing the Body to Expand Possibilities in Art and Performance," moderated by Elizabeth Jochum, Alan Macy, and Bonnie Mitchell, explored how artists push creative boundaries by utilizing computational tools to augment the body in artistically expressive ways. "AI and Artistic Autonomy," moderated by Mauro Martino, Rebecca Ruige Xu, and Gustavo Alfonso Rincon, investigated how dependency on models and algorithms developed by others influences creative practice. "Artistic Interpretation of Digital Cultural Heritage," moderated by Fan Xiang and Victoria Szabo, explored how the choices visual artists make in sourcing, composing, and styling historical imagery affect our understanding of the past. Lastly, the "First Nations' Futures" SPARKS session, moderated by Rewa Wright and Clarissa Ribeiro, discussed how digital art is used as a gateway to share stories about culture, community, and identity.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Art
Computer Vision
Dynamics
Education
Image Processing
Modeling
Simulation
Full Conference
Virtual Access
Experience
Monday
DescriptionThis work offers an innovative approach to digitally replicating crazing patterns, the aesthetic networks of fine cracks found on ceramics. By using a quadtree structure, the method captures the time-dependent and user-interaction aspects of these patterns, providing a novel perspective in digital material design. This contribution is important for the fields of digital arts, computer graphics, and interactive techniques, as it enriches the representation of cultural aesthetics and advances digital texture generation methods.
Art Paper
Arts & Design
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Full Conference
Virtual Access
Experience
Tuesday
DescriptionBoth a critique and celebration of digital representation, this project offers multiple perspectives beyond technological homogenization. Through exploring digital f(r)ictions and multiplicities, we reject singular viewpoints in favor of interconnected truths. Our work with AI and Colombian art raises questions about bias, agency, and authenticity in cultural production, prompting reflection on AI's influence on collective imaginaries.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Augmented Reality
Digital Twins
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionAn R&D initiative within Aardman Animations exploring VP technologies for stop-motion filmmaking. Taking a holistic view of VP from story development through to delivery, and utilising real-time tools, digital twins, and a cross-platform XR sandbox, the project strives to evolve traditional processes, enhancing creativity, efficiency, and integration across the production pipeline.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Geometry
Modeling
Simulation
Full Conference
Virtual Access
Sunday
DescriptionThis work describes a new approach for directing cloth draping that accommodates 3D shaping and 2D pattern making simultaneously. We showcase our results with a series of garment assets and cloth animations from Pixar feature films Inside Out 2 (2024) and Elio (2025).
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Discipline Together With the Self in Kendo: Exploring "Qi" Through Mixed Reality and Autoethnography
3:45pm - 4:05pm PDT Monday, 11 August 2025 West Building, Rooms 109-110
Arts & Design
Gaming & Interactive
Livestreamed
Recorded
Art
Augmented Reality
Digital Twins
Games
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionThis work explores “qi” in kendo through mixed reality and autoethnography, blending tradition and technology. By animating digital humans with “qi”, it frames martial arts as art. The project invites reflection on selfhood and offers fresh insights at the intersection of culture, embodiment, and digital experience.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionAlthough discrete connections are ubiquitous in vector field design, their torsion remains unstudied. We extend the existing toolbox to control the torsion of discrete connections: we introduce a new discrete Levi-Civita connection and define torsion as a measure of deviation from this reference, so torsion becomes a simple linear constraint.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThe paper proposes a construction algorithm based on a divide-and-conquer strategy to map a disk-topology triangular mesh onto any convex polygon, which supports arbitrary numerical precision and exact arithmetic. Under exact arithmetic, it strictly guarantees a bijection for any mesh and convex polygon.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Livestreamed
Recorded
Art
Audio
Real-Time
Full Conference
Virtual Access
Tuesday
DescriptionDJESTHESIA uses tangible interaction to craft real-time audiovisual multimedia, blending sound, visuals, and gestures into a unified live performance. Combining a DJ setup with motion capture and projected visuals, DJESTHESIA composes music and visuals from the DJ’s mixing and gestures, transforming the DJ into both a performer and a performance.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe formalize the path-tracing of volumes composed of anisotropic kernel mixture models. Our work enables computing physically-based light transport on complex volumetric assets efficiently, on tiny memory budgets. We further introduce Epanechnikov kernels as an efficient alternative in kernel-based rendering, and showcase our method in different forward and inverse volume rendering applications including radiance fields.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionTo meet the ambitiously illustrative design requirements for the FX in The Wild Robot and The Bad Guys 2, we developed a collection of tools, called Doodle, to let artists nimbly blend drawing techniques with simulation to efficiently craft stylized shape and motion in a 3D FX pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDYG is a 3D drag-based scene editing method for Gaussian Splatting that enables precise, multi-view consistent geometric edits using 3D masks and control points. It combines implicit triplane representation and a drag-based diffusion model for high-quality, fine-grained results. Visit our project page at https://drag-your-gaussian.github.io/.
Technical Workshop
Arts & Design
New Technologies
Research & Education
Art
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Image Processing
Robotics
DescriptionDrawing is a fundamental human activity, used to think, communicate, and create. This workshop aims to deepen our understanding of drawing from both human and computational perspectives.
We will discuss questions such as: How do people draw pictures? What can psychology teach us about drawing behavior? How can the sketching process be modeled computationally? And how can sketching enhance designer control over AI tools?
We hope to inspire new connections and ideas by bringing these topics together in one place, and to inspire new interest in the computer graphics community in these topics.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Generative AI
Performance
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionThis presentation demonstrates my custom system integrating real-time AI generation with laser mapping. Using StreamDiffusion via TouchDesigner, images are created and refined through an interactive feedback loop, then transformed into laser paths. The system maps both projections and laser traces onto surfaces, with Ableton Live enabling synchronized performance with music.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionTo address a lack of generalization to novel classes, we propose DreamMask, which systematically explores data generation in the open-vocabulary setting, and how to train the model with both real and synthetic data. It significantly simplifies the collection of large-scale training data, serving as a plug-and-play enhancement for existing methods.
Emerging Technologies
Arts & Design
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Fabrication
Generative AI
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Experience
DescriptionDreamPrinting is a volumetric 3D printing pipeline that transforms generative, radiance-based models into delicate real-world art pieces. By precisely assigning pigments at each voxel, it reveals breathtaking details—like translucent fur and glowing leaves—enabling users to convert imaginative digital scenes into tangible realities with unprecedented color fidelity.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Dress-1-to-3 to reconstruct physics-plausible, simulation-ready separated garments from an in-the-wild image. Starting with the image, our approach combines a pre-trained image-to-sewing pattern generation model with a pre-trained multi-view diffusion model. The sewing pattern is refined using a differentiable garment simulator based on the generated multi-view images.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a neural global illumination method capable of capturing multi-frequency reflections in dynamic scenes by leveraging object-centric feature grids and a novel dual-band fusion module. Our approach produces high-quality, realistic rendering effects and outperforms state-of-the-art techniques in both visual quality and computational efficiency.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionDualMS is a novel framework for designing high-performance heat exchangers by directly optimizing the separation surface of two fluids using dual skeleton optimization and neural implicit functions. It offers greater topological flexibility than TPMS and achieves superior thermal performance with lower pressure drop while maintaining comparable heat exchange rates.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a framework for generating music-driven synchronized two-person dance animations with close interactions. Our system represents the two-person motion sequence as a cohesive entity, performs hierarchical encoding of the motion sequence into discrete tokens, and utilizes dual generative masked transformers to generate realistic and coordinated dance motions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionPersonalizing text-to-video models is challenging because dynamic concepts require capturing both appearance and motion. We propose Set-and-Sequence, a framework that personalizes DiT-based video models by first learning an identity LoRA basis from unordered frames, then fine-tuning coefficients with motion residuals on full videos, enabling superior editability and compositionality for applications.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionIntroducing the first GPU-based system for dynamic triangle mesh processing, delivering order-of-magnitude speedups over CPU solutions across diverse applications. Our system uses a patch-based data structure, speculative conflict handling, and a novel programming model, enabling robust, high-performance, and fully dynamic mesh operations directly on the GPU.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a revolutionary method for experiencing live sports in stunning 3D, redefining the way games are seen, through immersive, interactive replays. Alongside it, we release a large-scale synthetic dataset built to benchmark realism, motion, and human interaction in dynamic scenes, to fuel the next wave of research in 3D streaming.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper presents Epsilon Difference Gradient Evolution (EDGE), a novel method for accurate flow-map computation on grids without velocity buffers. EDGE enables large-scale, efficient and high-fidelity fluid simulations that capture and preserve complex vorticity structures while significantly reducing memory usage.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe automate video nonlinear editing using a multi-agent system. An Editor agent uses tools to create sequences from clips and instructions, while a Critic agent provides feedback in natural language. Our learning-based approach enhances agent communication. Evaluations with an LLM-as-a-judge metric and user studies show our system’s superior performance.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionAfter four days of education-focused presentations, attendees are invited to an invigorating, open discussion on current topics in computer graphics education and interactive techniques. Come to the town hall session to ask questions, share perspectives, and have a good time with the SIGGRAPH education community!
Technical Workshop
New Technologies
Capture/Scanning
Computer Vision
Generative AI
Modeling
Virtual Reality
DescriptionThis workshop aims to democratize 3D content creation, including both static and dynamic content, by exploring recent advances in 3D reconstruction from real-world video and image inputs, as well as in generative AI technologies. We bring together researchers and practitioners to discuss scalable pipelines that lower the barrier to immersive content creation. A key feature of this workshop is a hands-on demonstration: participants will experience 1) volumetric content generation and a real-time streaming experience, and 2) VR content generated from fast 3D methods on VR headsets. The workshop seeks to bridge the gap between cutting-edge reconstruction research and VR applications.
Course
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Dynamics
Geometry
Image Processing
Modeling
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionLike a semester-long graduate seminar on Eigenanalysis, Singular Value Decompositions, and Principal Component Analysis in Computer Graphics and Interactive Techniques, this course looks at matrix diagonalization and analysis through the lens of 13 technical papers selected by the lecturers.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionOur framework enables realistic and interesting elastic body locomotion by determining optimal muscle activations to achieve desired movements. It combines an interior-point method for contact modeling with a novel mixed second-order differentiation algorithm that merges analytic and numerical approaches, allowing Newton's method optimization to create diverse soft body animations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionElevate3D transforms low-quality 3D models into high-quality assets through iterative texture and geometry refinement. At its core, HFS-SDEdit refines textures generatively while preserving the input’s identity by leveraging high-frequency guidance. The resulting texture then guides geometry refinement, allowing Elevate3D to deliver high-quality results with well-aligned texture and geometry.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionGenerating string instrument performances with intricate movements and complex interactions poses significant challenges. To address these, we present ELGAR—the first diffusion-based framework for whole-body instrument performance motion generation solely from audio. We further contribute innovative losses, metrics, and a dataset, marking a novel attempt with promising results for this emerging task.
Talk
Gaming & Interactive
Research & Education
Livestreamed
Recorded
Games
Industry Insight
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionLearn about the ongoing sustainability journey that Call of Duty® graphics developers have embarked on, including the data used to guide each decision. Several techniques will be surveyed, along with their results, to help inspire developers of any real-time graphics application to reduce their carbon footprint with minimal effort.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionMarker-based optical motion capture (MoCap) is critical for virtual production and movement sciences. We propose a novel framework for MoCap auto-labeling and matching using uniquely coded clusters of reflective markers (AEMCs). Compared to commercial software, our method achieves higher labeling accuracy for heterogeneous targets and unknown marker layouts.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionA boy navigates a fleeting childhood friendship and discovers his own queerness across three pivotal summers in 1990s southern China.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionDesigning freeform surfaces to reflect or refract light to achieve target light distributions is a challenging inverse problem. We propose an end-to-end optimization strategy using a novel differentiable rendering model driven by image errors, combined with face-based optimal transport initialization and geometric constraints, to achieve high-quality final physical results.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionAfter the enthusiastic response to our BOF last year, we're back for more!
Entrepreneurs are some of the most inspiring and supportive people to meet and connect with. If you're an established or upcoming business founder and want to meet, connect, and share start-up stories, struggles, and successes with like-minded entrepreneurs, don't miss this opportunity!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionA real-time deformation method for Escher tiles --- interlocking organic forms that seamlessly tessellate the plane following symmetry rules. Rather than treating tiles as mere boundaries, we consider them as textured shapes, ensuring that both the boundary and interior deform simultaneously. The deformation is achieved via a closed-form solution.
Frontiers
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Games
Rendering
Full Conference
Experience
DescriptionEsports is a unique challenge for rendering research, with players regularly turning off even basic rendering techniques to reduce latency. In this workshop, three esports developers and three competing esports athletes will form an expert panel on esports rendering needs. The workshop will have three parts: a traditional panel session, with questions from a moderator and from the panel itself; an audience discussion session, with groups led by organizers and panel members producing questions and raising issues; and a closing panel session, with the panel addressing the questions raised by the audience.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionIn this work, we introduce Expressive Virtual Avatars (EVA), an actor-specific, fully controllable and expressive human avatar framework that achieves high-fidelity, lifelike renderings in real-time, while enabling independent control of facial expressions, body movements, and hand gestures.
Immersive Pavilion
Arts & Design
New Technologies
Virtual Reality
Full Conference
Experience
DescriptionThis study proposes a design that enables users to experience the movements of others outside their view in a social VR environment through multi-sensory feedback. Users perceive the distance and movement of invisible users through the VR screen content and chair, using scent, vibration and sound to enhance social presence.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Industry Insight
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionCome hear about the creation of Nickelodeon’s hit new animated action/adventure series Max & the Midknights! The show’s Supervising Producer and CG Supervisor, along with production partner Xentrix’s Head of Pipeline and Creative Director, will share a behind-the-scenes look at the challenges they faced creating this ambitious cinematic CG show with its hand-made and stop-motion look and feel.
You will learn how Max’s design-forward pipeline, built upon a custom set of tools developed in Unreal, enabled a new agile story process that blends the best qualities of live-action and animated filmmaking. Unlike the “traditional” storyboard-based TV approach, Max’s visualization artists shoot on final models in 3D then deliver dailies to editors who leverage their live-action expertise to assemble a full episode for review. Once noted and updated, the visualization pass goes to the storyboard artists, who--thanks to the true 3D nature of the visualization work--can focus exclusively on character performance with the confidence that all of their compositions will be fully-reproducible in 3D. This hybridized process obviates the need for a downstream blocking/layout pass because the cameras and animations used to make the animatic are exported from Unreal and delivered along with the next-level animatic to Max’s Bangalore-based production partner Xentrix Studios.
In addition to Max’s unique story pipeline, you will also hear about the tools and techniques developed collaboratively between Nickelodeon and Xentrix to permit character animation in Maya with final rendering/delivery in Unreal, including a seamless alembic- and FBX-based animation transfer approach and automatic shot creation in Unreal from Maya data. The team will also highlight the creative power of real-time iteration for improving cameras, shading, and lighting later in the process than permitted in the typical pipeline, and how they were able to get broadcast-ready final pixels straight out of Unreal.
Talk
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Fabrication
Geometry
Modeling
Full Conference
Virtual Access
Tuesday
DescriptionOur groundbreaking groom pipeline for LAIKA's "Wildwood" revolutionizes stop-motion puppet fabrication through CG-assisted silicone casting. By leveraging the VFX team's 3D models and digital grooms, we were able to scale to the needs of this epic feature, while providing anisotropic characteristics and enabling seamless integration between practical and digital elements.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce FaceExpressions-70k, a large-scale dataset comprising 70,500 crowdsourced comparisons of facial expressions collected from over 1,000 participants. It supports the training of perceptual models for expression differences and helps guide decisions on acceptable latency and sampling rates for facial expressions when driving a face avatar.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionGiven a single co-located smartphone video captured in a dim room as the input, our method can reconstruct high-quality facial assets within the distribution modeled by a diffusion prior trained on Light Stage scans, which can be exported to common graphics engines like Blender for photo-realistic rendering.
Technical Paper
Research & Education
Recorded
Not Livestreamed
Full Conference
Virtual Access
Monday
DescriptionOur framework can efficiently synthesize facial microstructure from an unconstrained facial image via differentiable optimization. We propose neural wrinkle simulation for differentiable microstructure parameterization, and direction distribution similarity to align features with blurry image patches. Our framework is also compatible with existing facial reconstruction methods for detail enhancement.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a novel method for normal estimation of unoriented point clouds and VR ribbon sketches that leverages a modeling of the Faraday cage effect. Our method is uniquely robust to the presence of interior structures and artifacts, producing superior surfacing output when combined with Poisson Surface Reconstruction.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionLarry, a man in his thirties, makes one final attempt to save his father from alcoholism, even at the risk of his own downfall.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionFashionComposer is a flexible model for compositional fashion image generation, with a universal framework that handles diverse input modalities such as text, human models, and garment images. It personalizes appearance, pose, and human figure, using subject-binding attention to integrate reference features, enabling applications like virtual try-ons and human album generation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper presents a GPU-friendly framework for real-time implicit simulation of hyperelastic materials with frictional contacts. Utilizing a novel splitting strategy and an efficient solver, the approach achieves robust, high-performance simulation across materials of varying stiffness, handling large deformations and precise friction interactions with remarkable efficiency, accuracy, and generality.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionDetecting surface self-intersections is crucial for CAD modeling to prevent issues in simulation and manufacturing. This paper presents an algebraic signature-based algorithm for quickly determining self-intersections of NURBS surfaces. This signature is then recursively cross-used to compute the self-intersection locus, guaranteeing robustness in critical cases including tangency and small loops.
Talk
Production & Animation
Livestreamed
Recorded
Dynamics
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionPresentation of a production-proven, fast fluid up-resing method that allows high-quality fluid FX to be generated from low-resolution simulations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe median filter is a staple of computational image processing. Existing efficient methods share a common flaw, which is that they use a square kernel, producing visual artifacts. Our method overcomes this limitation, enabling fast and high-quality circular-kernel median filtering, across multiple platforms and image types.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a physics-based modeling system for knots and ties using pipe-like parametric templates, defined by Bézier curves and adaptive radii for flexible, intersection-free shapes. Our method maps cloth regions from UV space into 3D knot forms via a penetration-free initialization and supports quasistatic simulation with efficient collision handling.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a novel reduced-order fluid simulation technique leveraging Dynamic Mode Decomposition (DMD) to enable fast, memory-efficient, and user-controllable subspace simulation. Combining spatial ROM compression with spectral physical insights, our method excels in animation, real-time interaction, artistic control, and time-reversible fluid effects.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe extend Penner-coordinate-based methods for seamless parametrizations to surfaces with sharp features to which the parametrization needs to be aligned. We describe a two-phase method to efficiently minimize feature constraint residual errors. We demonstrate that the resulting algorithm works robustly on the Thingi10k dataset, completing the quad mesh generation pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a unified mesh repair framework using a manifold wrap surface to fix diverse imperfections while preserving sharp features. By optimizing projected samples and leveraging adaptive weighting, our method ensures watertightness, manifoldness, and high geometric fidelity, outperforming existing approaches in both topology correction and feature preservation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis study explores how light color influences the perceived emotion of virtual characters. By analyzing various lighting conditions, including red and blue hues, we reveal how light affects emotion intensity, recognition, and genuineness. Findings show that lighting, realism, and shadows are key factors in enhancing emotional impact.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionOur approach proposes a novel partition method for reliable feature-aligned quadrangulation. The core insight is that singularity-distant smooth streamlines are more suitable as patch boundaries. Our implementation confines patch boundaries to regions of high field smoothness.
Validated on large-scale datasets, our method generates high-quality quad meshes while preserving reliability.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Virtual Access
Wednesday
DescriptionOur talk includes technical information for setting up a similar pipeline, as well as Artist stories covering the necessary extensions discovered during production to achieve the artistic vision of the Directors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose FlexiAct, an image animation framework that transfers actions from a reference video to any target image, enabling variations in layout, viewpoint, and skeletal structure while maintaining identity consistency.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis work constructs Green coordinates for cages composed of Bézier patches, which enables flexible deformations with curved boundaries. The high-order structure also allows us to create a compact curved cage for the input models. Additionally, this work proposes a global projection technique for precise linear reproduction.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn this short based on Skydance Animation’s feature Spellbound, Flink sets out with a bit of magic to help rescue the messenger pigeons that have turned to stone.
Art Paper
Arts & Design
New Technologies
Production & Animation
Livestreamed
Recorded
Art
Modeling
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionTransforming 2D Chinese calligraphy into 3D forms deepens how traditional art is understood and experienced, combining cultural heritage with modern technology. This approach adds spatial depth, opening new possibilities for digital art, preservation, and interactive design.
By merging computational modeling with artistic expression, the work explores how technology can reinterpret historical artforms, encouraging cross-disciplinary dialogue and inspiring new ways to preserve and evolve intangible cultural heritage.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionFlexible Level of Detail (FLoD) integrates the concept of LoD into 3DGS using a multi-level representation built with 3D Gaussian scale constraints and a level-by-level training strategy. FLoD enables flexible rendering through single-level or selective rendering for optimal image quality under varying GPU VRAM constraints.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a unified compressible flow map framework based on Lagrangian path integrals, enabling conservative density-energy transport and flexible pressure treatments. Validated on diverse systems—from shocks to shallow water—it captures complex flow features like vortices and wave interactions, broadening flow-map applicability across compressibility regimes and fluid morphologies.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the Vortex Particle Flow Map (VPFM) method, which revitalizes the traditional Vortex-In-Cell approach for computer graphics. By evolving vorticity and higher-order quantities along particle flow maps, our method achieves significantly improved long-term stability and vorticity preservation, enabling high-fidelity simulation of complex vortical fluid motions in 2D and 3D.
Emerging Technologies
New Technologies
Research & Education
Robotics
Full Conference
Experience
DescriptionThis study presents "FluidicSwarm," a swarm robot control system that imitates fluid behavior, extending the user's body. Users manipulate fluid properties with hand movements, adjusting the swarm's shape and flexibility for easy avoidance and transport tasks. This improves the efficiency of swarm robot operations in various environments, including teleoperation.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Full Conference
Experience
DescriptionFollow the Ship (2025) is a generative video work emerging from Matthew Attard’s project I WILL FOLLOW THE SHIP. It explores the convergence of heritage and digital processes as a form of contemporary digital drawing. The work integrates eye-tracking datasets from historical maritime graffiti found in Malta with algorithmic generative systems, questioning how digital media can reframe cultural memory and visual language. As one of several outcomes from the project, Follow the Ship reflects on themes of heritage, metaphor, the maritime environment, and the digital present, offering a layered meditation on perception, history, and technological reinterpretation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionForceGrip is a reference-free reinforcement learning-based agent for realistic VR hand manipulation. It uses a progressive curriculum (finger positioning, intention adaptation, dynamic stabilization) and physics simulation to convert VR controller inputs into faithful grip forces. In user studies, participants achieved greater realism and more precise control than with competing methods, ensuring immersive interaction.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionAn orphaned bear cub finds a home with a fatherly evergreen tree, until his hunger for trash leads him to danger.
Birds of a Feather
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Art
Pipeline Tools and Work
Full Conference
Experience
Descriptionframe:work is the home of LIVE PIXEL PEOPLE. We bring together the artists, technologists and producers who deliver creative video content for live audience or generate pixels live for camera & screen. Our community works across film, live entertainment, art installations and web, creating visual experiences that are a creative collaboration across technology and design. Join us for a discussion of projects, tools, best practices and market challenges. We'll be talking about our mission of client education, next generation mentorship and community knowledge sharing.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Art
Geometry
Modeling
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores the integration of traditional Jacquard weaving techniques into our existing workflow at Netflix Animation Studios. We discuss the merits of fibre-level construction and the lessons learned as we laboured to place the power of a Jacquard loom into the hands of digital artists.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Animation
Art
Computer Vision
Education
Games
Real-Time
Spatial Computing
Full Conference
Virtual Access
Tuesday
DescriptionImmersion may be ancient, but creating collective, interactive experiences remains a challenge even today. Opaque interfaces, cognitive overload, and co-presence can hinder engagement. Through case studies, this talk explores practical frameworks for leveraging real-time engines, novel HCI, and large-format displays to craft resonant, shared experiences in public spaces.
Emerging Technologies
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Hardware
Image Processing
Spatial Computing
Full Conference
Experience
DescriptionWe present a compact, handheld holographic video camera that captures full-color holograms in real time under natural lighting, making laser-free holography possible. By integrating advanced optical components and AI-driven super-resolution, it enables high-quality holographic content capture, paving the way for portable, next-generation immersive media and real-world applications of holography.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Art
Augmented Reality
Spatial Computing
Full Conference
Experience
DescriptionFungiSync is a cyberdelic mixed reality participatory ritual that reprotocolizes bodily contact—for example, the handshake—through masquerade-style, mushroom-decorated mixed reality masks, enabling participants to merge or exchange their distinct, audio-reactive augmented reality overlays and, in doing so, dissolve "you" and "me" perspectives inspired by fungal non-dualism interdependence wisdom.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present GAIA (Generative Animatable Interactive Avatars) for high-fidelity 3D head avatar generation. GAIA learns dynamic details with expression-conditioned Gaussians, while being animatable consistently with an underlying morphable model. With a novel two-branch architecture, GAIA disentangles identity and expression. GAIA achieves state-of-the-art realism and supports interactive rendering and animation.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Education
Fabrication
Games
Haptics
Real-Time
Full Conference
Virtual Access
Experience
Wednesday
DescriptionThis talk explores how gamified learning and adaptive game design empower individuals with disabilities. Using case studies from Limbitless Solutions’ interdisciplinary training games, it highlights how computer graphics, interactive techniques, and accessibility-driven design can transform education, fostering engagement, empathy, and innovation in CG classrooms and beyond.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionGarment sewing patterns often rely on vector formats, which struggle with discontinuities and unseen topologies. GarmentImage instead encodes geometry, topology, and placement into multi-channel grids, enabling smooth transitions and better generalization. Using simple CNNs, it works well in pattern exploration, prompt-to-pattern generation, and image-to-pattern prediction.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a Gaussian fitting compression method for light field probes, reducing storage and memory demands in large scenes. Using low-bit adaptive Gaussians and GPU-accelerated decompression, our technique replaces traditional PCA-based approaches, achieving 1:50 compression ratios. Real-time cascaded light field textures eliminate redundant baking, preserving visual quality and rendering speed.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a grid-free fluid simulator featuring a novel Gaussian spatial representation (GSR) for the velocity field. The advantages of GSR over traditional Lagrangian/Eulerian data structures are fourfold: memory compactness, spatial adaptivity, vorticity preservation, and continuous differentiability. Our method also greatly outperforms similar competitors in terms of quality and performance.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe develop novel and efficient computer-generated holography algorithms, dubbed Gaussian Wave Splatting, that transform Gaussian-based scene representations into holograms. We derive a closed-form 2D Gaussian-to-hologram transform supporting occlusions and alpha blending, along with an efficient, easily parallelizable Fourier-domain approximation of this process, implemented with custom CUDA kernels.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionGaVS: transform unstable, shaky videos into smooth, professional-quality footage. We design a novel 3D rendering technology that preserves motion intent while eliminating shake: no cropping, no distortion, and robust under dynamic scenes and intense motion. User studies confirm that GaVS delivers superior, natural-looking results. Capture life steadily!
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Generative AI
Image Processing
Real-Time
Full Conference
Virtual Access
Wednesday
DescriptionAs part of our GenAI innovation program, Moment Factory collaborated with Third Rail Projects (TRP), a New York-based troupe renowned for its immersive and participatory creations, to co-create a unique exploration blending human creativity with the power of generative artificial intelligence tools.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present GenAnalysis, an implicit shape generation framework that enables joint shape matching and consistent segmentation by enforcing as-affine-as-possible (AAAP) deformations through a regularization loss in latent space. It supports shape analysis by extracting and analyzing shape variations in the tangent space. Experiments on ShapeNet demonstrate improved performance over existing methods.
Technical Workshop
Research & Education
Animation
Physical AI
Robotics
DescriptionLegged robots, particularly humanoids, represent an emerging technology whose widespread acceptance depends on their ability to perform meaningful tasks at the human cadence in the real world. While human motion data can drive progress in this field, it is often sparse and lacks action labels, limiting the effectiveness of supervised learning. Recent advancements in imitation learning, reinforcement learning, and robotic hardware improvements have led to better generalization of natural behaviors in robots. This workshop will bring together leaders in human/animal simulation, control, animation, and robotics to discuss the state-of-the-art techniques for natural motion generation of physics-based characters and robots.
Technical Paper
Research & Education
Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionA framework that generates past and future stages of the drawing process for drawing-process videos.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Physical AI
Full Conference
Experience
DescriptionWhat might an AI-powered future look like? The rapid evolution of computing capabilities is transforming the process of scientific discovery and creative production, and inventing new ways to experience and interact with art, music, design, film, literature, theatre, fashion, and every other sphere of cultural production. This BoF seeks to bring together technologists, artists, arts organizations, and researchers to discuss the impact AI and generative technologies may have on our future.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering. Our goal is to enhance the visual fidelity of materials with detail that is often tedious to author, by adding signs of wear, aging, weathering, etc.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present the first generative model for neural BTFs, enabling single-shot generation from arbitrary text or image prompts. To achieve this, we introduce a universal neural material basis and train a conditional diffusion model to generate materials in this basis from flash images, natural images and text prompts.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionLimited high-quality ground-truth data hinders traditional video matting's real-world application. This work tackles this by advocating for large-scale training with diverse synthetic segmentation and matting data. A novel generative pipeline is also introduced to predict temporally consistent alpha masks with fine-grained details.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a systematic derivation of a continuum potential defined for smooth and piecewise smooth surfaces, by identifying a set of natural requirements for contact potentials. Our potential is formulated independently of surface discretization and addresses the shortcomings of existing potential-based methods while retaining their advantages.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionA casual discussion on production pipeline development. Come talk shop about the latest trends, common concerns, and best practices.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis is part of a linked series of Technical Pipeline BoFs, covering the Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps. Join a participant-driven discussion with key representatives from the graphics community, comparing experiences and exploring techniques related to pushing the production pipeline and resources toward the cloud.
This session will focus on peer-to-peer learning, collaboration and creativity, and high-value topics raised during the session will be explored further via The Pipeline Conference's online Speaker Series and in the Cloud Native user group (https://discord.gg/PU8hygUfbf).
Attendees will receive invites to our "Beers of a Feather" event, the same evening.
Course
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Education
Games
Geometry
Graphics Systems Architecture
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionThis course introduces graphics programmers of all levels to the latest GPU feature, "Work Graphs", and how to use it in HLSL and Direct3D 12 for their own projects.
After this class, participants should be able to understand, explain, and apply Work Graphs in their own problem domain.
Course
New Technologies
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Math Foundations and Theory
Modeling
Physical AI
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Full Conference
Virtual Access
Thursday
DescriptionComputer graphics has evolved from a visualization tool into a driving force behind scientific discovery, shaping advancements in biology, physics, and beyond. This course explores how graphics techniques have revolutionized interdisciplinary research, inspiring new frontiers in science.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGSHeadRelight enables fast, high-quality relightability for 3D Gaussian head synthesis. A linear light model based on learnable radiance transfer is integrated into the native 3DGS rasterization process and supports colored illumination. Without requiring expensive light stage data, our method achieves 240+ FPS rendering speed and offers state-of-the-art relighting results.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionA guided lens sampling technique that improves Monte Carlo rendering of depth-of-field by projecting a global 3D radiance field into lens space via bipolar-cone projection. This method efficiently targets high-contribution regions, significantly reducing noise and improving convergence for circle-of-confusion effects in production rendering.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWalk on stars (WoSt) has proven powerful when applied to Monte Carlo methods for solving PDEs, but its sampling techniques are unsatisfactory, leading to high variance. Inspired by Monte Carlo rendering, we propose a guiding-based importance sampling method to reduce the variance of WoSt.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe solve an inverse hand-shadow problem: finding poses of left and right hands that together produce a shadow resembling the target 2D input, e.g., animals, letters, and everyday objects. Our three-stage pipeline decouples the anatomical constraints and semantic constraints, and our benchmark provides 210 diverse shadow shapes of varying complexity.
Emerging Technologies
Gaming & Interactive
New Technologies
Research & Education
Hardware
Robotics
Virtual Reality
Full Conference
Experience
DescriptionHandoid is a novel hand-shaped robotic avatar that switches its morphology between acting as a part of a humanoid robot, and an independent hand-shaped robot avatar physically separated from the humanoid body. Handoid enhances remote user embodiment and expands the operational workspace, opening new horizons for robotic interaction.
Labs
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Digital Twins
Games
Real-Time
Simulation
Full Conference
Experience
DescriptionThis class walks participants through creating global-scale virtual worlds from real-world geospatial data, leveraging Cesium and 3D Tiles on the web and in game engines.
Labs
Arts & Design
Production & Animation
Not Livestreamed
Not Recorded
Art
Generative AI
Full Conference
Experience
DescriptionThis hands-on class guides participants through generating their own AI visuals (non-real-time) for projection mapping. They will map these creations onto physical surfaces using real-time tools and optimize rendering with interactive shaders. The session emphasizes creativity, ethical AI use, and sustainable design for accessible immersive experiences.
Labs
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Dynamics
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionExplore how NVIDIA Earth-2 facilitates efficient weather and climate modeling. Learn how to run a large and growing stack of global AI weather forecasting models and how downscaling models generate super-resolution outputs. Discover use cases and applications benefiting the most from this emerging technology.
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Experience
DescriptionThis class focuses on how the artist-centered solutions developed at Blender Studio over more than a decade of filmmaking can help non-technical artists work together seamlessly.
Labs
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Performance
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionThis Labs hands-on session presents the digital twin workflow behind Hammock Tower, an architectural design for Paris in 2100, which leverages NVIDIA Omniverse, SimScale, Cesium, and Autodesk Forma to inform climate-resilient strategies for a projected +4°C future.
Labs
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Digital Twins
Real-Time
Rendering
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionCreate an application for the Apple Vision Pro to configure a photoreal 3D asset. We'll develop an application and set it up to communicate with a product configurator built with the Omniverse Kit SDK and OpenUSD, and implement custom Swift UI to interact with the virtual product in real time.
Labs
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Digital Twins
Physical AI
Real-Time
Rendering
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionLearn how to create captivating spatial experiences with the Apple Vision Pro, leveraging Swift UI and Xcode for front-end development while utilizing NVIDIA Omniverse as a powerful backend server.
Labs
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Geometry
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
DescriptionDive into the cutting-edge world of digital twin technology for robotics. Learn to create virtual environments with OpenUSD, simulate robots using NVIDIA Isaac Sim, and control them via ROS. This hands-on lab equips you with essential skills for software-in-the-loop testing in industrial robotics applications.
Labs
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Full Conference
Experience
DescriptionIn this lab, we introduce participants to graphics in p5.js by creating an interactive postcard with a mix of 2D and 3D elements. This walkthrough includes an introduction to code-based animation, parameterized visuals, mouse and touch interactivity, and screen reader support in p5.js.
Labs
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Lighting
Performance
Real-Time
Rendering
Full Conference
Experience
DescriptionThis Lab will demonstrate a practical, end-to-end workflow that merges rasterization and ray tracing using Vulkan’s latest features. Participants will see how helper libraries, modular shaders (via Slang), and recent dynamic rendering extensions create a gentler ramp onto advanced graphics techniques while still exposing the low-level API handles seasoned developers expect.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Education
Generative AI
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionDiscover how to integrate state-of-the-art open-source generative AI into your 3D pipeline to flow from idea to 3D asset. In this 90-minute session, you'll build a ComfyUI workflow that transforms concept art into image arrays in any style, culminating in delivery to AI3D endpoints to generate 3D models.
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Education
Pipeline Tools and Work
Full Conference
Experience
DescriptionIn this session, we will focus on extending Blender’s functionality through its powerful Python API. Starting from fundamental concepts such as operators, we will gradually increase the complexity of our solution to meet a real use-case scenario.
Labs
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Education
Generative AI
Full Conference
Experience
DescriptionAn introduction to generative AI models such as Transformers, diffusion models, and NeRFs, and their applications in computer graphics.
Labs
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Image Processing
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionThis hands-on lab introduces Slang, an open-source, open governance shading language hosted by Khronos that simplifies graphics development across platforms. Designed to tackle the growing complexity of shader code, Slang offers modern programming constructs while maintaining top performance on current GPUs.
Labs
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Geometry
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
DescriptionIn this course, we'll discuss OpenUSD fundamentals in the domain of robotics, including the benefits of a robot asset structure in OpenUSD, best practices for the asset structure used by the URDF Importer in Isaac Sim, and optimizations that can be performed on a robot asset.
Labs
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Geometry
Modeling
Simulation
Full Conference
Experience
DescriptionGeometry Nodes are Blender’s powerful and ever-improving visual framework to create procedural content. This session focuses on the creation of dynamic environment elements, such as space/air traffic in a sci-fi setting. Focus will be on building a system that is flexible, yet easy enough to tweak by a non-technical artist.
Labs
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Rendering
Full Conference
Experience
DescriptionCovers installing the OSL repository and any necessary software libraries; building the repository contents, including the OSL testshade and testrender tools; and writing custom OSL shaders. A summary of OSL source code repositories will conclude the lab.
Labs
Gaming & Interactive
Not Livestreamed
Not Recorded
Performance
Real-Time
Rendering
Virtual Reality
Full Conference
Experience
DescriptionLearn to render gaussian splats in real-time on mobile and standalone VR using UnityGaussianSplatting. This intermediate workshop covers its fundamentals, followed by optimizations for mobile GPUs.
Labs
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionPaper Animatronics is a project-based learning activity where students create characters and stories and bring them to life through papercraft with sound and motion! Like making posters or dioramas, paper animatronics can be used to reinforce learning in almost any subject.
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Pipeline Tools and Work
Rendering
Full Conference
Experience
DescriptionFlamenco is the Open Source render farm software developed by Blender Studio. It can be used for distributed rendering across multiple machines, but also as a single-machine queue runner. This hands-on class will briefly teach how to install and use it, and then focus on customizing it to your needs.
Labs
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Education
Graphics Systems Architecture
Image Processing
Math Foundations and Theory
Performance
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionIn this lab, we'll build a pair of quantum circuits that teleport the state of a quantum bit from one place to another. We'll start by running a few small quantum programs to get comfortable with the process and see the probabilistic nature of quantum measurement.
Labs
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Art
Real-Time
Rendering
Full Conference
Experience
DescriptionIn this course, we create real-time interactive graphics on embedded systems. We also design dynamic visuals with GPU shaders, mapping input data using gestural libraries for control. Finally, we explore embedded system limitations and direct-to-GPU rendering to bridge creative expression and technical implementation for artists, developers, and researchers.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Audio
Display
Education
Games
Haptics
Hardware
Performance
Physical AI
Robotics
Simulation
Full Conference
Experience
DescriptionThe Scrapyard Challenge is an interactive workshop where participants create unique arcade and console gaming controllers from found materials for classic games like Street Fighter, Pac-Man, Donkey Kong, Mario Kart, and more.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Digital Twins
Education
Haptics
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionJoin us in learning how to create 3D and 2D interfaces and graphics with RealityKit, CoreML, and SwiftUI for visionOS applications. Together, we will cover the core design principles of 2D/3D UI and dive into depth perception, spatial awareness, and natural gestures.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Real-Time
Rendering
Full Conference
Experience
DescriptionThis talk covers several methods of stylizing real-time projects built in Unreal Engine for non-photoreal rendering.
Labs
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Dynamics
Geometry
Performance
Physical AI
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Experience
DescriptionThis course offers an introduction to NVIDIA Kaolin library and an in-depth exploration of its physics module. Attendees will learn how to interactively render 3D Gaussian splats, and extend them to physics simulation with collisions using NVIDIA Warp accelerated features.
Course
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Augmented Reality
Haptics
Virtual Reality
Full Conference
DescriptionThis workshop explores the integration of haptic gloves in extended reality (XR) applications. Participants will learn about haptic perception, and the functionality and types of haptic gloves. Through lectures and interactive discussions, attendees will learn to assess glove capabilities and identify their benefits and limitations to enhance user experiences effectively.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Audio
Capture/Scanning
Digital Twins
Fabrication
Simulation
Full Conference
Experience
DescriptionParticle Forest uses machine vision to preserve a digital trace of ancient trees of historical, cultural, and ecological significance in order to build awareness of these charismatic megaflora's grandeur and plight.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present HexHex, which extracts a hexahedral mesh from a locally injective integer-grid map. Key contributions include a conservative rasterization technique and a novel mesh data structure called propeller. Our algorithm is significantly faster and uses less memory than the previous state-of-the-art method, especially for large hex-to-tet ratios.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present SplatDiff, a pixel-splatting-guided diffusion model for single-image novel view synthesis (NVS). Leveraging pixel splatting and video diffusion, SplatDiff generates high-quality novel views with consistent geometry and high-fidelity details. SplatDiff achieves state-of-the-art results in single-view NVS and demonstrates remarkable zero-shot performance on sparse-view NVS and stereo video conversion.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper presents a CPU-based cloth simulation algorithm that partitions garment models into domains that can be processed by each individual CPU core. Using projective dynamics with domain-level parallelization, this method achieves high performance comparable to GPU methods and runs about an order of magnitude faster than existing CPU approaches.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Industry Insight
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionLeveraging the GPU in rigging is challenging due to the proprietary deformers required for photorealistic creature work. We present our strategy for achieving both high-performance and high-quality deformations, giving animators an interactive experience with high visual fidelity in a streamlined asset pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper introduces stratification into resampled importance sampling (RIS) technique for real-time photorealistic rendering. It organizes sample candidates into local histograms and then employs Quasi Monte Carlo and antithetic patterns for efficient sampling. This low-overhead approach significantly reduces rendering noise, improving visual quality compared to existing methods.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present HOIGaze – a novel approach for gaze estimation during hand-object interactions in extended reality. HOIGaze features: 1) a novel hierarchical framework that first recognises attended hand and then estimates gaze; 2) a new gaze estimator combining CNN, GCN, and cross-modal Transformers; and 3) a novel eye-head coordination loss.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a novel representation for learning and generating Computer-Aided Design (CAD) models in the form of boundary representations (BReps). Our representation unifies the continuous geometric properties of BRep primitives in different orders (e.g., surfaces and curves) and their discrete topological relations in a holistic latent (HoLa) space.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionHoloChrome introduces a novel holographic display method that multiplexes multiple wavelengths across two spatial light modulators to enhance image quality. By moving beyond standard three-color primary systems, it significantly reduces speckle noise while preserving natural depth cues and achieving more accurate color reproduction.
Immersive Pavilion
Gaming & Interactive
New Technologies
Animation
Augmented Reality
Games
Virtual Reality
Full Conference
Experience
DescriptionA mixed reality music video that allows you to enjoy music and games at the same time using the pass-through function of Quest 3. VR idol Mikasa will visit your room, sing and dance in front of you, and help you defeat monsters that suddenly appear.
Frontiers
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Physical AI
Robotics
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionAs artificial intelligence, mixed reality, and conversational interfaces become deeply embedded in daily life, how are they reshaping human connection and communication? Join human-centered mixed-reality designer Ketki Jadhav and cognitive linguist Aubrie Amstutz for an interactive exploration of communication's transformation. Through speculative dialogue and group activities, we'll examine critical questions: How does mixed reality's partial translation of body language impact group dynamics? Do voice assistants change how we communicate with humans? Can AI avatars fulfill our social needs? This 90-minute workshop combines expert perspectives with hands-on engagement to envision communication's evolving landscape.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Dynamics
Games
Math Foundations and Theory
Simulation
Full Conference
Virtual Access
Thursday
DescriptionVirtual crowd simulation is prevalent in graphics and VFX. We demonstrate that widely popular state-of-the-art algorithms fail in several basic benchmark cases. With the goal of designing more robust algorithms, we propose a workaround, which can be easily integrated into real-time applications that simulate crowds.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Ethics and Society
Generative AI
Scientific Visualization
Full Conference
Experience
DescriptionHow to Find the Soul of a Sailor is an immersive audio/visual artwork about the future of the oceans told from the perspective of the artist’s late father. A sailor of many years, he left a number of journals from his travels as an officer in the Merchant Navy. Collaborating with The New Real Observatory Platform AI tools, Molga used these journals as small data sets to create future stories in the voice of her Dad. This work is a new take on marine art, and also touches upon digital afterlife.
Birds of a Feather
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Games
Industry Insight
Lighting
Full Conference
Experience
DescriptionThe ACM SIGGRAPH Early Career Development Committee’s “Resume and Reel Review” program has long offered students and early career professionals the chance to have their work reviewed by industry experts at SIGGRAPH conferences.
This session will feature a panel of industry professionals who will discuss and review a selection from this year’s program live. This unique opportunity allows attendees to see real-time reviews, learn what makes a great resume and reel, and ask questions directly to experts.
Register for a one-on-one session at https://ecdc.siggraph.org.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
DescriptionWe present a modular, asset-centric CFX workflow for costumes and hair/fur. It focuses on individual asset-level simulation setups, constructing scenes procedurally by merging assets and solving them together. Shot-specific modifications and overrides are applied hierarchically, allowing automatic updates with upstream changes, reducing manual rework and streamlining iterative processes.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionExisting avatar methods typically require sophisticated dense-view capture and/or time-consuming per-subject optimization processes. HumanRAM proposes a feed-forward approach for generalizable human reconstruction and animation from monocular or sparse human images. Experiments show that HumanRAM achieves state-of-the-art results in terms of reconstruction accuracy, animation fidelity, and generalization performance.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Games
Generative AI
Full Conference
Virtual Access
Experience
Tuesday
DescriptionHyborg Agency proposes an artistic perspective on AI agents: We can design AI agents that maintain their distinct non-human nature while meaningfully participating in human social contexts.
Presenting AI agents as mechanical deer nurtured by community conversations, this computational ecosystem demonstrates how defamiliarized AI agents can enrich human social experiences while promoting transparency about their artificial nature, contributing to more sustainable and ethical human-AI symbiotic relationships.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Capture/Scanning
Generative AI
Pipeline Tools and Work
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionA creator’s play is never done. Our Hybrid Dance Xplorations workshop invites you to play with us as we adventurously explore virtual camera control, motion capture, generative AI, touchless or gesture-based interaction, and potentially clothing/costume design & simulation – in evolving configurations of our XR sandbox for co-creation and hybrid performance. We will share previous and current work including three use/play cases for movement-based experiences with contemporary dance and salsa (solo, pairs and rueda). Presenters include local artists, researchers and technologists. A highlight will be engaging group activities for workshop attendees to contribute to ideas, laughter and ambitions to date.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose Hybrid Tours, a hybrid approach to creating long-take shots by combining short video clips in a virtual interface. We show that Hybrid Tours makes capturing long-take touring shots much easier, and that clip-based authoring and reconstruction lead to higher-fidelity results at lower compute costs.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionEver feel like three dimensions isn't quite enough? We performed the analysis necessary to simulate the motion of deformables in four spatial dimensions! Along the way, we developed techniques for generating simulation-ready hyper-meshes, analyzing hyper-dimensional deformation energies, and detecting and responding to collision scenarios for softbodies in any dimension.
Emerging Technologies
New Technologies
Research & Education
Virtual Reality
Full Conference
Experience
DescriptionWe developed, to our knowledge, the first virtual reality head-mounted display (HMD) that combines the visual benefits of above-retinal resolution with high brightness (~1000 nits) and high contrast. When showcasing hyperrealistic rendered scenes, it provides a new milestone for how realistic VR experiences can be.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce Image-GS, a content-adaptive image representation based on colored 2D Gaussians. Image-GS achieves remarkable rate-distortion performance across diverse images and textures while supporting hardware-friendly fast random access and flexible quality control through a smooth level-of-detail hierarchy. We demonstrate its versatility with two applications: semantics-aware compression and image restoration.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionOur goal is to accelerate inverse rendering by reducing the sampling budget without sacrificing overall performance. We introduce a novel image-space adaptive sampling framework that dynamically adjusts pixel sampling probabilities based on gradient variance and contribution to the loss function.
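The allocation idea in that abstract can be sketched in a few lines: give each pixel a sample budget proportional to its gradient variance, with a small floor so no pixel is starved. The function and parameter names are assumptions for illustration, not the paper's implementation.

```python
def allocate_samples(grad_variance, total_samples, floor=1):
    """Distribute a total sample budget across pixels in proportion to
    their gradient variance, keeping at least `floor` samples per pixel
    so every pixel stays explored (illustrative sketch only).
    Note: the floor can push the sum slightly above the nominal budget."""
    total_var = sum(grad_variance)
    if total_var == 0:
        # No gradient signal yet: fall back to a uniform allocation.
        return [max(floor, total_samples // len(grad_variance))] * len(grad_variance)
    return [max(floor, round(v / total_var * total_samples))
            for v in grad_variance]
```

In practice such budgets would be recomputed every few optimization iterations as the gradient-variance estimates evolve.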
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis work introduces an efficient image-space collage technique that optimizes geometric layouts using a differential renderer and hierarchical resolution strategy. Our approach simplifies complex shape handling in image-space optimization, offering fixed computational complexity. Experiments show our method is an order of magnitude faster than state-of-the-art while supporting diverse visual expressions.
Emerging Technologies
Gaming & Interactive
New Technologies
Research & Education
Games
Haptics
Robotics
Virtual Reality
Full Conference
Experience
DescriptionWe propose the concept of “Imaginary Joints and Muscles” to provide intuitive proprioceptive feedback for extended body parts without reliance on vision. Our system uses skin stretch to simulate torque at virtual joint interfaces, evoking muscle-like sensations that accurately represent posture and motion, thereby enhancing the user’s body awareness.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose IMLS-Splatting, an end-to-end multi-view mesh optimization method that leverages point clouds for surface representation. By introducing a splatting-based differentiable IMLS algorithm, our approach efficiently converts point clouds into an SDF and a texture field, enabling multi-view mesh optimization in approximately 11 minutes and achieving state-of-the-art reconstruction performance.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionTraditionally (and since 2014), this session is about immersive visualization systems, software, and tools for science, research, scientific visualization, information visualization, art, design and digital twins. Invited speakers and panelists discuss newest initiatives and developments in immersive space as applied to data exploration, scientific discoveries, and more.
Educator's Forum
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Education
Pipeline Tools and Work
Full Conference
Virtual Access
Experience
Wednesday
DescriptionOver the course of four productions, we demonstrate an incremental USD adoption timeline suitable for small studio and educational contexts, resulting in a workflow that uses USD end to end. We present this process as a case study for how any small studio can implement USD into their pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose a physics-driven approach to IMU-based motion capture, improving global motion estimation with 3D contact modeling and gravity awareness. Our method estimates world-aligned 3D motion, contact points, contact forces, joint torques, and proxy surface interactions using only six IMUs in real time.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionWhen little Nora's parents split up, the Earth splits in two. She now has to juggle between both
hemispheres to visit them. Problem is, the backpack she's carrying is getting heavier and
heavier...
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a novel construction algorithm for 3D Apollonius diagrams designed for GPUs. Our method features fast execution while allowing comprehensive computation. This is made possible by a light data structure, a cell update procedure, and a spatial exploration strategy, all designed to support the diagram's properties.
Art Paper
Arts & Design
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Computer Vision
Ethics and Society
Games
Generative AI
Performance
Real-Time
Full Conference
Virtual Access
Experience
Monday
DescriptionIn 2025, rumored to be the "year of AI agents," the artwork in(A)n(I)mate envisions a future where AI systems act behind the scenes of objects, providing them agency and performativity, animating them, and bringing them closer to human awareness. By inviting conversations with everyday objects, in(A)n(I)mate challenges us to reconsider agency, interaction, and the boundaries of intelligence.
Real-Time Live!
New Technologies
Livestreamed
Recorded
Capture/Scanning
Computer Vision
Dynamics
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionWe present InfiniteStudio, the first 4D volumetric capture system that meets the visual fidelity requirements for professional-grade video production. Building upon innovations in 4D Gaussian Splatting, InfiniteStudio reduces production time while unlocking unprecedented creative freedom during post-production. It paves the way for the next-generation interactive media and immersive spatial storytelling.
Course
New Technologies
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Generative AI
Pipeline Tools and Work
Full Conference
Virtual Access
Thursday
DescriptionThis three-hour, hands-on workshop introduces artists, designers, and educators to ComfyUI, a powerful node-based interface for generative AI. Participants will install ComfyUI, then learn workflows for inpainting, outpainting, IP Adapters, ControlNet variants (depth, canny, pose), image-to-3D, and image-to-video, gaining practical skills and creative inspiration with open source models and tools.
Labs
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Digital Twins
Education
Games
Generative AI
Modeling
Pipeline Tools and Work
Real-Time
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionCombining AI text-to-3D generation and peer-to-peer file sharing, MeshTorrent introduces a scalable platform for decentralized creation and exchange of 3D assets. This advancement empowers collaborative generation, rapid previews, and seamless sharing of .glb models, including extensions for 2D sprites and rigged characters, revolutionizing digital content creation for modern AI art.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionINKi enables instance segmentation for scene sketches by adapting image segmentation models with class-agnostic tuning and depth-based refinement. We introduce a new dataset INK-scene with diverse styles and demonstrate layered sketch organization for advanced editing, including inpainting occluded instances—paving the way for robust, editable sketch understanding.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose InstanceGen - a new technique for improving Text-to-Image models' ability to generate images for prompts describing multiple objects, attributes, and spatial relationships. InstanceGen requires no training or additional user inputs and achieves state-of-the-art results in terms of both accuracy and visual quality on these highly challenging prompts.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel framework that instantly (< 1 sec) repairs self-intersections in static surface meshes, which commonly occur during the 3D modeling process.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionInstantRestore is a fast, personalized face restoration framework that uses a single-step diffusion model with an extended self-attention mechanism to match low-quality image patches to high-quality reference patches. Leveraging implicit correspondences in the denoising network, we efficiently transfer identity details in one pass, enabling real-time, identity-preserving restoration without per-identity tuning.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Art
Display
Dynamics
Education
Fabrication
Haptics
Hardware
Lighting
Robotics
Full Conference
Experience
DescriptionA three-dimensional dynamic deformable display using novel liquid metal wiring is demonstrated. An integrated strain sensor on the stretchable display calculates its deformation and enables direct interaction between the users and the display device. This unprecedented display type provides a completely novel interactive experience with dynamic deformation.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Art
Virtual Reality
Full Conference
Experience
DescriptionInteractive Impossible Objects transforms illusions into physical VR experiences. By separately rendering each eye’s viewpoint and applying redirected walking and hand redirection, users can walk on endless staircases or touch paradoxical forms while preserving the illusion. This approach opens new frontiers for immersive art and perceptual research, bridging illusions and reality.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a method for interactive design of procedural patterns, allowing users to sketch content incrementally in a level-by-level fashion. Each level, or scaffold, builds on the previous one, making optimization more responsive and controllable. A comprehensive validation demonstrates improved editing experience compared to conventional techniques.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce an automatic tool to retarget artist-designed garments from a standard mannequin to possibly non-human avatars with unrealistic characteristics, which widely appear in games and animations. We preserve the geometric features of the original design, guarantee intersection-free results, and fit the garment adaptively to the avatars.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a novel interspatial attention (ISA) for diffusion transformers, which maintains identity and ensures motion consistency while allowing precise control of camera and body poses. Combined with a custom video variational autoencoder, our model achieves state-of-the-art performance for photorealistic 4D human video generation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionA generative workflow for precise image editing using an intrinsic-image latent space. Built on RGB-X diffusion, it enables diverse edits—like relighting, color changes, and object manipulation—while preserving identity and ameliorating intrinsic-channel entanglement. All this is done without extra data or fine-tuning, achieving state-of-the-art results.
Course
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Audio
Display
Education
Games
Image Processing
Lighting
Math Foundations and Theory
Modeling
Real-Time
Rendering
Scientific Visualization
Full Conference
Virtual Access
Tuesday
DescriptionThe Fourier Transform is fundamental to computer graphics, explaining topics from aliasing and sampling to image compression and filtering. This friendly course explains the principles in words, pictures, and animation, rather than math. The concepts are the important thing. We show that they are comprehensible, useful, and beautiful.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn this paper, we present a computational approach for designing Discrete Interlocking Materials (DIM) with desired mechanical properties. We demonstrate the effectiveness of our method by designing discrete interlocking materials with diverse limit profiles for in- and out-of-plane deformation and validate our method on fabricated physical prototypes.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a computational framework for optimizing shape sequences to achieve user-defined motion objectives in deformable bodies undergoing geometric locomotion. Through a reduced spatiotemporal parameterization of the shape sequences, our method is able to efficiently capture the complex coupling between shape changes and motion in different environments.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIP-Composer is a novel, training-free method for compositional image generation from multiple reference images. Extending IP-Adapter, it uses natural language to identify concept-specific subspaces in CLIP, projects input images into these subspaces to extract targeted concepts, and fuses them into composite embeddings—enabling fine-grained, controllable generation across diverse visual concepts.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper presents T-Prompter, a method for visually prompting generative models to enable continuous image generation for specific themes, characters, and scenes. It introduces Dynamic Visual Prompting to enhance generation accuracy and quality, outperforming existing methods in maintaining character identity, style consistency, and text alignment.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper introduces a gradient combiner that blends unbiased and biased gradients in parameter space using the James-Stein estimator to infer scene parameters (BSDFs and volumes) from images. This approach enhances optimization accuracy compared to relying solely on either unbiased or biased gradients.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper presents a new GPU simulation algorithm, which converges as fast as the global Newton's method and is as efficient as the Jacobi method.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionWind appears in a park. People fly away.
Birds of a Feather
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe Washington Post Video team invites you to join in this Birds of a Feather meet-up for graphics artists in journalism and documentary film.
Creating graphics for documentary or news reporting comes with its own set of unique challenges and creative opportunities. Whether you’re visualizing investigations, crafting illustrations, rendering scientific concepts or conducting research, let’s get together and share our insights while making new connections.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Art
Augmented Reality
Virtual Reality
Full Conference
Experience
DescriptionThis work is a system for experiencing the traditional Japanese painting of flowers and birds. Users can paint pictures on the sliding doors with four different types of brushes. For example, when a branch is added to a cherry tree, flowers bloom around the painted branch.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a novel shadow method named kernel predicting neural shadow mapping. By modeling soft shadow values as pixelwise local filtering of basic hard shadow values, we trained a neural network to predict local filter weights, achieving accurate and temporally stable soft shadows with good generalizability.
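The core operation described there — a soft-shadow pixel computed as a local weighted average of hard shadow-map values — can be sketched as follows. Here the `weights` kernel is supplied directly as a stand-in for the network-predicted filter; all names are illustrative assumptions, not the paper's code.

```python
def filter_soft_shadow(hard, weights, x, y, radius):
    """Compute one soft-shadow value at pixel (x, y) as a normalized,
    weighted average of hard shadow values in a (2r+1)^2 window.
    `weights` plays the role of the network-predicted kernel
    (illustrative sketch only)."""
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = x + dx, y + dy
            # Skip neighbors that fall outside the image.
            if 0 <= py < len(hard) and 0 <= px < len(hard[0]):
                w = weights[dy + radius][dx + radius]
                num += w * hard[py][px]
                den += w
    # Fall back to the hard value if all weights are zero.
    return num / den if den else hard[y][x]
```

In the actual method the kernel weights would vary per pixel, predicted from screen-space features, which is what lets a single filtering pass approximate penumbra of varying width.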
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Games
Haptics
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionDevelopers around the world count on Khronos open standards to enable high-performance, cross-platform 3D graphics, AR/VR, vision processing, parallel computation, and machine learning acceleration. Khronos is a member-funded consortium that welcomes participation from companies and universities. Join us to hear the latest updates from the working groups shaping the standards behind the next generation of applications and devices.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Education
Games
Graphics Systems Architecture
Industry Insight
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionThe ACM SIGGRAPH Early Career Development Committee (ECDC)’s Resume and Reel Review program has long provided students and early-career professionals with valuable feedback from industry experts—both at SIGGRAPH conferences and online.
In this session, committee members will share essential best practices for launching your career in the computer graphics industry. Topics include crafting an impactful resume and demo reel, as well as tips on finding mentorship and professional development opportunities.
Be sure to also check out our Birds of a Feather session: How to Get Your Resume & Reel Noticed.
Register for a one-on-one session at https://ecdc.siggraph.org.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a simple, but effective framework for kinematically retargeting contact-rich anthropomorphic hand-object manipulations by exploiting contact areas. We reliably retarget contact area data between diverse hands using a novel non-isometric shape matching process and generate high quality results using the retargeted contacts alongside a straightforward IK-based motion synthesis pipeline.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionLAM is an innovative Large Avatar Model for animatable Gaussian head reconstruction from a single image in seconds. Our Gaussian heads are immediately animatable and renderable without additional networks or post-processing. This allows seamless integration into existing rendering pipelines, ensuring real-time animation and rendering across various platforms, including mobile phones.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis work introduces a conditional generative framework for large-scale multi-character interaction synthesis. It facilitates natural interactive motions and transitions in which characters are coordinated with new interaction partners, proposing a coordinatable multi-character interaction space and a transition planning network to achieve scalable, transferable multi-character animations.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionWe present the Launcher, a software environment configuration tool that has contributed to the success of numerous productions over the course of two decades. We explore the core features that enable us to manage a large number of configurations across numerous departments and projects while balancing stability and flexibility.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose LayerFlow, a unified framework for layer-aware video generation, enabling seamless creation of transparent foregrounds, clean backgrounds, and blended scenes. With multi-stage training and LoRA techniques improving layer-wise video quality with limited data, it also supports variants like video layer decomposition, generating backgrounds for given foregrounds and vice versa.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionLayerPano3D is a novel framework that generates hyper-immersive 3D panoramic scenes from a single text prompt. By decomposing panoramas into multiple layers and optimizing them as 3D Gaussians, it enables full 360°×180° exploration with consistent visual quality, unlocking new possibilities for virtual reality and scene generation.
Technical Paper
Research & Education
Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present Leapfrog Flow Maps (LFM), a fast hybrid velocity-impulse scheme with leapfrog integration. The computations are further accelerated by a matrix-free AMGPCG solver optimized for GPUs. As a result, LFM achieves high performance and fidelity across diverse examples, including fireballs and wingtip vortices.
Course
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
DescriptionIn this course, learn how you can develop OpenUSD schemas to customize 3D workflows. Explore schema types, data modeling, and the standardization process. Gain hands-on experience in extending OpenUSD's capabilities to meet specific project needs and industry requirements.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a reinforcement learning framework for assembling structures composed of rigid parts. A pre-trained policy generates alternative assembly plans, enabling rapid adaptation to unexpected disruptions. Our approach supports efficient and robust planning for multi-robot assembly tasks.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present an image-to-image drawing setup capturing eye tracking and stroke data across 156 drawings from 10 artists. Our findings reveal consistent fixation patterns, strong gaze–stroke correlations, and structured drawing sequences, offering new insights into professional observation strategies and observation-guided assistive drawing system design.
Art Paper
Arts & Design
New Technologies
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Generative AI
Performance
Real-Time
Robotics
Full Conference
Virtual Access
Experience
Tuesday
Description"Learning to Move, Learning to Play, Learning to Animate" is an interdisciplinary multimedia performance, merging real-time AI visuals, plant biofeedback, and found object robotics to explore more-than-human intelligence. Challenging anthropocentrism, it envisions co-creative agency among humans, machines, and organic entities, inviting audiences to experience intelligence as relational, embodied, and shared across natural and synthetic forms.
Course
Research & Education
Livestreamed
Recorded
Animation
Dynamics
Education
Geometry
Modeling
Simulation
Full Conference
Virtual Access
Thursday
DescriptionLevel-of-detail (LoD) is a concept we experience in everyday life and an important topic in graphics. This course explores modern LoD techniques beyond rendering, focusing on hierarchical representations and multilevel solvers for geometry processing and simulation. We demonstrate applications showing how LoD enables efficient, accurate and scalable geometric computation.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Art
Digital Twins
Games
Industry Insight
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores how 3D Artists in Animation, VFX, and Gaming can transition into industries like fashion, product design, architecture, and more. Learn how to adapt your portfolio, leverage in-demand skills, and bridge knowledge gaps to unlock new career opportunities beyond entertainment in an evolving professional landscape.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe designed a neural field capable of capturing a diverse family of discontinuities, enabling the simulation of cuts in thin-walled deformable structures. By lifting input coordinates using generalized winding numbers, our approach models discontinuities explicitly and flexibly, supporting accurate, real-time simulations with dynamic cut updates and user-interactive cut shape design.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present and analyze a holographic augmented reality display with a bandwidth-preserving guiding method using a light pipe. We propose the use of a light pipe to spatially relocate the light engine away from the image combiner at the front module, enabling enhanced weight distribution and an obstruction-free view while preserving the wavefront bandwidth.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionLightLab is a diffusion-based method for parametric control over light sources in an image. Leveraging the linearity of light, we create a dataset of controlled illumination changes from a small set of real image pairs and synthetic renders, which is used to fine-tune a model to enable physically plausible edits.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce an inverse-LU preconditioner to solve for the typical asymmetric and dense matrices generated by boundary element methods (BEM). The computational efficiency and low memory requirements of our approach conspire to scale up to millions of degrees of freedom, with orders of magnitude speedups in solving times.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA story about some people who are like nature.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a simple, parallelizable algorithm inspired by rectified flows to match probability distributions. With linear-time complexity, it approximates optimal transport by employing summed-area tables and direct particle advection. We illustrate applications in stippling, mesh parameterization, and shape interpolation in 2D and 3D.
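As a generic illustration of the summed-area tables mentioned in the description (not the authors' distribution-matching code), a SAT precomputes cumulative sums so that any axis-aligned box sum becomes four table lookups in O(1):

```python
def build_sat(grid):
    """Build a summed-area table: sat[i][j] = sum of grid[0..i][0..j]."""
    h, w = len(grid), len(grid[0])
    sat = [[0.0] * w for _ in range(h)]
    for i in range(h):
        row_sum = 0.0
        for j in range(w):
            row_sum += grid[i][j]
            sat[i][j] = row_sum + (sat[i - 1][j] if i > 0 else 0.0)
    return sat

def box_sum(sat, i0, j0, i1, j1):
    """Sum of grid over the inclusive box [i0..i1] x [j0..j1] in O(1)."""
    s = sat[i1][j1]
    if i0 > 0:
        s -= sat[i0 - 1][j1]
    if j0 > 0:
        s -= sat[i1][j0 - 1]
    if i0 > 0 and j0 > 0:
        s += sat[i0 - 1][j0 - 1]
    return s

grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
sat = build_sat(grid)
# box over rows 1..2, cols 1..2 -> 5 + 6 + 8 + 9 = 28
```

This constant-time box query is what makes linear-time approximations of mass-transport quantities over grids practical.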
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Digital Twins
Display
Dynamics
Haptics
Image Processing
Performance
Real-Time
Scientific Visualization
Simulation
Virtual Reality
Full Conference
Experience
Description"Liquid Views - Echoes of Self" (2025) is a video reimagining of the 1992 interactive installation that blends nature, art, and technology in a reflection on digital identity. It draws on the ancient act of looking into water — a symbol of self-awareness — and uses technology to distort the viewer's reflection, creating a dynamic visualization of identity. The video explores the boundaries between the self and the digital persona, raising questions about human interaction, technological influence, and self-perception in the digital age. "Echoes of Self" offers a commentary on human connection in a fragmented world.
Emerging Technologies
New Technologies
Capture/Scanning
Computer Vision
Virtual Reality
Full Conference
Experience
DescriptionWe propose a novel technology, LiveGS, a real-time free-viewpoint video (FVV) live-broadcast system that generates high-fidelity volumetric human representations in real time and renders them efficiently on mobile devices with a high degree of freedom, while maintaining low transmission bandwidth.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Digital Twins
Ethics and Society
Games
Industry Insight
Modeling
Physical AI
Real-Time
Robotics
Virtual Reality
Full Conference
Experience
DescriptionA scientific research workshop to initiate a global dialogue on humane digital wellbeing, addressing emerging challenges at the intersection of immersive technologies, gaming, misinformation, and AI ethics. The session will present evidence-based findings from cross-national studies exploring the psychological, behavioral, and societal impacts of AI-powered digital environments.
It will examine how these technologies influence cognitive attention, mental health, and public trust, while also highlighting gaps in ethical governance, and it seeks to position digital wellbeing as a critical, philanthropic trajectory.
The session offers a vital platform to engage interdisciplinary researchers and creators in shaping responsible innovation that prioritizes human flourishing in technologically saturated societies.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Emerging Technologies
Arts & Design
New Technologies
Research & Education
Art
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Experience
DescriptionThis interactive experience showcases generative models that create ambiguous anamorphoses—images that hold meaning both normally and when viewed through specific mirrors and lenses. Participants explore these illusions hands-on, generate their own using a custom UI, and take home a small cylindrical mirror to continue discovering hidden visuals beyond the exhibit.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Animation
Art
Computer Vision
Modeling
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionLove That Defies Mortality reinterprets The Peony Pavilion using asymmetric VR and cross-media design, creating immersive, non-linear narrative experiences. By combining traditional Chinese theatre with modern technology, it bridges cultural heritage and contemporary audiences, offering innovative approaches in both academic research and practical application.
Immersive Pavilion
Gaming & Interactive
New Technologies
Research & Education
Education
Games
Simulation
Virtual Reality
Full Conference
Experience
DescriptionLunar Roving Adventure is a VR game that brings NASA's Apollo lunar missions to life. Designed to engage younger generations, it offers an immersive, interactive experience that combines historical facts with educational gameplay.
Talk
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
DescriptionThis paper describes the techniques we use to build the complex light rigs in the Sony Pictures Imageworks lighting pipeline. Our process is semi-automatic and removes manual labour from the artists.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Art
Games
Virtual Reality
Full Conference
Experience
DescriptionMagic You is a virtual reality interactive narrative: a coming-of-age story about ADHD, told in a hand-drawn, coloured-pencil aesthetic as a magical journey. The artwork hopes to present the experiences and imagination of people with ADHD to the audience in a poetic way and give the audience a manifestation of neurodiversity.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce MAGNET (Muscle Activation Generation Networks), a scalable framework for reconstructing full-body muscle activations across diverse human movements, which also includes distilled models for solving downstream tasks or generating real-time muscle activations — even on edge devices. Its efficacy is demonstrated through examples of daily-life and challenging behaviors.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThis work discusses the technical solutions developed for making the Stream of Consciousness in Pixar's Inside Out 2, including in-house procedural tools that facilitated the authoring and stylization of velocity fields interacting with various 3D obstacles.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Masked Anchored SpHerical Distances (MASH), a novel multi-view and parametrized representation of 3D shapes. MASH is versatile for multiple applications, including surface reconstruction, shape generation, completion, and blending, achieving superior performance thanks to its unique representation encompassing both implicit and explicit features.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionMatCLIP assigns realistic PBR materials to 3D models using shape- and lighting-invariant descriptors derived from images, including LDM outputs and photos. It outperforms prior methods by over 15%, enabling consistent material predictions across varied geometry and lighting, with applications to large-scale 3D datasets like ShapeNet and Objaverse.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionMaterialPicker is a multi-modal material generation model that creates high-quality material maps from images and/or text by fine-tuning a video diffusion model. It robustly extracts materials from real-world photos, even with distortion or occlusion, enhancing fidelity, diversity, and efficiency in material synthesis.
Appy Hour
New Technologies
Research & Education
Augmented Reality
Capture/Scanning
Computer Vision
Image Processing
Lighting
Real-Time
Full Conference
Experience
DescriptionMeCapture, our mobile augmented reality app, helps users document and visualize long-term body changes through augmented reality guidance. Built as part of our recently published research, Personal Time-Lapse, it shows potential in personal health monitoring and is freely available on the App Store (MeCapture.com).
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionJoin us for a relaxed and informative mingling event designed for those interested in exploring volunteering opportunities within the ACM SIGGRAPH organisation and in SIGGRAPH Conferences. This is a great chance to meet current key volunteers and leadership, learn about the various roles and committees that you could get involved in, and discover how your skills and passions could make a meaningful impact to our community. Whether you're looking to expand your professional network, gain new experiences, or give back to the community, this event provides a valuable opportunity to get involved with SIGGRAPH!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionMeschers are a mesh representation for Escheresque geometry. They allow us to solve partial differential equations on the surface of an impossible object, meaning that we can find impossible shortest paths, perform mescher smoothing, and even inverse render a mescher from an image.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Geometry
Modeling
Rendering
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel character rigging solution developed for OOOOO, a liquid supercomputer in Pixar's Elio. OOOOO is Pixar's first mesh-free character rig, built as a hierarchical arrangement of implicit surface primitives and operators that allows for complex transformations and offers unprecedented flexibility in character animation.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Geometry
Pipeline Tools and Work
Full Conference
Virtual Access
Thursday
DescriptionElio's OOOOO is Pixar's first character created as an implicit surface, visualized with GLSL in our animation software, Presto. This talk will go past the model/rig stage and consider look development challenges related to the loss of stable mesh data on a shapeshifting character, resulting in a per-frame process.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Education
Performance
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionDemonstration on how we are designing a hyper-reality adaptation of Shakespeare’s Macbeth featuring life-sized metahuman digital doubles that appear to intelligently interact with live actors in a virtual production volume.
Educator's Day Session
Production & Animation
Research & Education
Not Livestreamed
Animation
Capture/Scanning
Digital Twins
Education
Games
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Experience
Monday
DescriptionJoin Epic Games Education for an in-depth overview of the latest MetaHuman updates in Unreal Engine 5.6. This session will explore how advancements in high-fidelity digital humans can benefit post-secondary programs across diverse disciplines: film, animation, games, simulation, fashion, and more. The session will also cover the latest education-focused initiatives from Epic's Education, Learning, and Training team, including partnerships, events, and resources for educators and trainers.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn high-stiffness, high-resolution simulations, while primal space methods typically fail, the dual-space XPBD method produces unphysical softening artifacts due to convergence stall. We design an innovative Algebraic Multigrid method to enhance XPBD, utilizing lazy-update prolongators and near-kernel optimization. Our approach ensures stability, efficiency, and scalability for high-stiffness, high-resolution deformable models.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Education
Games
Geometry
Modeling
Real-Time
Rendering
Scientific Visualization
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionI will show the game Miegakure, in which you are a 3D character inside a 4D world. The way we represent the fourth dimension is by taking a 3D slice through the 4D world, similar to how you can take a 2D slice through a 3D world.
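The slicing analogy in the description can be sketched for implicit shapes; this is an illustrative toy, not Miegakure's implementation — `slice_4d` and `hypersphere` are hypothetical names. Fixing the fourth coordinate of a 4D implicit function yields a 3D implicit function, exactly as fixing one coordinate of a 3D shape yields a 2D cross-section:

```python
def slice_4d(shape_4d, w0):
    """Return a 3D implicit function: the slice of a 4D shape at w = w0,
    analogous to slicing a 3D shape with a plane to get a 2D shape."""
    return lambda x, y, z: shape_4d(x, y, z, w0)

# 4D hypersphere of radius 1 (negative inside, positive outside).
hypersphere = lambda x, y, z, w: x * x + y * y + z * z + w * w - 1.0

slice_at_0 = slice_4d(hypersphere, 0.0)    # a unit sphere
slice_at_08 = slice_4d(hypersphere, 0.8)   # a smaller sphere, radius 0.6
```

Moving the slicing plane along w shrinks or grows the 3D cross-section, which is the intuition behind "seeing" a 4D object one slice at a time.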
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Education
Fabrication
Performance
Robotics
Full Conference
Experience
Description"Mimosa Pithics" revitalizes Hsinchu’s pith paper cultural heritage in Taiwan by integrating biomimicry, shape memory alloy (SMA) technology, and interactive design. This kinetic artwork features pith petals that open and close in response to viewer interaction, evoking the natural movement of a mimosa plant through infrared sensor activation. Developed in close collaboration with local pith paper artisans, the project highlights the expertise of traditional craftspeople while bridging heritage and contemporary technologies. By reimagining endangered crafts through hybrid methods, "Mimosa Pithics" fosters sustainable dialogues between tradition and innovation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce MIND, a novel generative framework for inverse-designing diverse, tileable 3D microstructures. Leveraging latent diffusion and our hybrid neural representation, MIND precisely achieves targeted physical properties, ensures geometric validity, and enables seamless boundary compatibility—opening new avenues for advanced metamaterial design and manufacturing applications.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMany problems in graphics can be formulated as a non-linearly constrained global minimization (MINIMIZE), or solution of a system of non-linear constraints (SOLVE). We introduce MiSo, a domain-specific language and compiler for generating efficient code for low-dimensional MINIMIZE and SOLVE problems, using interval methods to guarantee conservative results.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis is part of a linked series of Technical Pipeline BoFs, covering the Reference Platform, Renderfarming, Cloud, Pipeline, Storage, and MLOps. There's been much ado about Generative AI models, but how does one train, deploy, and integrate these into a studio/production pipeline reliably and efficiently?
This session aims to bridge the gap between experimental development of machine learning models and their operational deployment. It will present the state of the art and encourage participant-led discussion to identify challenges and solutions, fostering shared understanding and actionable insights.
Attendees will receive invites to our 'Beers of a Feather' event, the same evening.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMobius is a novel method to generate seamlessly looping videos directly from text descriptions, without any user annotations, thereby creating new visual material for multimedia presentations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionModelSeeModelDo presents a speech-driven 3D facial animation method using a latent diffusion model conditioned on a reference clip to capture nuanced performance styles. A novel "style basis" mechanism extracts key poses to guide generation, achieving expressive, temporally coherent animations with accurate lip-sync and strong stylistic fidelity across diverse speech inputs.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis work presents a physically-based model for simulating and rendering glow discharge, a luminous plasma effect seen in neon lights and gas discharge lamps. The model captures particle interactions and emission dynamics, integrates into volume rendering systems, and enables realistic, interactive visualizations of complex light phenomena.
Talk
Production & Animation
Research & Education
Livestreamed
Recorded
Dynamics
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionFrom observations of luminous flames, it is apparent that soot oxidation behaves as an erosion of the flame. Motivated by this, we model soot oxidation with a level set equation combining physics and proceduralism. We demonstrate our method by several examples ranging from small-scale flames to large-scale turbulent fire.
Course
Gaming & Interactive
Livestreamed
Recorded
Games
Performance
Real-Time
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionThis course covers modern Vulkan features, including the latest additions in Vulkan 1.4. Topics include dynamic rendering, synchronization strategies, streamlined subpasses, and bindless techniques. Through practical examples, participants will learn how to implement different rendering techniques.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMeasures can be compactly represented and approximated using the theory of moments. This work proves that such moment-based representations are differentiable, leading to principled and efficient approaches for approximating transmittance and visibility in differentiable rendering.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMonetGPT explores using multimodal large language models (MLLMs) for photo retouching by injecting domain knowledge via visual puzzles. These puzzles help MLLMs understand individual operations, visual aesthetics, and generate expert plans. Our procedural pipeline enables explainable edits with detailed reasoning for the plan and individual operations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a high-quality online reconstruction pipeline for monocular input streams, reconstructing environments with detail across multiple levels while maintaining high speed.
Immersive Pavilion
Gaming & Interactive
New Technologies
Haptics
Simulation
Virtual Reality
Full Conference
Experience
DescriptionWe present Monsteroom, a Substitutional Reality-based system integrating virtual reality, movable furniture, and smart appliances to simulate giant virtual pets. It enhances realism through spatial illusions and environmental feedback, enabling immersive, embodied interactions beyond traditional virtual pet systems.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe solve partial differential equations in domains involving complex microparticle geometry that is impractical, or intractable, to model explicitly. Drawing inspiration from volume rendering, we treat the domain as a participating medium with stochastic microparticle geometry and develop a volumetric variant of the Monte Carlo walk on spheres algorithm.
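The walk-on-spheres estimator that this description builds on can be sketched in its classic (non-volumetric) form; this is a generic illustration under assumed inputs, not the authors' method — `boundary_value` and `dist_to_boundary` are hypothetical stand-ins for a concrete problem. Each walk repeatedly jumps to a uniformly random point on the largest circle that fits inside the domain, and records the boundary value where it lands:

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, dist_to_boundary,
                    eps=1e-3, n_walks=2000):
    """Estimate the solution of the Laplace equation at (x, y) by
    averaging boundary values reached by random sphere walks."""
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = dist_to_boundary(px, py)
            if r < eps:  # close enough: read off the boundary value
                total += boundary_value(px, py)
                break
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
    return total / n_walks

# Unit disk; the harmonic function u(x, y) = x has boundary value x,
# so the estimate at (0.3, 0.2) should be close to 0.3.
random.seed(0)
u = walk_on_spheres(0.3, 0.2,
                    boundary_value=lambda px, py: px,
                    dist_to_boundary=lambda px, py: 1.0 - math.hypot(px, py))
```

The paper's volumetric variant replaces this explicit boundary distance with a stochastic treatment of microparticle geometry, in the spirit of participating-media rendering.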
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionMetric-Aligning Motion Matching (MAMM) is a novel method for controlling motion sequences using sketches, labels, audio, or another motion sequence without requiring training or annotations. By aligning within-domain distances, MAMM provides a flexible and efficient solution for motion control across various control modalities.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose Motion Embeddings for video generation, enabling precise motion transfer across diverse scenes and objects. These embeddings disentangle motion from appearance, preserving the original dynamics while adapting to new prompts. Experiments show that our method achieves high-quality, prompt-aligned video generation across a wide range of scenarios.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a framework to utilize Large Language Models (LLMs) for co-speech gesture generation with motion examples as direct conditions. It enables multi-modal controls over co-speech gesture generation, such as motion clips, a single pose, human video, or even text prompts.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMotionCanvas enables intuitive cinematic shot design in image-to-video generation by letting users control both camera movements and object motions in a 3D-aware scene. Combining classical graphics with modern diffusion models, it translates motion intentions into spatiotemporal signals—without costly 3D data—empowering creative video synthesis for diverse editing workflows.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionLarge vision-language models often fail to capture spatio-temporal details in text-to-animation tasks. We introduce MoVer, a verification system using first-order logic to check properties like timing and positioning in motion graphics animations. Integrated into an LLM pipeline, MoVer enables iterative refinement, significantly improving animation generation accuracy from 58.8% to 93.6%.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Arts & Design
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Physical AI
Robotics
Simulation
Full Conference
Virtual Access
Experience
Monday
Description“Symbiosis of Agents” merges AI-driven multi-agent robotics with immersive environments, exploring the delicate balance of machine agency and artist authorship through emergent behaviors in self-organized AI ecologies. Its layered approach—micro-level strategies, meso-level drives, and an LLM-based “faith system”—creates a novel creative apparatus challenging conventional ideas on creativity and responsibility.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a fast, wave-based procedural noise model enabling precise spectral control in any dimension. Using precomputed wave functions and inverse Fourier transforms, it supports Gaussian and non-Gaussian noises—including Gabor, Phasor, and novel recursive cellular patterns—making it ideal for compact, controllable, and animated solid textures in 2D, 3D, and time.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGenerate exciting multi-character interactions, such as team fights, with our training-free method! Multi-character interactions can be decomposed into multiple two-person interactions using a directed graph, which enables repurposing large pre-trained two-character motion synthesis models without any multi-character data. You can compose and vary multi-character interactions spatially and temporally!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe combine the estimates generated in each guiding iteration, leveraging the importance distributions from multiple guiding iterations. We demonstrate that our path-level reweighting makes guiding algorithms less sensitive to noise and overfitting in distributions.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe personalize a pre-trained global aging prior using 50 personal selfies, allowing age regression (de-aging) and age progression (aging) with high fidelity and identity preservation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Neural Adjoint Maps, a novel representation for correspondences between 3D shapes. Built on and extending the functional map framework, our approach enables accurate, non-linear refinement of shape matching across meshes and point clouds, setting a new standard in diverse scenarios and applications like graphics and medical imaging.
Emerging Technologies
New Technologies
Artificial Intelligence/Machine Learning
Computer Vision
Image Processing
Real-Time
Full Conference
Experience
DescriptionNano-3D is an ultra-compact, metasurface-based neural depth sensor that captures orthogonally polarized image pairs in a single shot and reconstructs metric depth in real-time. Demonstrated at SIGGRAPH 2025, it offers live, robust depth maps for objects while revealing its 700-nm-thick TiO2 metalens.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionNeST enables non-destructive 3D stress analysis of transparent objects using the polarization of light. Traditional 2D methods require destructively slicing the object. Instead, we reconstruct the entire 3D stress field by jointly handling phase unwrapping and tensor tomography using neural implicit representations and inverse rendering, enabling novel 3D stress visualizations.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces Nested Attention, a mechanism that improves text-to-image personalization by injecting query-dependent subject features into cross-attention layers, achieving strong identity preservation and prompt alignment. The method maintains the model’s prior, enabling multi-subject generation across diverse domains.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a reparameterization-based formulation of neural BRDF importance sampling. Compared to previous methods that construct a probability transform to the BRDF through multi-step invertible neural networks, our BRDF sampling runs in a single step without requiring network invertibility, achieving higher inference speed with the best variance reduction.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a computational framework that co-optimizes structural topology, curved layers, and fiber orientations for manufacturable, high-strength composites. Using implicit neural fields, our method integrates design and fabrication objectives into a unified optimization process, achieving up to 33.1% improvement in failure load for multi-axis 3D printed fiber-reinforced thermoplastics.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionA neural approach for estimating spatially varying light selection distributions to improve importance sampling in Monte Carlo rendering. To efficiently manage hundreds or thousands of lights, we integrate our neural approach with light hierarchy techniques, where the network predicts cluster-level distributions and existing methods sample lights within clusters.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
New Technologies
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Virtual Access
Wednesday
DescriptionMetaphysic’s Neural Performance Toolset offers photorealistic, AI-driven human performance synthesis, combining advanced neural architectures, identity training, and latent-space manipulation. The system delivers unmatched realism and control for cinematic and real-time productions. Its success in major film and worldwide live events illustrates its capacity to redefine AI-generated content in global entertainment.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe train a network to map signed distance fields to the quadrature points and weights of a non-conforming numerical integration rule in a Mixed Finite Element formulation, enabling differentiable elastic simulation over evolving domains. We demonstrate applications to image-guided material and topology optimization.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose NeurCross, a self-supervised framework for quadrilateral mesh generation that jointly optimizes principal curvature direction field and cross field by employing an optimizable neural SDF to approximate the input surface. NeurCross outperforms state-of-the-art methods in terms of singular point placement, robustness to noise and geometric variations, and approximation accuracy.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Art
Augmented Reality
Ethics and Society
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionA collaboration with the DAC standing committee, this session presents a dynamic conversation exploring “New Media Architecture(s): Vancouver”, a digital media exhibition that brings augmented reality artworks into dialogue with “Heron’s Dreamscape”, a vibrant public mural by artist Priscilla Yu in Vancouver, BC. Curated by Johannes DeYoung, Gustavo Alfonso Rincon, and Miriam Esquitín, the exhibition reimagines the role of public art through digital augmentation, activating the mural as a living interface between place, community, and technology.
This panel features the curators alongside Priscilla Yu in a discussion that delves into the collaborative process behind the exhibition and the evolving relationship between physical murals and digital interventions. Together, they’ll explore how site-specific digital media can expand the narrative capacity of public artworks, deepen community engagement, and reframe our experience of urban environments.
Through an interdisciplinary lens, the conversation will address the potentials and challenges of blending artistic traditions with emerging technologies — and what it means to co-author public space in the digital age.
Audience members will gain insight into the artistic, curatorial, and technical approaches that shaped False Creek Frequencies, while reflecting on the broader cultural impact of art in augmented urban landscapes.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionVideo forensics, which focuses on identifying fake or manipulated video, is becoming increasingly difficult with the development of more advanced video editing techniques. We show how coding near-imperceptible, noise-like modulations into the illumination of a scene can create information asymmetry that favors forensic verification of video captured from that scene.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Digital Twins
Generative AI
Physical AI
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionLearn how developers and industry pioneers are adopting and evolving OpenUSD for every application, from content creation and simulation to industrial AI and robotics.
Industry Session
New Technologies
Rendering
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionExplore the latest advancements in neural rendering with NVIDIA RTX for visual effects, animation, virtual production, and design visualization from leading artists, designers, and rendering technology companies.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionOctGPT is a novel multiscale autoregressive model for 3D shape generation. It introduces hierarchical serialized octree representation, octree-based transformer with 3D RoPE and token-parallel generation schemes. OctGPT significantly accelerates convergence, achieves performance rivaling or surpassing state-of-the-art diffusion models, and supports text/sketch/image-conditioned generation and scene-level synthesis.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce Offset Geometric Contact (OGC), a groundbreaking method offering "penetration-free for free" simulations of codimensional objects. OGC efficiently constructs offset volumetric shapes to ensure stable, artifact-free collisions. Leveraging parallel GPU computations, it delivers real-time simulations at speeds over 100× faster than previous methods, eliminating costly collision detection and global synchronization.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionLogarithmic metric blending enables smooth interpolation between planar shapes while bounding both conformal and area distortions. By blending symmetric positive definite metrics in the log domain, our method geometrically interpolates distortions. This leads to natural transitions that outperform existing techniques in applications such as shape morphing and animation.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a fast, on-the-fly 3D Gaussian Splatting method that jointly estimates poses and reconstructs scenes. Through fast pose initialization, direct primitive sampling, and scalable clustering and merging, it efficiently handles diverse ordered image sequences of arbitrary length.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Education
Ethics and Society
Generative AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Virtual Reality
Full Conference
Experience
DescriptionThis course introduces the Onboarding Generative AI Canvas, which supports individuals and teams in organizations in creating a road map for understanding how generative AI systems will best support the work that they do, remove obstacles, minimize risks, and accelerate adoption.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionManual 3D rigging is slow. UniRig introduces a unified learning framework for automatic skeletal rigging. Trained on our large, diverse Rig-XL dataset, it uses an autoregressive model and cross-attention to accurately rig various characters and objects, significantly outperforming prior methods and speeding up animation pipelines.
Talk
Arts & Design
Production & Animation
Not Livestreamed
Not Recorded
Animation
Geometry
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionThe environments of Pixar's Elio (2025) feature dynamic elements that bridge the disciplines of shading, modeling, dressing and lighting. In this talk we enumerate the challenges of creating sets that move, glow and change, revealing pipeline innovations that leverage USD, Houdini and other in-house tools.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Dynamics
Geometry
Simulation
Full Conference
Virtual Access
Thursday
DescriptionCloning Clay, a space amoeba-like organic cloning matter in Pixar’s Elio (2025), required a suite of procedural FX techniques to land each story beat. In regular collaboration with several departments, this method delivered a range of effects, including dynamic hero clay FX, secondary rippling, and full-character transformations.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Full Conference
Experience
DescriptionOpenPBR is a state-of-the-art material model being designed under open governance. In this session, hear the latest updates from the main contributors and feedback from the community, and bring questions about integration into your own workflow.
Course
Production & Animation
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Dynamics
Fabrication
Geometry
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Sunday
DescriptionThis course will cover the latest developments and tools in the open source library OpenVDB. To mention just a few, these include fVDB (a DL framework based on VDB), improved GPU support in NanoVDB, new tools (e.g., level set and anisotropic surfacing), new grid types (half-float grids), and production examples.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a coupled mesh-adaptation model and physical simulation algorithm to jointly generate, per timestep, optimal adaptive remeshings and implicit solutions for the simulation of frictionally contacting, large-deformation elastica.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a Generative Order Learner (GOL) that optimizes element ordering for graphic design generation. Our approach learns a content-aware neural order, which can significantly improve graphic generation quality, generalize across different types of generative models and help design generators scale up greatly.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Modeling
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionFor the aliens in Pixar's Elio, we crafted the looks for over 18 species from various inspirations; each needed to be unique and appealing but not too familiar. We explored combining illumination models and animated shading techniques in a collaborative approach between our design artists and shading artists.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis work introduces a forward and differentiable rigid-body dynamics framework using Lie-algebra rotation derivatives. The approach offers simplified, compact derivatives, improved conditioning, and higher efficiency compared to traditional methods. Applications include fundamental rigid-body problems and Cosserat rods, showcasing its potential for multi-rigid-body dynamics and incremental-potential formulations.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Virtual Access
Sunday
DescriptionFor The Wild Robot, the characters exhibit the sophistication of real fur and feathers in motion. But the final rendered look needed to integrate with a stylized painterly world. In LookDev, brushstrokes of detail are layered like an artist builds up strokes of paint, in response to specific key lights.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present the first interactive system for painting with 3D Gaussian splat brushes. With our tool, artists can sample volumetric fragments from real-world Gaussian splat captures and paint with them in real time. Our tool seamlessly deforms sampled splats along painted strokes, introducing realistic transitions between seams with diffusion inpainting.
Real-Time Live!
Arts & Design
New Technologies
Production & Animation
Livestreamed
Recorded
Art
Capture/Scanning
Geometry
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionWe present the first interactive system for painting with 3D Gaussian splat brushes. With our tool, artists can sample volumetric fragments from real-world Gaussian splat captures and paint with them in real time. Our tool seamlessly deforms sampled splats along painted strokes, introducing realistic transitions between seams with diffusion inpainting.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionRecreate the last unrealized royal banquet of Korea’s Joseon Dynasty in an immersive LBE VR journey through history and culture.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Education
Games
Generative AI
Industry Insight
Pipeline Tools and Work
Virtual Reality
Full Conference
Virtual Access
Experience
Sunday
DescriptionThis panel brings educators and industry practitioners together to identify emerging trends that disrupt education while opening new opportunities for innovation and industry collaboration. Panelists will highlight innovative pedagogical and curricular approaches and discuss how industry perspectives shape academic training to better prepare learners for the evolving workforce.
Educator's Forum
Arts & Design
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Generative AI
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionThis panel examines the relevance of traditional VFX techniques in the rapidly evolving industry, discussing how education can integrate these foundational skills with new technologies like AI and real-time rendering. Featuring industry leaders and educators, the session seeks strategies to adapt curricula to prepare students for modern VFX demands.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionHigher-order surfaces enable compact, smooth geometry but require efficient rendering. We introduce PaRas, a GPU-based rasterizer that directly renders parametric surfaces, avoiding costly tessellation. It integrates seamlessly into existing pipelines, outperforming traditional methods for quartic triangular and bicubic rational Bézier patches. Experimental results confirm its superior efficiency and accuracy.
Emerging Technologies
Arts & Design
New Technologies
Research & Education
Art
Hardware
Robotics
Full Conference
Experience
DescriptionThe Parasitic Finger project explores how humans coexist with uncontrollable finger augmentation with an SMA actuator. What would happen if fingers made unrealistic movements on their own? Unlike human finger joints, they move like tentacles, performing actions such as waving and touching objects. Their vibrations indicate their alertness or cuteness.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPARC is a framework that enhances terrain traversal with machine learning and physics-based simulation. By iteratively training a kinematic motion generator and simulated motion tracker, PARC produces a character controller capable of traversing complex environments using highly agile motor skills, overcoming the challenges of limited motion capture data.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present PartEdit, a novel diffusion-based system enabling precise, text-based edits of object parts without retraining or manual masks. Optimizing part-aware tokens generates localized non-binary attention maps to guide seamless edits. Our novel blending strategy delivers high-quality visual results and outperforms prior techniques in both synthetic and real-world scenarios.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Patch-Grid, a unified neural implicit representation that efficiently represents complex shapes, preserves sharp features, and handles open boundaries and thin geometric details. By decomposing shapes into patches encapsulated by adaptive feature grids and merging them through localized CSG operations, Patch-Grid demonstrates superior robustness, efficiency, and accuracy.
Course
Production & Animation
Research & Education
Livestreamed
Recorded
Graphics Systems Architecture
Industry Insight
Lighting
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionWe will share some nitty-gritty details and challenges when integrating path guiding into production rendering systems.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPDT is a novel framework that uses diffusion models to transform unstructured point clouds into semantically meaningful and structured distributions, such as keypoints, joints, and feature lines. Exploring complex point distribution transformation, PDT captures fine-grained geometry and semantics, offering a versatile tool for diverse tasks.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThis is a scientific data visualization of ocean currents around the world based on the ocean model, Estimating the Circulation and Climate of the Ocean (ECCO). The visualization is a tour of major currents of the world including western boundary currents and includes both surface and deep currents.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper investigates photorealistic scene reconstruction using videos captured from an egocentric device in high dynamic range. It presents a novel system utilizing visual-inertial bundle adjustment and a physical image formation model that handles camera motion artifacts. The experiments using Project Aria and Quest3 show substantial improvements in visual quality.
Course
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Games
Lighting
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionUsing examples from film and games, this course presents advances in physically based shading in both theory and production practices, demonstrating how it enhances realism and leads to more intuitive and faster art creation.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Digital Twins
Math Foundations and Theory
Simulation
Full Conference
Experience
DescriptionThis CG and Simulation themed BOF focuses on three main topics. The first is high-order accurate computational science that mimics the system being simulated. The second involves human factors in virtual reality for accessible simulation user interfaces. The third focuses on Augmented Intelligence tools and logic, including computer algebra and IoT data acquisition. There will also be forays into special topics such as FORTRAN 2018 and computational science, parallel data flow for IoT, programming in English (and French), and the impact of augmented intelligence. Lastly, there may be some insights on numerical weather prediction and industrial plumes.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a photograph relighting method that enables explicit control over light sources akin to CG pipelines. We achieve this in a pipeline involving mid-level computer vision, physically-based rendering, and neural rendering. We introduce a self-supervised training methodology to train our neural renderer using real-world photograph collections.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a method to estimate optimal cloth mesh resolution based on material stiffness and boundary conditions like shirring or stitching, and dynamic wrinkles from motion-induced collisions. To ensure smooth resolution transitions, we calculate transition distances and generate a mesh sizing map, enhancing realism, efficiency, and versatility for garment design.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPhysicsFC introduces a breakthrough in interactive football simulation—enabling real-time control of physically simulated players that perform complex skills with smooth transitions. It combines skill-specific learning, physics-informed rewards, latent-guided training, and transition-aware state initialization, achieving agile, lifelike football behaviors in scenarios ranging from 1v1 play to full 11v11 matches.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We propose a method to approximate arbitrary freeform surface meshes with piecewise ruled surfaces. Our approach optimizes mesh shape and ruling direction field simultaneously, extracts patch topology, and refines ruling positions and orientations. The technique effectively approximates diverse freeform shapes and has potential applications in architecture and engineering.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Games
Hardware
Full Conference
Experience
Description: Plant.play() reimagines the relationship between plants, humans, and digital spaces through an interactive installation where a living plant becomes the central player in a pet simulation game. Environmental sensors and bioelectrical signals translate the plant’s natural processes into caregiving decisions, shaping pets’ growth, personality, and evolution, leading to sixty possible appearances and eight personality types. Displays highlight the plant’s decision-making journey, emphasizing its role as an active agent. By blending plants, technology, and play, Plant.play() explores posthumanist ideas and invites audiences to reflect on the connections between living beings, their environments, and the roles plants can play in our interconnected world.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We introduce a physically-based character animation framework that exploits part-wise latent tokens. The novel structured decomposition enables dynamic exploration to stably adapt to diverse unseen scenarios. Additional refinement networks improve overall motion quality. We show superior performance on multi-body tracking, motion adaptation, and locomotion with damaged body parts.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: Pocket Time-Lapse is a system to record, explore and visualize long-term changes in the environment, based on data that a user can capture with the phone they carry. Our contributions include a process to conveniently capture a scene, and novel techniques for registering and visualizing panoramic time-lapse data.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Display
Dynamics
Ethics and Society
Generative AI
Performance
Real-Time
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
Description: PoeSpin is a human-AI co-creation system. By transforming pole dance movements into poetry through AI, we challenge both traditional prejudices against this art form and conventional approaches to human-AI creativity. This work demonstrates how computational systems can preserve the deeply human aspects of artistic expression while creating new possibilities for cross-modal artistic collaboration, suggesting pathways for more inclusive and expressive forms of human-AI co-creation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We present a new perspective on physics-based character animation. Assuming policies for similar motions should have similar weights, we introduce regularization during RL training to preserve weight similarity. By modeling the weights’ manifold with a diffusion model, we generate a continuum of policies adapting to novel character morphologies and tasks.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
Description: We propose polynomial 2D biharmonic coordinates for closed high-order cages containing polynomial curves of any order by extending the classical 2D biharmonic coordinates using high-order BEM. When applying our coordinates to 2D cage-based deformation, users manipulate the Bézier control points to quickly generate the desired conformal deformation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: pOps is a framework for learning semantic manipulations in CLIP’s image embedding space. Built on a Diffusion Prior model, it enables concept manipulation by training operators directly on image embeddings. This approach enhances semantic control and integrates easily with diffusion models for image generation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
Description: Accurate modeling of normal distribution functions (NDFs) over a high-resolution normal map enables intriguing glinty appearance but is inefficient. We present a manifold-based glint formulation, transferring the glint NDF computation to mesh intersections. This framework accelerates glint rendering and provides a closed-form shadow-masking derivation for normal-mapped diffuse surfaces.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We present a method for designing smooth directional fields on triangle meshes with precise control over singularities. Our approach uses a power-linear polar representation, allowing singularities of any index to be placed anywhere on the mesh. The resulting fields are smooth, robust to mesh quality, and support N-fold symmetry.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: This work addresses recovering textured materials using inverse rendering. Our Laplacian mipmapping improves the reconstruction of high-resolution textures. We also propose a novel gradient computation that enables efficiently reconstructing textured, path-traced subsurface scattering. The methods are applied to challenging scenes, including reconstructing realistic human face appearance from sparse captures.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We present a practical method for rendering scenes with complex, recursive nonlinear stylization applied to physically based rendering. Our approach introduces nonlinear path filtering (NL-PF) and nonlinear neural radiance caching (NL-NRC), which reduce the exponential sampling cost of stylized rendering to polynomial, enabling nonlinear stylization to be rendered with significantly improved efficiency.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
Description: This paper presents a novel pipeline to digitize physical threads and predict fabric appearance before fabricating cloth samples, addressing a real need in the fashion industry. It enables designers to make more informed material choices, thereby promoting sustainable production, reducing costs, and fostering innovation in fabric design.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
Description: We present PrimitiveAnything, a novel framework that reformulates shape primitive abstraction as a primitive assembly generation task. PrimitiveAnything generates high-quality 3D primitive assemblies that better align with human perception while maintaining geometric fidelity across diverse shape categories, benefiting various 3D applications.
Immersive Pavilion
Arts & Design
New Technologies
Research & Education
Art
Education
Ethics and Society
Performance
Virtual Reality
Full Conference
Experience
Description: This study explores the primordial sensory experience of humans by re-living the experience of having the body and mind of a baby. This experience was achieved using virtual reality (VR) and by wearing a special membrane bodysuit.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We propose a general framework, Progressive Dynamics++, for constructing a family of progressive dynamics integration methods that advance physical simulation states forward in both time and spatial resolution. We analyze requirements for stable, continuous, and consistent level-of-detail animation and introduce a novel, stable method that significantly improves temporal continuity.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We propose an iterative prompt-and-select architecture to progressively reconstruct the CAD modeling sequence of a target point cloud. We introduce the concept of local geometric guidance and present three ways to integrate this guidance into iterative reconstruction. Experiments demonstrate superiority over the current state of the art.
Spatial Storytelling
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Art
Hardware
Performance
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: We propose a live-streamed immersive performance from France featuring a "psychonaut" experiencing the aquatic VR journey "SPACED OUT," using our waterproof MeRCURY headset in a swimming pool. Three simultaneous live camera feeds and real-time user narration create captivating remote immersion, previously validated through two successful public performances, ensuring technical reliability.
Emerging Technologies
Gaming & Interactive
New Technologies
Hardware
Robotics
Spatial Computing
Full Conference
Experience
Description: Robotic systems are increasingly present in intelligent spaces, yet intuitive multi-user control remains challenging. Public Hand enables seamless robotic hand manipulation by dynamically adjusting controllability based on user proximity. This allows intuitive, shared interaction without wearable devices, facilitating dynamic and flexible collaboration in robotic rooms.
Spatial Storytelling
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Digital Twins
Performance
Real-Time
Full Conference
Experience
Description: Live physical whole-puppet performances are used to drive digital animation twins via puppix, a new capture system. As well as an overview of our work, we will demonstrate the practicalities of using the system with a live puppet character in the room with the audience, working with direction and interaction.
Talk
New Technologies
Production & Animation
Livestreamed
Recorded
Animation
Capture/Scanning
Digital Twins
Performance
Real-Time
Full Conference
Virtual Access
Thursday
Description: Live physical whole-puppet performances are used to drive digital animation characters and creatures via puppix, a new capture system. The benefits of having a live puppet character in the room with actors, directors and other characters are demonstrated and discussed, as well as the practical processes of capturing non-human physicalities.
Real-Time Live!
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Recorded
Animation
Capture/Scanning
Digital Twins
Performance
Real-Time
Full Conference
Virtual Access
Tuesday
Description: Live physical whole-puppet performances are used to drive digital animation characters and creatures in real time via puppix, a new capture system. Experience the effect of interacting with a professionally puppeteered live theatrical puppet character in the room with you as it drives its digital twin in real time.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
Description: We identify the stable orientations of any rigid shape and the probability that it will rest at each orientation if randomly dropped on the ground. We use a differentiable inverse version of our method to design and fabricate shapes with target resting behavior, such as dice with target nonuniform probabilities.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
Description: Physically based differentiable rendering computes gradients of the rendering equation. The task is made difficult by discontinuities in the integrand at object silhouettes. To address this challenge, we propose a novel edge sampling approach that outperforms the state-of-the-art among unidirectional differentiable renderers.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: This paper introduces a novel grid structure that extends tall cell methods for efficient deep water simulation. Unlike previous methods, our approach subdivides tall cells horizontally, allowing for more aggressive adaptivity. We demonstrate that this novel form of adaptivity delivers superior performance compared to traditional uniform tall cells.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Full Conference
Experience
Description: Quantum Tango, 2025, connects Vancouver, London, and Milan through a real-time, interactive digital artwork. At each location, a screen and camera setup respond to local audience movement while revealing dynamic images and colour patterns influenced by activity in the other cities. Blending abstract visuals with interlaced images captured at different times, the work creates a shared, evolving aesthetic experience across continents. Building on Edmonds’ earlier Communications Game (1971 on) and Cities Tango (2007 on) projects, this generative networked piece explores urban presence, audience agency, and remote intimacy, transforming public interaction into a cross-cultural dialogue beyond the limits of conventional telepresence. This will be the first time a 3-node version of the work is presented, especially for SIGGRAPH.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Games
Generative AI
Performance
Real-Time
Spatial Computing
Full Conference
Experience
Description: Quantum Theater takes quantum science as subject and method for playable theater. Phenomena like entanglement, superposition, coherence, and collapse shape the performance in a post-AI exploration of liveness, variability, and improvisation. Multiple realities are layered on stage, where the audience as observer-participant plays an active role in cohering singular narratives.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: This paper introduces an improved quad-based geometry streaming method for remote rendering that reduces bandwidth demands through temporal compression and supports QoE-driven adaptation. It achieves high-quality visuals, captures disocclusion events, uses 15× less data than SOTA, and reduces bandwidth down to 100 Mbps, enabling real-time, low-latency rendering on lightweight headsets.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Art
Virtual Reality
Full Conference
Experience
Description: In his living room, a retired playwright unveils a revelation to the spectator, addressing them in a nostalgic and intimate tone as if they were long-lost friends reuniting: He is losing his memories and seeks companionship on a transformative journey to reconstruct and preserve fragments of his cherished remembrances.
Talk
New Technologies
Livestreamed
Recorded
Augmented Reality
Hardware
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
Description: The Mill was tasked to create an experience that allows a real Formula 1 driver to virtually race against past legends on a real track, in his real race car. We will walk through our research and development process, from inception through the 1-hour TV special for the virtual race.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We present a simple and fast method to reconstruct radiance surfaces by directly supervising the radiance field via image projection. Unlike volumetric approaches, we move alpha blending and ray marching from image formation into loss computation. This simple modification enables high-quality surface reconstruction while preserving baseline efficiency and robustness.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We present the first algorithm to automatically compute sewing patterns for upcycling existing garments into new designs. Our algorithm takes as input two garment designs along with their corresponding sewing patterns and determines how to cut one of them to match the other by following garment reuse principles.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
Description: Join LGBTQIA+ attendees and allies for the Rainbow Meetup, a welcoming space to connect, celebrate identity, and help shape the future of queer community at SIGGRAPH. This year is our first official gathering as the Rainbow Affinity Group. We’ll introduce the group’s mission and invite participation in key leadership roles, including a new Student Chapters Liaison and Fundraising Chair. To ensure a safe, respectful environment, the ACM Code of Conduct will be strictly enforced. Harassment or non-consensual behavior, including unauthorized photography, will not be tolerated. Come help build community, find support, and lead the future of the Rainbow Affinity Group.
Talk
Gaming & Interactive
Livestreamed
Recorded
Performance
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
Description: In this talk, we’ll share our experience working with Vulkan bindless techniques and ray tracing on Android smartphones, overcoming hardware constraints, driver issues, and other mobile platform limitations along the way. We'll also present a performance comparison across several Android devices available in 2025.
Real-Time Live!
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Animation
Games
Geometry
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
Description: NVIDIA’s RTX Mega Geometry is a technology that accelerates Bounding Volume Hierarchy (BVH) building, enabling path tracing of scenes with up to 100x more triangles. For the first time, we can apply global illumination to sub-pixel micro geometry in real time.
Real-Time Live!
Arts & Design
Gaming & Interactive
Research & Education
Livestreamed
Recorded
Art
Education
Games
Geometry
Performance
Real-Time
Scientific Visualization
Full Conference
Virtual Access
Tuesday
Description: Desmos started as a free graphing calculator that’s now used by most students when learning math. With 100M+ users around the world, it has also become a tool for creative exploration, revealing the incredible promise of the next generation of technical designers and mathematical artists. We’ll show off their work.
Emerging Technologies
New Technologies
Research & Education
Augmented Reality
Display
Hardware
Real-Time
Full Conference
Experience
Description: This system converts 2D video into 3D holograms in real time. The system consists of a holography processor that generates real-time 8-layer, 30 FPS CGH using HBM; a Linux host that extracts depth information from 2D images and transmits it as a packet; and an optical unit that displays the hologram.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: In this work, we introduce the first real-time framework that integrates yarn-level simulation with fiber-level rendering. The whole system provides real-time performance and has been evaluated through various application scenarios, including knit simulation for small patches and full garments and yarn-level relaxation in the design pipeline.
Real-Time Live!
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: After the main event on Tuesday night, keep the Real-Time Live! excitement going by getting a closer look at some of the real-time projects on Wednesday morning:
Drawing with Light: AI-Driven Visual Synthesis for Real-Time Laser Installations
Miegakure: a Game Where You Explore and Interact With a 4D World
Real-Time Graphics in Desmos, With Just Math and a Browser
Real Time Path-Tracing with NVIDIA RTX MegaGeometry
PUPPIX - Real Time Live Performed Digital Characters Using Physical Puppet Twins
Take advantage of the opportunity to have your questions answered!
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: This paper presents a method for mapping curved surfaces to the plane without shear, enabling rectangular parameterizations. It introduces a novel approach for computing integrable, orthogonal frame fields. The method improves mesh quality, supports rich user control, and outperforms existing techniques in simulation, modeling, retopology, and digital fabrication tasks.
Technical Workshop
New Technologies
Research & Education
Animation
Dynamics
Simulation
Description: This workshop aims to explore the evolution of subspace methods in physical simulation, tracing their origins from classical engineering formulations to cutting-edge neural techniques. By gathering leading researchers, students, and practitioners, the session will serve as a platform for cross-disciplinary dialogue, education, and community building around model reduction techniques in graphics and simulation.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
Description: Reenact Anything introduces a unified framework for semantic motion transfer, covering applications from full-body and face reenactment to controlling the motion of inanimate objects and the camera. Motions are represented using text/image embeddings of an image-to-video diffusion model and are optimized based on a given motion reference video.
Art Paper
Arts & Design
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Audio
Digital Twins
Education
Performance
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
Description: This practice-based project reimagines Beckett’s Not I in virtual reality, marrying minimalist theatre with immersive technology. A lone, disembodied Metahuman mouth exploits VR’s intense presence while subverting customary embodiment and audience agency. Integrating performing avatars, the work probes authenticity, identity, and authorship, demonstrating how “subtractive dramaturgy” thrives in an additive medium. Findings advance performance studies, XR design, and digital humanities by showing how technology reshapes creativity, embodiment, and storytelling.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: The alignment of text, images, and 3D is very challenging, yet it is crucial and beneficial for many tasks. We explore and reveal the characteristics of the native 3D latent space for 3D generation, making it decomposable and low-rank, thereby enabling efficient learning for multimodal local alignment and achieving precise local enhancement and part-level editing of 3D geometry.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
Description: We present the first drivable full-body avatar model that reconstructs perceptually realistic relightable appearance.
Spatial Storytelling
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Augmented Reality
Ethics and Society
Performance
Real-Time
Virtual Reality
Full Conference
Experience
Description: Remixing the Flying Words Project is a mixed-reality installation that reimagines an ASL poem through immersive technology. Using motion capture and AI-generated imagery, it enables audiences to experience sign language poetry kinesthetically. Presented in a mixed-reality headset, it transforms linguistic translation into a dynamic, multisensory engagement with spatial storytelling.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Dynamics
Industry Insight
Full Conference
Experience
Description: This is an annual discussion of large-scale rendering. Render farm infrastructure, whether on-premise or hybrid cloud, brings software, queue management, reliability, and performance woes. Industry experts and new entrants alike are welcome for a sharing of knowledge, experience, and best practices. Mail joe@jdamato.com with any questions or comments.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
Description: We present RenderFormer, a neural rendering pipeline that directly renders an image from a triangle-based representation of a scene with full global illumination effects, and that does not require per-scene training or finetuning.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We introduce reservoir splatting, a technique preserving exact primary hits during temporal ReSTIR. This approach makes temporal path resampling more robust under motion, especially for regions with high-frequency detail. We further demonstrate how reservoir splatting naturally enables ReSTIR support for both motion blur and depth of field.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
Description: Redesign spaces effortlessly: ReStyle3D transforms indoor scenes by transferring object-specific styles from a single reference image, preserving 3D coherence. Combining semantic-aware diffusion and depth guidance, it enables photo-realistic virtual staging, faithfully redecorating furniture, textures, and decor. Ideal for interior design, our method outperforms existing approaches in realism, detail fidelity, and cross-view consistency.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Games
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: This work is a 6DoF XR interactive narrative. The railway network once reshaped the history and visual culture of northern Chinese cities. Ironically, it also witnessed the slow depletion of resources across these vast areas. While connecting these towns, the railway tracks also isolated them, both in real space and in the virtual algorithmic database. We collected hundreds of videos from four northern Chinese cities and used Gaussian blur and deep learning to create immersive digital scenes. We hope to create an optimistic future archaeological landscape: electronic tracks connect data islands and become distributed virtual narrative spaces.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
Description: To combine deep learning's generalization with traditional methods' interpretability, we propose CustomBF, a hybrid framework that customizes bilateral filter components per point. By addressing key limitations of the classic bilateral filter, CustomBF achieves robust, interpretable, and effective point cloud denoising across diverse scenarios.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: RigAnything is a transformer-based model that autoregressively generates 3D rigging without templates. It sequentially predicts joints and skeleton topology while assigning skinning weights, working on objects in any pose. It’s 20× faster than existing methods, completing rigging in under 2 seconds with state-of-the-art quality across diverse object types.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Art
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Experience
Description: Rising River is an AI-integrated self-analysis VR experience. It began with a question: Can gaming and VR make the therapeutic benefits of Jungian shadow work more accessible to everyone?
The participant will revive a dried-up river by offering words that reflect their inner selves, turning their responses into tangible elements.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose a neural representation for 3D assets with complex shading. We precompute shading and scattering on ground-truth geometry, enabling high-fidelity rendering with full relightability, eliminating complex shading models and multiple scattering paths, offering significant speed-ups and seamless integration into existing rendering pipelines.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors and audience members to gather around. Authors are invited to bring any material related to their paper that could spark further conversation, such as printouts, posters, demos, or other presentation aids. These interactive discussions give attendees more direct interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors and audience members to gather around. Authors are invited to bring any material related to their paper that could spark further conversation, such as printouts, posters, demos, or other presentation aids. These interactive discussions give attendees more direct interaction with the individual authors.
Production Session
Production & Animation
Livestreamed
Recorded
Animation
Industry Insight
Lighting
Pipeline Tools and Work
Full Conference
Virtual Access
Wednesday
DescriptionEnter the world of Dune: Prophecy and discover how Image Engine helped bring this iconic universe to life through visual effects.
In this session, Image Engine shares how a concept-first approach, streamlined workflows, and technical problem-solving allowed us to deliver some of the series’ most ambitious sequences, ranging from the vast genetic memory library inside a Thinking Machine to massive sand simulations featuring the legendary sandworm.
We’ll break down how our pipeline enabled scalable, collaborative solutions for complex challenges across departments. From the intricacies of animation-driven holograms to the technical demands of simulating cascading sand, attendees will walk away with valuable insights and practical takeaways they can apply to their own work.
Key topics include:
Concept-Driven Approach: How we used early concept art and reference studies to align creative vision with the client, gaining buy-in before production began.
Streamlined Pipeline: See how our FX library card swap pipeline, batchable templates, and attribute-driven setups helped smaller teams work more efficiently without sacrificing quality.
Genetic Library: Explore the design and lighting of Anirul, the vast genetic archive, as well as how our custom tools allowed animation to drive holographic FX that lit the scene and reflected in real time.
Large-Scale Environment FX: Learn how we handled desert and storm simulations at scale, including tricks to manage heavy particle FX while maintaining art direction.
Holograms: Get a look at how we built the complex, multi-camera holographic war table, featuring hundreds of procedurally generated projectors and lights, and aligned outputs across departments for seamless integration.
Whether you're a student, generalist, or pipeline developer, this talk offers an approachable, behind-the-scenes look at how Image Engine tackled complex sequences through early concept alignment, creative problem solving, and efficient pipeline design.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionScaffoldAvatar presents a novel approach for reconstructing ultra-high fidelity animatable head avatars, which can be rendered in real-time. Our method operates on patch-based local expression features and synthesizes 3D Gaussians dynamically by leveraging tiny scaffold MLPs. We employ color-based densification and progressive training to obtain high-quality results and fast convergence.
Appy Hour
Arts & Design
Gaming & Interactive
Augmented Reality
Capture/Scanning
Games
Full Conference
Experience
DescriptionScavengeAR is a mobile AR creature-collecting app deployed at SIGGRAPH from 2017 to 2019, with over 1,000 daily active users during the conference run. Now it's 2025, and the core team has refactored the app for a future launch. Let's talk about AR development, then and now.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionSecret Level presents 15 original short stories set in classic video game worlds. Platige Image created the "Good Conflict" episode, set in the world of Crossfire. As a storm approaches, two rival mercenary groups collide, each fighting for their vision of the greater good. Their fates hang in the balance.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Art
Augmented Reality
Games
Real-Time
Full Conference
Experience
DescriptionSeed of Light is an art game where museum visitors and mass online players connect through an intertwined plotline, mediated by the custom 'Seed' controller. It depicts an artist's journey through life's chapters, transcending cultural and identity barriers to foster profound empathy and emotional connections across diverse audiences.
Spatial Storytelling
Arts & Design
Production & Animation
Not Livestreamed
Not Recorded
Performance
Virtual Reality
Full Conference
Experience
DescriptionSeeing Yourself on Stage explores the evolution of a revolutionary real-time performer monitoring system in VR. While built for motion capture performance, its applications extend beyond theater—enhancing training simulations, squad-based VR gaming, third-person gameplay, VTubing, and virtual production, offering new solutions for immersive self-monitoring in digital environments.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a novel segment-based framework for light transport simulation, efficiently assembling paths from disconnected segments. Our method includes innovative segment sampling techniques and corresponding estimation strategies. To demonstrate its strengths, we propose a robust bidirectional path filtering prototype, achieving superior rendering quality and faster convergence than state-of-the-art methods.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a novel method that integrates unsupervised style from arbitrary references into a text-driven diffusion model to generate semantically consistent stylized human motion. We leverage text as a mediator to capture the temporal correspondences between motion and style, enabling the seamless integration of temporally dynamic style into motion features.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Art
Ethics and Society
Virtual Reality
Full Conference
Experience
DescriptionSentimentVoice is a live VR performance using emotion-tracking AI to amplify immigrant narratives. It transforms surveillance technology into an empathetic storytelling tool, actively listening to immigrant stories, responding to voice and facial expressions. Featuring real stories, interactive VR environments, and AI-driven visuals, the project explores identity, memory, and emotional communication.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce shape-space eigenanalysis to compute eigenfunctions across continuously-parameterized shape families. These eigenfunctions are obtained by minimizing a variational principle. To handle eigenvalue dominance swaps at points of multiplicity, we incorporate dynamic reordering during optimization. The method is discretization-agnostic and differentiable, enabling applications in sound synthesis, locomotion, and elastodynamic simulation.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionDreamWorks has long shared rig templates between different feature productions. Re-use avoids the need to constantly re-invent common behavior. This talk presents a new synchronization paradigm, based on Premo's new integrated versioning, that allows data to be efficiently synchronized between productions.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionShifting Academic Culture: From Parking Brakes to Acceleration
This workshop-format session invites educators to explore the shift from "minor chord" thinking—defined by resistance, excuses, and inertia—to a "major chord" mindset of enthusiasm, strategic action, and bold innovation. We'll discuss practical tools to overcome common academic roadblocks, amplify student work, and spark momentum in creative programs. Bring your stories, tactics, and questions for a collaborative conversation about building thriving academic cultures. Whether you're up against budget walls, skeptical committees, or just burnout, this session will offer actionable strategies to pivot, persevere, and lead visible change in your institution.
Keynote Speaker
Full Conference
Virtual Access
Experience
DescriptionTo a distant spacecraft, the richness of our home planet appears as nothing more than a fraction of a pixel. When NASA’s Voyager 1 spacecraft took its famous Pale Blue Dot photo in 1990, the only planets known were those in the Solar System. Since then, nearly 6,000 planets have been discovered around other stars. These exoplanets are wildly different from our own, so the hunt for another Earth is on. To find it, astrophysicists are developing the technology to directly image a habitable exoplanet. This distant pale blue dot may provide the first glimpse of life beyond Earth and answer the question: “Are we alone?” Join us for a behind-the-scenes look at the future of the search for life in the Universe and how it’s reshaping the art and storytelling of Earth and alien worlds.
Dr. Anjali Tripathi is an astrophysicist who has served as NASA’s inaugural Exoplanet Science Ambassador. As an expert in planet formation and evolution, she has contributed to the design of new space missions at NASA’s Jet Propulsion Laboratory (JPL). Dr. Tripathi is a leading science communicator, regularly featured by the BBC, PBS, and TED, and a film and television consultant for the National Academy of Sciences. She has served on the NASA Sea Level Change Team, been a Research Associate of the Smithsonian, and led data visualization for the L.A. County Department of Public Health COVID Data and Epidemiology Team. She previously led science policy for the White House Office of Science and Technology Policy and the U.S. Department of Agriculture. She earned degrees in physics and astrophysics from Harvard, Cambridge, and MIT.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionSIGGRAPH Asia 2025, the 18th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, will take place from 15 – 18 December 2025 at the Hong Kong Convention and Exhibition Centre (HKCEC).
Generative Renaissance: This year’s theme explores how AI is transforming creativity, art, and science, leading to new forms of expression and discovery. The event will highlight how generative AI is reshaping industries and sparking new creative possibilities.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionNew to SIGGRAPH? Returning after a SIGGRAPH-break? Just curious about what the conference has to offer? If your answer is YES to any of these questions, this session is your friendly introduction to the week ahead.
SIGGRAPH for Beginners will help you make the most of your time at the conference. You'll get practical tips, helpful context, and a chance to ask questions in a relaxed setting.
Topics include:
- How to navigate the conference and its program – Jim Kilmer (Pathfinders)
- Where to find help and support during the week – Student Volunteers Program
- Top 5 sessions you won’t want to miss
- An introduction to ACM SIGGRAPH, the organization behind the event – Mikki Rose (Conference Advisory Group Chair)
Bring your questions and curiosity—and stay until the end for our Meet & Greet, featuring drinks and snacks, to kick off the conference in style!
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIt’s time for the SIGGRAPH community to have more impact! The convergence of AI, computer graphics, and immersive media is transforming healthcare. But to ensure meaningful adoption, these innovations must also integrate cultural and human-centered perspectives.
With its competencies in research, technology, and culture, SIGGRAPH should be a core player in this domain. Yet it remains relatively unknown within the medical community. We must therefore link these worlds and open new opportunities for science, the economy, and society.
You can be part of it! Join the opening session to hear from leading experts sharing strategies for achieving international impact, behind-the-scenes views of innovation ecosystems, and real case studies. You will also learn how designers and artists shape human-centered tech adoption. Afterward, join the networking event supported by the Swiss Consulate and take part in the interactive workshop—help shape SIGGRAPH’s strategy for health innovation!
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe SIGGRAPH Thesis Fast Forward is a forum for Ph.D. students in computer graphics to present and broadcast their research in 3 minutes or less. Students from around the world introduce us to a wide variety of topics spanning research over the last five years.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionBertil lives with his parents. One day, a little thumb-sized boy appears under his bed! His name is Nils Karlsson Pussling. The two boys bond right away, and Nils shows Bertil a fascinating magical world right inside his bedroom walls. Neither of them will ever be lonely again.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionA simple and robust modification to triangle mesh reduction bridges the gap for what artists want in quad-dominant mesh reduction, preserving symmetry, topology, and joints without sacrificing geometric quality, allowing for high-quality level-of-detail meshes at no cost compared to what was done before.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a novel method for accurate 3D garment reconstruction from single-view images, bridging 2D and 3D representations. Our mapping model creates connections among image pixels, UV coordinates, and 3D geometry, resulting in realistic garments with intricate details and enabling downstream applications like garment retargeting and texture editing.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper presents Sketch2Anim, the first approach to automatically translate 2D storyboard sketches into high-quality 3D animations through multi-conditional motion generation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose Sketch3DVE, a sketch-based 3D-aware video editing method to enable detailed local manipulation of videos with significant viewpoint changes. Our approach leverages detailed analysis and editing of underlying 3D scene representations, combined with a diffusion model to synthesize realistic and temporally coherent edited videos.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis work addresses the challenge of learning robust interaction skills from limited demonstrations. By introducing novel data augmentation techniques for skill transitions and recovery patterns, combined with enhanced reinforcement imitation learning methods, we achieve superior performance in learning interaction skills, demonstrating improved generalization and recovery capabilities across diverse manipulation tasks.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA man traumatized by his youth spent in a reformatory is devoted to saving kids destined for a life of misery through death. For him, dying young means living forever in the best version of one's self.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWhen a Lonely Polar Bear Can't Find a Friend... He Makes One!
Set in a rapidly changing world, it tells the story of a polar bear on his quest for companionship.
A hand-drawn 2D film, it's infused with humor and emotional depth in the tradition of classic animated films.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionSOAP awakens the 3D princess from 2D stylized photos. Unlike other works that directly drive the 2D photos, SOAP reconstructs well-rigged 3D avatars, with detailed geometry and all-around texture, from just a single stylized picture.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn the context of quasi-Monte Carlo rendering, we introduce a new Sobol' construction and demonstrate that particular pairs of polynomials of the form p and p^2+p+1 in Sobol'-based sampling lead to (1, 2)-sequences. They can be combined to form high-dimensional low discrepancy sequences with good 2D projections.
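As a generic, hedged illustration of the low-discrepancy sampling this paper builds on (this is the standard base-2 radical inverse and Hammersley point set, not the paper's polynomial-pair Sobol' construction, which relies on direction numbers derived from primitive polynomials):

```python
def radical_inverse_base2(i: int) -> float:
    """Mirror the bits of i around the binary point: 0b110 -> 0.011 in base 2."""
    result, f = 0.0, 0.5
    while i:
        if i & 1:
            result += f   # bit set: add the corresponding fractional weight
        f *= 0.5
        i >>= 1
    return result

def hammersley_2d(n: int):
    """n well-stratified points in [0,1)^2: (i/n, radical inverse of i)."""
    return [(i / n, radical_inverse_base2(i)) for i in range(n)]
```

Such sequences cover the unit square far more evenly than uniform random samples, which is why quasi-Monte Carlo renderers converge faster for the same sample count.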
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionAfter the enthusiastic response to our BOF in 2024, we're back and everyone is invited to join the conversation!
This year we're changing the format slightly: attendees can choose among a few smaller conversations, which we hope will create a more welcoming space, especially for those less comfortable speaking in larger groups.
Topics to explore include: Confidence, Conflict, Focus and Productivity, Delegation, Uncertainty, Giving and Receiving Feedback, Power Dynamics, Imposter Syndrome, Resilience, Psychological Safety, Perception of Risk, and more...
All voices, perspectives, experience, insights, questions, and curious minds are very welcome!
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Augmented Reality
Education
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionSpatial p5 is an open-source toolkit that simplifies real-time prototyping for mixed reality. Built on p5.js, it enables artists and designers to create immersive experiences without technical barriers. This talk and demo showcase its development, creative potential, and impact on interactive storytelling through intuitive, browser-based XR experimentation.
Talk
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Physical AI
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Virtual Access
Sunday
DescriptionMachine-Guided Spatial Sensing presents a novel augmented reality system that integrates active learning and human-in-the-loop interaction to measure environmental fields. Using an HMD and handheld sensor, the method accurately captures flow fields and other quantities, outperforming traditional approaches in speed, efficiency, and ease-of-use.
Real-Time Live!
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Virtual Access
Tuesday
DescriptionMachine-Guided Spatial Sensing uses augmented reality and human interaction to efficiently map environmental fields, such as air flows or gas concentrations. Continuous real-time data analysis updates the field map and guides the user's handheld sensor to optimal measurement points, enhancing accuracy.
Spatial Storytelling
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Virtual Reality
Full Conference
Experience
DescriptionThe Spatial Storybook system automatically converts a monaural audiobook into an immersive binaural presentation. It comprises an LLM prompted to reimagine a passage of text as a stage play, including stage directions and descriptions of rooms and wall materials; this information conditions a custom, real-time scene rendering engine.
Spatial Storytelling
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Generative AI
Pipeline Tools and Work
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis session will share:
* Volumetric recording, Gaussian Splatting: Importance of capturing real world & human performance
* Preserving imperfections
* Story progression & spatial mise en scène: impact on emotional connection
* Dialogue and sound recording in XR
* Timeless storytelling methods reconfigured for an evolving medium
* Impact of technological advances on creating powerful experiences: higher emotional engagement
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a method for estimating spatiotemporally varying indoor lighting from videos using a continuous light field represented as an MLP. By leveraging 2D diffusion priors fine-tuned to predict lighting jointly at multiple locations, our approach achieves superior performance and zero-shot generalization to in-the-wild scenes.
Computer Animation Festival
Production & Animation
Full Conference
Experience
DescriptionSIGGRAPH 2025 will open by honoring a film that forever changed the course of animation, technology, and storytelling. Pixar’s “Toy Story”, the world’s first fully computer-animated feature film, will be celebrated in a special 30th anniversary event that captures the spirit of innovation, perseverance, and creativity that defines both the film and the SIGGRAPH community.
Presented by SIGGRAPH’s Computer Animation Festival in partnership with ACM SIGGRAPH Pioneers, the tribute will be held on Sunday, 10 August 2025, at the Vancouver Convention Centre. The celebration will begin at 12:30 p.m. PDT with a featured introduction by Ed Catmull, co-founder of Pixar and a pioneering figure in computer graphics. In “Pioneers Featured Speaker: Catmull Story: To SIGGRAPH and Beyond”, Catmull will share personal reflections on the breakthroughs, challenges, and triumphs that made “Toy Story” possible.
Following the talk and a live audience Q&A, attendees will enjoy trivia and giveaways before a special screening of “Toy Story”, transporting audiences back to where the magic — and CG revolution — began.
Image credit: “Toy Story”, Copyright © Disney/Pixar
ACM SIGGRAPH 365 - Community Showcase
Computer Animation Festival
Full Conference
Experience
DescriptionSIGGRAPH 2025 will open by honoring a film that forever changed the course of animation, technology, and storytelling. Pixar’s “Toy Story”, the world’s first fully computer-animated feature film, will be celebrated in a special 30th anniversary event that captures the spirit of innovation, perseverance, and creativity that defines both the film and the SIGGRAPH community.
Presented by SIGGRAPH’s Computer Animation Festival in partnership with ACM SIGGRAPH Pioneers, the celebration will begin with a featured introduction by Ed Catmull, co-founder of Pixar and a pioneering figure in computer graphics. In “Pioneers Featured Speaker: Catmull Story: To SIGGRAPH and Beyond”, Catmull will share personal reflections on the breakthroughs, challenges, and triumphs that made “Toy Story” possible.
Following the talk and a live audience Q&A, attendees will enjoy trivia and giveaways before a special screening of “Toy Story”, transporting audiences back to where the magic — and CG revolution — began.
Press release
All of Ed Catmull's contributions to SIGGRAPH
Image credit: “Toy Story”, Copyright © Disney/Pixar
Art Paper
Arts & Design
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Full Conference
Virtual Access
Experience
Tuesday
DescriptionInstead of dwelling on the concern that AI will displace artists, we emphasise a role for artists in reshaping technology and branching it in new directions: a role that positions us less as users of AI technology waiting for its creative outputs, and more as makers of what AI can be, perhaps leading us towards an AI that is as unnatural, occult, and esoteric as it is practical.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce Sphere Carving, a method for automatically computing conservative bounding volumes for implicit surfaces. SDF queries define a set of spheres, from which we extract intersection points used to compute a bounding volume with guarantees. Sphere Carving is conceptually simple and independent of the function representation.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce to the computer graphics community the spherical harmonics Hessian, together with solid spherical harmonics, a variant of spherical harmonics that lets us compute the Hessian efficiently and accurately. These mathematical tools are used to develop an analytical representation of the Hessian matrix of spherical harmonics coefficients for spherical lights.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe leverage repetitions in 3D scenes to improve reconstruction of regions that are low quality due to poor coverage and occlusions. Our method segments the repetitions, registers them together, and optimizes a shared representation with multi-view information flowing from all repetitions, improving the reconstruction of each individual repetition.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionSplat4D generates high-fidelity 4D content from monocular videos by integrating multi-view rendering, inconsistency identification, a video diffusion model, and asymmetric U-Net refinement. Our framework maintains spatial-temporal consistency while preserving details and following user guidance, achieving state-of-the-art benchmark performance. Applications include text/image-conditioned generation, 4D human modeling, and text-guided content editing.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe combine splines, a classical tool from applied mathematics, with implicit Coordinate Neural Networks to model deformation fields, achieving strong performance across multiple datasets. The explicit regularization from spline interpolation enhances spatial coherency in challenging scenarios. We further introduce a metric based on Moran’s I to quantitatively evaluate spatial coherence.
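Moran's I, the spatial-autocorrelation statistic the abstract mentions, is a standard quantity and easy to state. A minimal sketch follows; how the paper builds spatial weights from a deformation field is not given in the abstract, so the weight matrix here is a caller-supplied assumption:

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I:  I = (N / W) * sum_ij w_ij z_i z_j / sum_i z_i^2,
    where z_i = x_i - mean(x) and W = sum_ij w_ij.  `weights` is an N x N
    spatial-weight matrix with zero diagonal.  I > 0 indicates spatially
    coherent (smooth) values; I < 0 indicates alternating values."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    return (len(x) / w.sum()) * (w * np.outer(z, z)).sum() / (z * z).sum()
```

On a four-node chain graph, the smooth ramp [1, 2, 3, 4] scores I = 1/3, while the alternating signal [1, -1, 1, -1] scores I = -1.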
Keynote Speaker
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
DescriptionAutodesk invites you to explore the transformative role of artificial intelligence in the world of visual effects, animation, and games. Discover how Autodesk is advancing creative workflows with AI, harnessing its potential to empower artists to be their most imaginative, and opening the door to a broader spectrum of creators.
Explore the story of Wonder Dynamics, co-founded by VFX Supervisor Nikola Todorovic and actor Tye Sheridan (known for starring in Steven Spielberg’s “Ready Player One”). Wonder Dynamics is driven by a passion for supporting creators of all skill levels — ensuring AI is used as a tool to empower filmmakers and amplify unheard creative voices. That vision led to Autodesk Flow Studio (formerly Wonder Studio) — an innovative cloud-based platform that accelerates VFX pipelines with cutting-edge AI, democratizing the industry while preserving artistic integrity.
Hear from a panel of visionary experts (guest panelists to be announced) as they discuss how AI is shaping the future of storytelling, amplifying artistry, and driving inclusivity in media and entertainment. Don’t miss this opportunity to explore the exciting future of AI as an indispensable tool in artists’ creative arsenal.
Mike Haley serves as senior vice president of research at Autodesk. In this role, he leads the company’s industrial research that uncovers new technologies for customer-centric transformations while addressing challenges like climate change, automation, and industry convergence.
The Autodesk Research team encompasses scientific research (AI, human-computer interaction, simulation and systems, optimization, geometry, visualization, and robotics), industry research (design, manufacturing, architecture, infrastructure, construction, and media), and strategic foresight. Mike also oversees the Autodesk Technology Centers, working with customers and partners on the future of design and make. This multidisciplinary, integrated approach guides the company’s technology, strategy, and product development. Mike also leads Autodesk’s generative AI efforts.
Mike’s track record includes establishing the company’s AI Lab, driving machine learning competency, and pioneering cloud technology at Autodesk. He holds an MS in computer science from the University of Cape Town, South Africa, and has expertise in computer graphics, machine learning, distributed systems, and mathematical analysis.
Maurice Patel is the VP of media & entertainment (M&E) strategy for Autodesk’s M&E business. Since 1996, Maurice has helped technology companies develop software for creative professionals in the media and entertainment industry. During his time at Autodesk, Maurice has overseen the product marketing of 3ds Max, Maya, and MotionBuilder, among other products, and has acted as an industry expert interfacing with various customers and leads. He has led teams across Autodesk’s business strategy and marketing segments for M&E. Currently, Maurice is helping drive the media and entertainment solutions group strategy. His mission is to help the group orient its projects and strategy for film, video games, and television.
Nikola Todorovic is the co-founder of Wonder Dynamics, an Autodesk company. As an entrepreneur, visual effects supervisor, and award-winning filmmaker, Nikola spent most of his career working at the intersection of film and technology. This ultimately led him to dream up Wonder Dynamics with fellow co-founder and actor/producer Tye Sheridan. Together, they created the company’s proprietary AI software, Autodesk Flow Studio (formerly Wonder Studio), a cloud-based 3D animation and VFX platform that combines artificial intelligence (AI) with established tools, helping artists more easily animate, light, and compose 3D characters within live-action scenes.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description3D Gaussian Splatting (3DGS) enables fast 3D reconstruction and rendering but struggles with real-world captures due to transient elements and lighting changes. We introduce SpotLessSplats, which leverages semantic features from foundation models and robust optimization to remove transient effects, achieving state-of-the-art qualitative and quantitative reconstruction quality on casual scene captures.
Emerging Technologies
New Technologies
Robotics
Full Conference
Experience
DescriptionWe demonstrate a robotic avatar concept in which pilots can embody a fish-like form, “swimming” through indoor spaces to interact with others remotely. By mimicking the flapping flight of birds, pilots can control the avatar through body movements. This work opens new opportunities for telepresence with a wing-based approach.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionExisting Gaussian Splatting avatars require desktop GPUs, limiting mobile device use. SqueezeMe converts these avatars into a lightweight representation, enabling real-time animation and rendering on mobile devices. By distilling the corrective decoder into an efficient linear model, SqueezeMe achieves 72 FPS on a Meta Quest 3 VR headset.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
Production & Animation
Research & Education
Livestreamed
Recorded
Dynamics
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionA suite of techniques from the Loki simulation framework addresses collision instabilities in character effects, offering solutions like proximity-tolerant contact projection, compliant gap expansion, strain limiting, and advanced collider management. These tools enable a stable, intuitive workflow for integrating physically based collisions with complex production animations.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionCosserat rods have become increasingly popular for simulating complex thin elastic rods. However, traditional approaches often encounter significant challenges in robustly and efficiently solving for valid quaternion orientations. We introduce Stable Cosserat rods, which can achieve high accuracy with high stiffness levels and maintain stability under large time steps.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionStable-Makeup is a diffusion-based makeup transfer method. It leverages a Detail-Preserving makeup encoder and content-structure control modules to preserve facial content and structure during transfer. Extensive experiments show that Stable-Makeup outperforms existing methods, offering robust, generalizable performance.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWe invite animators, riggers, and engineers to discuss animation tools (third party and proprietary), workflows, and technology trends in the industry.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Capture/Scanning
Dynamics
Education
Games
Generative AI
Geometry
Industry Insight
Lighting
Math Foundations and Theory
Physical AI
Pipeline Tools and Work
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionA discussion about the current technological and workflow state of hair in the Visual Effects, Animation, Video Games and VR/AR industries. Covering topics that span the entire pipeline of bringing hair to life.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Experience
DescriptionWe invite animators, riggers, and engineers to discuss rigging tools (third party and proprietary), workflows, and technology trends in the industry.
Course
Research & Education
Livestreamed
Recorded
Geometry
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionWe describe grid-free Monte Carlo methods for solving partial differential equations on complex geometries. These methods offer unique opportunities to accelerate engineering design cycles by being easy to parallelize and output-sensitive like Monte Carlo rendering, while bypassing the need for simulation-ready meshes that are burdensome to generate for conventional solvers.
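The grid-free Monte Carlo approach the course describes can be illustrated with the classic walk-on-spheres estimator for the Laplace equation. This is a minimal sketch, not the course's code; the boundary callables are caller-supplied placeholders:

```python
import math
import random

def walk_on_spheres(x, y, boundary_dist, boundary_value, eps=1e-3, max_steps=1000):
    """One grid-free Monte Carlo sample of the solution to the Laplace
    equation at (x, y).  `boundary_dist` returns the distance from a point
    to the domain boundary; `boundary_value` returns the Dirichlet data
    near the boundary.  No mesh is ever built."""
    for _ in range(max_steps):
        r = boundary_dist(x, y)
        if r < eps:                       # close enough to read boundary data
            return boundary_value(x, y)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)          # jump to a uniform point on the
        y += r * math.sin(theta)          # largest empty circle around (x, y)
    return boundary_value(x, y)

def solve(x, y, n=2000, **kw):
    # average n independent walks; error shrinks like O(1/sqrt(n))
    return sum(walk_on_spheres(x, y, **kw) for _ in range(n)) / n
```

On the unit disk with Dirichlet data u = x (itself harmonic), `solve(0.5, 0.0, ...)` converges to 0.5, and each walk is independent, which is what makes the method trivially parallel.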
Talk
Arts & Design
Livestreamed
Recorded
Animation
Art
Lighting
Modeling
Full Conference
Virtual Access
Sunday
DescriptionA new variation on an old classic, Steerable Perlin Noise offers anisotropic noise at little extra cost, offering a new dimension of control to the average artist.
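The abstract does not spell out the construction, but one cheap way to obtain anisotropic, steerable noise from any isotropic base noise is to evaluate the base in a rotated, stretched frame. The sketch below is illustrative only (it uses a simple hash-based value noise as the base, not Perlin's gradient noise, and is not the talk's exact method):

```python
import math

def hash01(ix, iy):
    # deterministic pseudo-random value in [0, 1) per integer lattice point
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def smooth(t):
    # smoothstep fade curve, as in classic noise implementations
    return t * t * (3 - 2 * t)

def value_noise(x, y):
    """Isotropic base noise: bilinear interpolation of hashed lattice values."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    a, b = hash01(ix, iy), hash01(ix + 1, iy)
    c, d = hash01(ix, iy + 1), hash01(ix + 1, iy + 1)
    u, v = smooth(fx), smooth(fy)
    return (a * (1 - u) + b * u) * (1 - v) + (c * (1 - u) + d * u) * v

def steered_noise(x, y, angle, stretch):
    """Anisotropic noise: rotate into the steering frame, then compress the
    coordinate along the steering direction so features elongate along
    `angle` by factor `stretch`."""
    ca, sa = math.cos(angle), math.sin(angle)
    u = (ca * x + sa * y) / stretch
    v = -sa * x + ca * y
    return value_noise(u, v)
```

Varying `angle` and `stretch` per point from a vector field gives the artist directional control while the cost stays a single base-noise evaluation.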
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionStitch-A-Shape introduces a novel framework for generating B-Rep models by directly addressing both topology and geometry. Using a sequential stitching approach, it assembles 3D shapes from vertices through curves to faces, effectively managing topological and geometric complexities. The framework demonstrates superior performance in shape generation, class-conditional generation, and autocompletion tasks.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel stochastic version of the Barnes-Hut approximation. Regarding the level-of-detail (LOD) family of approximations as control variates, we construct an unbiased estimator of the kernel sum being approximated. Through several examples in graphics, we demonstrate that our method outperforms a GPU-optimized implementation of the deterministic Barnes-Hut approximation.
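The control-variate idea can be illustrated on a flat clustering: use each cluster's centroid term as the cheap deterministic approximation, then add a one-sample randomized correction so the estimator is exactly unbiased. This is a much-simplified sketch (real Barnes-Hut uses a hierarchy and an opening criterion, and the paper's estimator is more sophisticated; `kernel` is a toy stand-in):

```python
import random

def kernel(x, y):
    # toy 1D inverse-distance kernel, a stand-in for gravity/smoothing kernels
    return 1.0 / (abs(x - y) + 1.0)

def stochastic_cluster_sum(x, clusters, rng):
    """Unbiased estimate of sum over all points p of kernel(x, p).
    Per cluster: the centroid term is the control variate (the usual LOD
    approximation), and one randomly sampled point supplies a correction
    whose expectation is exactly the approximation error."""
    total = 0.0
    for pts in clusters:
        m = sum(pts) / len(pts)                  # centroid -> cheap LOD term
        coarse = len(pts) * kernel(x, m)         # deterministic control variate
        j = rng.randrange(len(pts))              # one random point per cluster
        correction = len(pts) * (kernel(x, pts[j]) - kernel(x, m))
        total += coarse + correction             # E[total] equals the true sum
    return total
```

Because the correction has zero mean only relative to the centroid term, a better LOD approximation directly lowers the estimator's variance, which is the point of treating the LOD family as control variates.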
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionStochastic preconditioning adds spatial noise to query locations during neural field optimization; it can be formalized as a stochastic estimate for a blur operator. This simple technique eases optimization and significantly improves quality for neural fields optimization, matching or outperforming custom-designed policies and coarse-to-fine schemes.
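The core trick amounts to one line in the training loop: jitter the query locations with Gaussian noise before evaluating the neural field, which on average estimates the field convolved with a Gaussian blur. A sketch under stated assumptions (the linear annealing schedule and scale below are illustrative, not the paper's):

```python
import numpy as np

def preconditioned_batch(points, sigma, rng):
    """Perturb query points with Gaussian noise of scale sigma before the
    field evaluation.  Averaged over training, this is a stochastic
    estimate of querying a blurred version of the target field."""
    return points + rng.normal(scale=sigma, size=points.shape)

def sigma_schedule(step, total_steps, sigma0=0.05):
    # anneal the blur width linearly to zero to recover the sharp field
    return sigma0 * max(0.0, 1.0 - step / total_steps)
```

In a training loop one would call `preconditioned_batch(batch, sigma_schedule(step, total), rng)` in place of `batch`; no change to the loss or the architecture is needed, which is why the technique composes with existing neural-field pipelines.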
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionTo reduce the high rendering costs and transmission bandwidth requirements of path-tracing-based cloud rendering, we propose a novel streaming-aware rendering framework that learns a jointly optimal model integrating two path-tracing acceleration techniques (adaptive sampling and denoising) with video compression, aided by client-side G-buffer collaboration.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionStreamME takes live-stream video as input to enable rapid 3D head avatar reconstruction. It achieves impressive speed, capturing the basic facial appearance within 10 seconds and reaching high-quality fidelity within 5 minutes. StreamME reconstructs facial features through on-the-fly training, allowing simultaneous recording and modeling without the need for pre-cached data.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionA novel approach for the computational modeling of lignified tissues, such as those found in tree branches and timber, extends strand-based representation to describe biophysical processes at short and long time scales. The computationally fast simulation leverages Cosserat rod physics and enables the interactive exploration of branches and wood breaking.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a stroke-based method for transforming dynamic 3D scenes with smoke, fire, or clouds into painterly animations. Learning from user-provided exemplars, our system transfers stroke styles—color, width, length, and orientation—while preserving motion and structure. This enables expressive and coherent renderings of complex volumetric media.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe paper presents StructRe, a structure rewriting system for 3D shape modeling. It uses an iterative process to rewrite objects, either upwards to more concise structures or downwards to more detailed ones, generating hierarchies. This localized rewriting approach enables probabilistic modeling of ambiguous structures and robust generalization across object categories.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Dynamics
Simulation
Full Conference
Experience
DescriptionStudio Storage: Storage for VFX and animation has extreme requirements, and only a limited number of proven solutions are available; alternatively, you can build your own. Industry experts and new entrants are all welcome for a sharing of knowledge, experience, and best practices. Email joe@jdamato.com with any questions or comments.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and ML Ops.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a novel text-to-vector pipeline with style customization that disentangles content and style in SVG generation. Our method represents the first feed-forward text-to-vector diffusion model capable of generating SVGs in custom styles.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionIn creating the stylized organic environments of The Wild Robot, the goal for Look Dev was to enable 3D artists to work more like traditional artists, capturing the impromptu and organic techniques of 2D art. This required innovative new tools to overcome the constraints of the 3D medium.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionSuperman marks the third feature film and fifth project overall on which Wētā FX has had the pleasure of working with James Gunn. In this talk, hear how the team got stuck into bringing Superman's latest fight against an arch nemesis to life.
Four-time Academy Award nominee and Senior VFX Supervisor Guy Williams has captained Wētā FX’s work on all these projects, and contributed to VFX tentpoles such as Peter Jackson’s The Lord of the Rings and King Kong, Steven Spielberg’s The BFG, and Ang Lee’s Gemini Man. He is joined this time by Visual Effects Supervisor Sean Walker, who was nominated for an Oscar for his work on Marvel’s Shang-Chi and the Legend of the Ten Rings and has worked on several Planet of the Apes films, The Hobbit trilogy, and many Marvel films, including Avengers: Endgame. Also joining them is Sequence VFX Supervisor Jo Davison, who was CG Supervisor on Guardians of the Galaxy: Vol. 3, which earned her a Visual Effects Society Award, and during her 15 years at Wētā FX has worked on a swathe of high-profile projects including Avatar, The Adventures of Tintin, and Shang-Chi and the Legend of the Ten Rings.
These three experienced supervisors will showcase our work on the project, with particular focus on two key sequences, each made up of around 300 VFX shots.
Jo Davison will explore bringing colossal monsters to life, sharing how the team tackled textures at immense scales for maximum visual impact across close-up creature interactions and wider shots. A range of FX simulations was needed for the creature's powers, making scale the subject of a range of considerations. The Wētā FX team did a huge amount of digital city work for this sequence, and the epic cinematography required for such scale meant the vast majority of the work is fully CG.
The second massive undertaking for the team was supervised by Sean Walker: a surreal and fantastical universe constructed from a selection of crystalline metallic forms, grown mathematically by a custom Houdini script. This sparked an entirely new level of creativity as nine distinct environments were brought together, and allowed for additional fun for the creatures department. The challenge in this universe was to make an otherworldly, unbelievable realm behave in a way that still felt physically plausible to the audience. Sean will explore the challenges and innovative solutions used across postproduction to bring this fantastical realm to life.
Having both worked with Guy on previous James Gunn projects, the supervisors enjoyed added creative latitude beyond a strict VFX delivery, with the team helping to flesh out sequences from pre-vis to final composite. The team will draw from their extensive knowledge and share valuable insights into collaborative filmmaking.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionVarious symbolic objects such as a fish, an apple, an hourglass, a diamond, and a heart, are nicely cooked and displayed on the dining table for a god-like creature. The eternity we yearn for, the meaning we seek, is just one of the daily meals he enjoys.
Talk
Production & Animation
Not Livestreamed
Not Recorded
Animation
Performance
Full Conference
Virtual Access
Sunday
DescriptionAnimating bird wing fold poses often requires labor-intensive counter animations. Our innovative surface constraint wing fold system, developed for The Wild Robot, enables animators to maintain folded wings seamlessly during dynamic movements. This system streamlines the process, saving time and allowing animators to focus on compelling performances.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe “Get an Airbnb” campaign compares the hiccups of hotels to the luxuries of an Airbnb. “Surrounded” takes the miniature world we’ve developed for Airbnb in an exciting new direction, placing the Airbnb in an immersive forest.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
DescriptionFrom climate change to biodiversity loss and resource exhaustion, human activities are impacting Earth’s limits and Computer Graphics is no exception. How can our research practices and organizations evolve to respect these limits?
After the success of last year’s BoF, we continue building a community of people who want to think about the broader impacts of our research and how to collectively work towards a more sustainable future.
This interactive meetup session will allow attendees to share experiences and questions on related topics in small groups, regardless of their current levels of involvement or expertise in sustainable approaches to research.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Ethics and Society
Full Conference
Experience
Description"Sway" reimagines bamboo, an ancient tool of documentation, as a dynamic and symbolic medium to explore contemporary digital reflections. By integrating the natural movement of bamboo with generative digital processes and dynamic capture techniques, it encapsulates evolving narratives through real-time data and participatory audience interactions. Bamboo transforms into a space where diverse voices converge, offering a profound lens to examine the mediated and often polarized perceptions of conflict in the digital age. The work invites viewers to critically reconsider how technology reshapes the ways we record, interpret, and emotionally engage with the complexities of war, memory, and our surroundings.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionSwiftSketch, a diffusion-based model with a transformer decoder, generates high-quality vector sketches from images in under a second. It progressively denoises stroke coordinates sampled from a Gaussian distribution, effectively generalizing across various object classes. Training uses the ControlSketch Dataset, a new dataset of synthetic, high-quality image–sketch pairs created by our ControlSketch optimization method.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe paper presents a tile-based rendering pipeline for modeling with implicit volumes, using blobtrees and smooth CSG operators. It requires no preprocessing when updating primitives and ensures efficient ray processing with sphere tracing. The method uses a low-resolution A-buffer and bottom-up tree traversal for scalable performance.
Art Paper
Arts & Design
Gaming & Interactive
Livestreamed
Recorded
Art
Audio
Augmented Reality
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionSynedelica challenges traditional approaches to mixed reality by transforming physical environments through a synesthetic experience. This artwork emphasizes the potential for immersive technology to mediate reality itself, fostering social interaction and shared experiences. By reimagining how we perceive and interact with our surroundings, Synedelica opens new perspectives at the intersection between virtual and physical. Our approach encourages the SIGGRAPH community to explore the innovative capacity of intuitive and serendipitous design.
Talk
Production & Animation
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionAdvancements in regression-based computer vision models have automated parts of VFX motion capture. However, complex shots often require manual intervention. By integrating user-specified cues into models, new tools improve tracking accuracy, blending automation with human expertise. This approach streamlines workflows and reduces time required for challenging VFX scenarios.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Generative AI
Real-Time
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis project explores how AI can preserve and reinterpret cultural memory, raising profound questions about the role of technology in connecting past and future. By transforming transient, everyday digital interactions into meaningful archives, it invites reflection on how today’s voices might shape the narratives of tomorrow. This work challenges the SIGGRAPH community to view the ephemeral as a valuable resource for cultural preservation, offering fresh perspectives on the intersection of art and technology.
Educator's Forum
Gaming & Interactive
Research & Education
Livestreamed
Recorded
Education
Games
Full Conference
Virtual Access
Experience
Wednesday
DescriptionIn this paper, we present a course that teaches fundamental game programming skills by building game engines from scratch. The course is designed as an entry-level pathway into game development and prepares students to learn industry-specific technologies later on.
Appy Hour
Gaming & Interactive
Research & Education
Artificial Intelligence/Machine Learning
Education
Full Conference
Experience
DescriptionThe University of Victoria is developing a project with the Royal BC Museum that will change the way that exhibits engage visitors, both in person and online. Maps, satellite data and museum collections are exposed in eXtended Reality and interacted with using conversational AI. Come play with our Unity Application!
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionBy combining a continuous, UVD tangent space 3DGS model with a UNet deformation network while maintaining adaptive densification, we present a novel high-detail 3D head avatar model that preserves even finer detail like pores and eyelashes at 4K resolution.
Emerging Technologies
New Technologies
Research & Education
Augmented Reality
Digital Twins
Display
Haptics
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionTeleTouch is a human-to-human teleoperation system enabling direct tactile communication. It uses a fingernail-mounted vibration motor with a 6-DOF sensor for touch sensing and a high-resolution, wearable electro-tactile display for feedback. AR integration synchronizes hand movements for intuitive remote interaction.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionTetWeave is a novel isosurface representation that jointly optimizes a tetrahedral grid and directional distances for gradient-based mesh processing like multi-view 3D reconstruction. It dynamically builds adaptive grids via Delaunay triangulation, ensuring watertight, manifold meshes. By resampling high-error regions and promoting fairness, it achieves high-quality results with minimal memory requirements.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionAnimPortrait3D is a novel method for text-based, realistic, animatable 3DGS avatar generation with morphable model alignment. To address ambiguities in diffusion predictions during 3D distillation, we introduce key strategies: initializing a 3D avatar with robust appearance and geometry, and leveraging a ControlNet to ensure accurate alignment with the underlying model.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe develop a method to compress textures and UVs for meshes in a content-aware way. We combine this with overlapping and folding symmetric UV charts, and demonstrate our approach on a dataset from Sketchfab. We outperform prior work in visual similarity to the original mesh.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Simulation
Full Conference
Virtual Access
Thursday
DescriptionScenes in which multiple characters come to life to achieve a cohesive performance present a unique set of animation challenges. While the techniques and workflows evolve, there are constant underlying principles. With examples, we present a distillation of the essence of the art of crowds animation through Disney Animation films.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Lighting
Full Conference
Virtual Access
Thursday
DescriptionThe directors of cinematography of “Moana 2” talk through the lighting design and the camera language used across three of the songs and how it supports the progression of Moana's physical and emotional journey, touching on themes of "Heightened Reality", "Chaotic Theatrics", and "Playful Abstraction".
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA vast ocean at the firmament. Humans suffering in the darkness underneath, longing for the warm light of the Lumathans - giant sea creatures worshiped as gods and the only light source in the world. Will the explorer SINH manage to reach the ocean and steal the Lumathans' light?
Talk
Production & Animation
Livestreamed
Recorded
Animation
Industry Insight
Lighting
Pipeline Tools and Work
Full Conference
Virtual Access
Wednesday
DescriptionLighting plays a key role in Disney Animation films. For “Moana 2,” we developed a new lighting workflow in Houdini, empowering artists with more control while reducing creative barriers. This talk explores how we enabled new workflows, mirrored successful experiences, and eased the transition with new tools in a legacy system.
Talk
New Technologies
Not Livestreamed
Not Recorded
Art
Augmented Reality
Capture/Scanning
Display
Geometry
Graphics Systems Architecture
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
DescriptionFor the HBO series Dune: Prophecy, we sought to innovate beyond our traditional layouts and previsualization pipeline. We explored a novel approach by integrating Gaussian Splat technology to enhance the quality and efficiency of our CG camera layouts. This talk will delve into the tangible benefits achieved through this methodology.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Digital Twins
Education
Ethics and Society
Generative AI
Image Processing
Industry Insight
Performance
Physical AI
Real-Time
Scientific Visualization
Simulation
Virtual Reality
Full Conference
Experience
DescriptionHuman creativity has never been more challenged: Through the advent of AI-based storytelling and creative tools, new forms of computational creativity emerge, giving rise to rapid advances across animation, storytelling, and computer graphics. Storyboards can now be created within seconds through AI-based platforms, animations are prompted into existence, and image inputs allow digital doubles to take the lead in feature films. However, risks persist across authorship, accreditation, and royalties, as well as authenticity, individual human expression, and handcrafted and digital artistry. Following brief presentations, this workshop invites participants to brainstorm their responses to a rapidly evolving field.
Immersive Pavilion
Gaming & Interactive
New Technologies
Production & Animation
Animation
Performance
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThe Grinning Man VR experience is an interactive musical performance of the song 'Labyrinth', from the hit London West End stage show, motion captured and animated in virtual reality. The performance is approximately 5 minutes long and uses head and eye-tracking to create the experience of liveness.
Educator's Forum
Arts & Design
Livestreamed
Recorded
Art
Education
Fabrication
Full Conference
Virtual Access
Experience
Wednesday
DescriptionThis paper demonstrates an exercise that exploits a low barrier to entry methodology by introducing photogrammetry, mesh editing, and 3D printing into a studio arts-based curriculum, making visible the influence and impact of each process on the final construct.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present the Mokume dataset for solid wood texturing, comprising nearly 190 samples from various species. Using this dataset, we propose an inverse modeling pipeline to infer volumetric wood textures from surface photographs, employing inverse procedural texturing and neural cellular automata (NCA).
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe Mooning is an animated mockumentary that reveals the truth behind the 1969 moon landing.
Art Paper
Arts & Design
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAfter all the presentations, attendees are encouraged to engage in a Q&A session to discuss the topics presented. This will be followed by an Art Papers wrap-up, summarizing key insights, and a preview by the SIGGRAPH 2026 Art Papers chair, offering a glimpse into the upcoming 2026 Art Papers program.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis scientific visualization explores the iconic Pillars of Creation in the Eagle Nebula and the various ways that stars and dust are intertwined within our galaxy, vibrant nebulae, and the birth of individual stars. Data from research papers and several NASA space telescopes underlies and informs the 3D models.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionThe Pipeline Conference continues to evolve and organize new events. We are always looking for help, and will use this in-person opportunity to discuss future events and the organization as a whole. If you're interested in having a say on how the conference and related events proceed in the future, come be part of the conversation.
Art Paper
Arts & Design
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Digital Twins
Modeling
Scientific Visualization
Simulation
Full Conference
Virtual Access
Experience
Tuesday
DescriptionHow would our attitudes to death and dying change if we could see how a human body is reabsorbed into the environment? The Posthumous World is a project about death and our relationship with the planet. At its centre will be a new artwork - a poetic meditation on a body’s journey to re-join the ecosphere, which is also a scientifically accurate, visual simulation of how a body decays after burial. The body in question will be the artist’s own.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA midcentury misogynist gets what he deserves when he’s forced to spend a day in heels.
Spatial Storytelling
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Performance
Robotics
Full Conference
Experience
DescriptionScylla System is a performance-based exploration of human-drone interactivity, where a dancer improvises within a complex choreography of 10 flying drones. Investigating agency, narrative, and the uncanny, the work tests the boundaries of improvisation and control while exploring drones as tools for enhancing dancers' adaptability and spatial awareness.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIt's time to get HtoA, MtoA, C4DtoA, KtoA and MaxtoA plugin users together and get a temperature check on what they love or maybe don't love about their respective Arnold Renderer plugins.
The BOF will be divided into three 20 minute segments.
1). Show-of-hands surveys on questions about plugin usage, ACES usage, CPU vs GPU, which versions are currently being used, whether Operators are in use, etc.
2). Personal anecdotes about lessons learned with the Arnold plugins.
3). UX and Feature Requests for Arnold that Solid Angle can listen to.
Talk
Arts & Design
Livestreamed
Recorded
Art
Capture/Scanning
Display
Hardware
Image Processing
Full Conference
Virtual Access
Sunday
DescriptionThis study innovates slit-scan photography with an automated system integrating quadruple exposure controls (aperture, shutter, ISO, slit) and servo-driven mechanisms. Utilizing large-format cameras and adaptive ND filters, it achieves 3D spatiotemporal imaging, merging technical precision with philosophical exploration of time-space continuity, advancing interdisciplinary artistic innovation.
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWe all understand that networking is critical in our field. And it’s equally important, if not more so, for recent graduates trying to break into the industry. Developing a strong and supportive community via an official ACM SIGGRAPH Student Chapter can achieve that! But having a student chapter at your university doesn’t just help the students directly, it can also indirectly aid in improving your program’s recruitment, retention, and graduation benchmarks. Learn how easy it is to start a student chapter and hear directly from a panel of current and former faculty advisors on how having a student chapter has benefited their students and universities. Though the session is tailored to educators, any attendee will find value in this session.
Production Session
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Lighting
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Monday
DescriptionFor the HBO Original Limited Series THE PENGUIN, 3107 VFX shots were created across 8 episodes. In this direct sequel to the feature film THE BATMAN, the city is on its knees, making way for Oswald “Oz” Cobb to rise up as the new kingpin of Gotham City.
On-set filming techniques included groundbreaking interactive lighting for the visceral Oz action sequences. AI-driven workflows were employed to heighten the distress of a flood-ravaged Gotham City. Young Victor witnesses the loss of his entire family in a flood, facilitated by complex simulations, striking the emotional core of viewers. Oz engages in a car chase filmed dry, then immersed in torrential rain through a complex mix of classic 2D elements and sophisticated FX animation. New hybrid techniques in facial tracking augmented a progression of 10 years of visible torment for Sofia Falcone during her captivity in Arkham Asylum. A car explosion ruptures the streets, creating a massive sinkhole with the help of new procedural destruction tools. All of the VFX were designed to serve the characters and tell their stories, to a degree that truly makes the results no longer VFX work, but storytelling work. Our goal was for the viewer to not see the VFX, but to feel them. Join us as our team of industry-leading VFX supervisors present the VFX of THE PENGUIN.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Industry Insight
Full Conference
Experience
DescriptionThoughtful3D is hosting an in-person industry meetup. Creative professionals will engage in an informal, lively conversation about the state of the 3D industry. Professionals, students, and anyone curious are all welcome to join us in this conversation!
Thoughtful3D is a mentorship community created by Conor Woodard & Mike 'Cash' Cacciamani. We have over 30 years of combined experience in the animation and visual effects industry. Thoughtful3D Mentorships focus on industry insight, individual growth, and community support.
Spatial Storytelling
Arts & Design
Not Livestreamed
Not Recorded
Art
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionThresholds: stories of our inner selves is a live contemporary dance and extended reality performance, where the audience participates in different overlapping narratives by dancing with both real and virtual actors, immersed in a multisensory experience.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Industry Insight
Full Conference
Virtual Access
Wednesday
DescriptionJoin DNEG VFX Supervisor Stephen James, DFX Supervisor Melaina Mace, and FX Supervisor Roberto Rodricks for a behind-the-scenes look at the visual effects that brought the incredible episodic series ‘The Last of Us’ to life!
Following the success of its award-winning work on the first season, DNEG was proud to return as a key visual effects partner for the highly anticipated second season. To kick off the session, the team will take the audience back to the first season where they reimagined, recreated, and brought to life iconic locations and environments from the video game, delivering over 530 visual effects shots across 71 sequences.
Now – in the series’ second season – Stephen, Melaina, and Roberto will discuss the show’s evolution and how the iconic visuals have developed and changed since the first season.
This season, the DNEG team imagined and built a post-apocalyptic Seattle, taking what they had learned from the first season and pushing the environment destruction and vegetation overgrowth even further. For ‘The Last of Us’ season two, DNEG’s artists delivered some of the company’s most complex and detailed environment, FX and compositing work to date.
The series culminates in the series’ most ambitious episode yet, which sees DNEG’s fully CG post-apocalyptic city environment hit by a huge storm, demanding some of the most detailed and complex ocean and water FX work that DNEG has ever delivered.
This talk will offer an in-depth and comprehensive look at the groundbreaking VFX work that created the locations, environments, and FX that brought ‘The Last of Us’ season two to life, cementing its standing as one of the most successful video-game adaptations ever.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Education
Performance
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionTimbreSpace is an XR music sandbox where users sculpt sound in an immersive spatial environment. Merging AI-driven analysis, interactive sampling, and generative audio, it transforms audio into evolving soundscapes. Our demo showcases intuitive, embodied sound design, enabling playful, expressive music creation beyond traditional DAW interfaces.
Production Session
New Technologies
Production & Animation
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Virtual Access
Thursday
DescriptionJoin Production Supervisor Kevin Baillie and Metaphysic VFX Supervisor Jo Plaete for a deep dive into how artist-empowering artificial intelligence enabled unprecedented workflows on Robert Zemeckis’s new film, Here. Told from a single perspective that transcends time, Here follows characters played by Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly across multiple decades of their lives. While these monumental age spans were crucial to the narrative, the production faced an immense technical challenge: how to maintain the actors’ performances and emotional realism while radically altering their appearances.
Baillie and Plaete will explain how the project began with a competitive screen test. At first, traditional 3D and motion-capture methods were deemed impractical for the required volume of shots and the nuance demanded by the close-up facial performances. Metaphysic’s early proof of concept, transforming the 67-year-old Tom Hanks into his Big-era twenties, demonstrated that an AI-based approach could bridge massive age gaps without sacrificing the authenticity of the actors’ expressions. Once the filmmakers chose this route, the Metaphysic team rapidly scaled, bringing together AI engineers, data scientists, VFX artists, and compositors who refined the technology into a production-ready pipeline that centered on the principle of empowering filmmakers and artists.
A central aspect of this pipeline was real-time on-set face swapping, in which a specialized server equipped with powerful GPUs received a direct feed from the main camera, processed it through neural networks, and returned a de-aged image to the director’s monitor. This setup gave Robert Zemeckis and the actors near-instant feedback, only a few frames of delay, allowing them to adjust on the spot. Tom Hanks and Robin Wright rehearsed in front of a “Youth Mirror” system that let them see their younger faces in real time, helping them modulate posture, eye lines, and subtle expressions. Baillie will recount how the production team integrated these tools into daily shooting schedules, and Plaete will offer insights into the technical hurdles of achieving high-fidelity performance transfer on set, highlighting how the AI’s flexibility liberated artists to iterate and refine with minimal technical friction.
The session also explores how the technology matured in post-production, where higher resolutions and additional refinements were required for final shots. By working with “plate prep” and proprietary compositing workflows, the VFX team preserved the liveliness of each performance, even when rewinding several decades, while avoiding the “uncanny valley.” Attendees will learn how neural networks were trained on massive libraries of archival footage for Tom Hanks, Robin Wright, and other cast members, capturing the shifts in bone structure and skin quality across each stage of life. Plaete will describe the “visual data science” approach his team used to tune these models, emphasizing that successful outputs demanded not just code but also a keen artistic and intuitive understanding of the actors’ faces, underscoring how human creativity remains paramount when wielding artist-empowering AI.
Throughout the session, both speakers will emphasize that while neural networks are powerful, they function best as a tool for artists.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThe goal of this work is to train lip sync animation models that can run in real-time and on-device. We design a two-stage knowledge distillation framework to distill large, high-quality models. Our results show that we can train small models with low latency and a comparatively small loss in quality.
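The abstract above mentions a two-stage knowledge-distillation framework without spelling out the loss. As a generic illustration only (not the authors' pipeline; function names are hypothetical), a minimal Hinton-style soft-target distillation loss can be sketched in plain Python:

```python
import math

def softmax(logits, temperature):
    # Temperature-scaled softmax over a list of logits.
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the temperature-softened teacher and student
    # distributions, scaled by T^2 so gradient magnitudes stay comparable
    # to a hard-label cross-entropy term.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0
    )
```

The loss is zero when the student exactly matches the teacher and grows as their softened distributions diverge; a real on-device lip-sync distillation would add task losses and run over batches of animation frames.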
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionTokenVerse extracts complex visual elements from images by identifying semantic directions in per-token modulation space of DiT models for each word in the image caption. It's capable of combining concepts from multiple sources by adding corresponding directions, enabling flexible generation of new combinations including abstract concepts like lighting and poses.
Stage Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Physical AI
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionAs video generation technology rapidly advances, the need for robust evaluation frameworks becomes critical. While automated metrics are largely sufficient for traditional AI modalities, video generation models require human experts to assess temporal coherence, context, motion realism, visual quality, and aesthetics in ways that automated methods cannot yet easily replicate. Additionally, current benchmarks for visual content are not always actionable or understandable for AI developers, as they tend to be subjective and lack formal criteria.
At Toloka, together with industry professionals we have developed the Mainstream Movies video evaluation toolkit, a comprehensive evaluation framework for cutting-edge VideoGen models to close the gap between creative users and engineers. Our approach combines domain expertise with systematic and detailed evaluation protocols to deliver actionable insights for AI developers.
We will demonstrate how our toolkit addresses the unique challenges of video generation evaluation, showcase evaluation results from leading VideoGen models, and discuss the methodology that enables scalable, professional-grade video assessment.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionTopological Offsets is a method for generating offset surfaces that are topologically equivalent to an offset infinitesimally close to the surface. By construction, the offsets are manifold, watertight, self-intersection-free, and strictly enclose the input. Tested on Thingi10k, it supports applications like manifold extraction, layered offsets, and robust finite offset computation.
Talk
Research & Education
Livestreamed
Recorded
Ethics and Society
Hardware
Full Conference
Virtual Access
Sunday
DescriptionWe surveyed 888 SIGGRAPH papers from 2018-2024 and gathered author-reported GPU models. By contextualizing the hardware reported in papers with available data of consumers' hardware, we demonstrate that research is consistently developed and tested on new high-end devices that do not reflect the state of the consumer-level market.
Talk
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Generative AI
Geometry
Pipeline Tools and Work
Full Conference
Virtual Access
Thursday
DescriptionHow can generative AI seamlessly integrate into professional 3D workflows? This talk explores how vision-language models (VLMs) can automate tedious editing tasks, generate structured 3D scenes, and perform object placement while preserving editability. I’ll discuss key findings, open challenges, and future applications across industries like robotics, game design, and VFX.
Talk
Production & Animation
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Full Conference
Virtual Access
Thursday
DescriptionWe improve Digital Domain's video-driven animation transfer technique by introducing automatic corrections as a post-process optimization. We minimize the difference between our face swap model output (extended for light invariance) and predicted animation parameters in a differentiable pipeline. Our method is now being integrated into Digital Domain's facial capture workflow.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe pursue a comprehensive neural material representation by thoroughly considering the essential aspects of complete appearance. We introduce an int8-quantized model that retains high fidelity while achieving an order-of-magnitude speedup over previous methods, along with a controllable structure-preserving synthesis strategy and accurate displacement effects.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe demonstrate that stereoacuity is remarkably resilient to foveated rendering and remains unaffected with up to 2× stronger foveation than commonly used. To this end, we design a psychovisual experiment and derive a simple perceptual model that determines the amount of foveation that does not affect stereoacuity.
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionGaussian splatting is a rapidly emerging method for the fast and efficient creation of photorealistic 3D visualizations, and it is particularly well suited to real-time applications. Today, a growing number of software solutions support the capture, visualization, editing, and compression of Gaussian splats. However, as different companies adopt varying formats, the risk of ecosystem fragmentation grows.
In this 90-minute Frontier Workshop we will discuss the current technologies, formats, and use cases and investigate the potential for standardization to enable interoperability and sustainable growth.
The workshop will cover:
- A Gaussian Splats 101
- Using Gaussian Splats for Digital Twins
- The state of Gaussian Splats on the Web
- Community engagement via a Panel Discussion
By fostering collaboration among users, tool developers, and engine vendors, this workshop seeks to guide the evolution of Gaussian splat interoperability and help remove pain points, drive adoption, and prevent fragmentation.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIn a world ravaged by an incurable illness, a devoted husband is forced to make an impossible choice as his wife’s life hangs in the balance.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis work proposes a dynamic calibration system for inertial motion capture, which can dynamically remove non-static IMU drift and sensor-body offset during usage, enable user-friendly calibration (without T-pose and IMU heading reset), and ensure long-term robustness.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionRecent methods have been developed to reconstruct 3D hair strand geometry from images. We introduce an inverse hair grooming pipeline to transform these unstructured hair strands into procedural hair grooms controlled by a small set of guide strands and artist-friendly grooming operators, enabling easy editing of hair shape and style.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. The main contributions are three-fold: efficient transparent Gaussian primitives for specular refraction, GaussProbe to encode ambient light and nearby contents, and the IterQuery algorithm to reduce parallax errors in our probe-based framework.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn a dark alley, a scrawny rat has no choice but to fight a pigeon for a small slice of pizza. Without a second thought, they throw themselves into a vertiginous chase from the top to the bottom of the street.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIn a near future dominated by the metaverse, AI agents investigate cybercrime. Violet, an AI detective, is assigned to question Mia, a defiant youth. But upon hearing Violet’s name, Mia reveals a shocking truth. In this sealed room, what revelation awaits? A neo-noir suspense short with a bold visual style.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionFirst flight. No parents. Total panic. A terrified boy just wants to survive takeoff, but the plane—and its deranged passengers—have other plans.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper presents UltraMeshRenderer, a GPU out-of-core method for real-time rendering of 3D scenes with billions of vertices and triangles. It features a balanced hierarchical mesh, coherence-based LOD selection, and parallel in-place GPU memory management, achieving efficient data transfer and memory use with significant improvements over existing out-of-core techniques.
Immersive Pavilion
Gaming & Interactive
New Technologies
Research & Education
Education
Ethics and Society
Haptics
Simulation
Virtual Reality
Full Conference
Experience
Description“Unbalanced” is a multisensory VR simulation that immerses users in the embodied experience of working mothers managing a crying infant, household chores, and job demands. Through interactive stressors and audio-haptic feedback, it fosters empathy and reflection on gendered labor in education, DEI training, and mental health contexts.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWarped-area reparameterization is a powerful technique to compute differential visibility. The key is constructing a velocity field that is continuous in the domain interior and agrees with defined velocities on boundaries. We present a robust and efficient unbiased estimator for differential visibility, using a fixed-step walk-on-spheres and closest silhouette queries.
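For readers unfamiliar with the technique this abstract builds on, here is the classic walk-on-spheres estimator for a 2D Laplace problem. This is a textbook sketch under simplifying assumptions with hypothetical function names, not the paper's differential-visibility estimator: each step jumps to a uniformly random point on the largest circle around the current position that fits inside the domain.

```python
import math
import random

def walk_on_spheres(x, y, dist_to_boundary, boundary_value, eps=1e-3, max_steps=64):
    # Classic walk-on-spheres for a 2D Laplace (harmonic) problem:
    # repeatedly jump to a uniformly random point on the largest circle
    # centred at the current position that fits inside the domain,
    # stopping once within eps of the boundary.
    for _ in range(max_steps):
        r = dist_to_boundary(x, y)
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    # Return the boundary value at the (near-boundary) stopping point.
    return boundary_value(x, y)
```

Averaging many independent walks gives an unbiased (up to the eps stopping tolerance) estimate of the harmonic interior solution; the paper's estimator extends this style of solver with fixed-step walks and closest-silhouette queries.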
Art Gallery
Arts & Design
Full Conference
Experience
DescriptionUnbound Horizons – Wing Series (2018–2025, glass sculpture, programmed light installation) is an interactive installation composed of 12 glass seagull sculptures. The work evokes the fluid, collective motion of birds in flight through shifting patterns of light that animate the sculptures in a continuous, harmonious rhythm. These light movements, designed to resemble natural murmuration phenomena, create a dynamic visual experience that transforms throughout the day and night. Blending traditional glass craftsmanship with subtle interactivity, the installation invites contemplation of movement, light, and the delicate balance between nature and art.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Graphics Systems Architecture
Image Processing
Industry Insight
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionUNC Department of Computer Science SIGGRAPH Reunion Luncheon
By invitation only. To receive an invitation email: er@cs.unc.edu
Hosted by Praneeth Chakravarthula and Henry Fuchs
Hybrid event
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe quantify uncertainty for SVBRDF acquisition from multi-view captures using entropy. The otherwise heavy computation is accelerated in the frequency domain, yielding a practical, efficient method. We apply uncertainty to improve SVBRDF capture by guiding camera placement, inpainting uncertain regions, and sharing information from certain regions on the object.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a new SPH approach to replicate the behavior of droplets and other smaller scale fluid bodies. For this, we develop a new implicit surface tension formulation and implement a Coulomb friction force at the fluid-solid interface. A strong coupling between both forces and pressure is achieved through a unified solving mechanism.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description3D2EP transforms 3D shapes into expressive, editable primitives by extruding 2D profiles along 3D curves. This approach creates compact, interpretable representations that support intuitive editing and flexible redesign. It delivers high fidelity and efficiency, outperforming existing methods across digital design, asset creation, and customization workflows.
Spatial Storytelling
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Games
Haptics
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionJoin us for a 30-minute live demo with real-time commentary, as we delve into the creative and technical journey behind First Encounters, a groundbreaking mixed reality (MR) experience that has captivated both the press and the public.
Course
Production & Animation
Livestreamed
Recorded
Animation
Geometry
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionBased on real production examples, this Universal Scene Description (USD) course will expand upon previously presented best practices for pipeline infrastructure and integration. Presenters will walk through how they are more powerfully leveraging USD, building flexible, context-driven workflows, while balancing optimizations for consumer and author performance.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Experience
DescriptionJoin the discussion with the developers and users of Universal Scene Description (USD), Hydra and OpenSubdiv.
Talk
Production & Animation
Research & Education
Livestreamed
Recorded
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionFor the past three years, we have used VMs in our computer lab disk image to facilitate a hybrid Linux and Windows environment that is flexible, maintainable, and artist-friendly. We illustrate the potential of VM-enabled, OS-agnostic lab environments for enhancing small studio or educational workflows and maximizing resource utilization.
Birds of a Feather
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Full Conference
Experience
DescriptionSDL is a popular open-source library used in games and other interactive software. Its latest version includes a new GPU API that provides a cross-platform modern graphics interface with less of a learning curve than Vulkan. This makes SDL GPU a perfect candidate for teaching students a modern approach to interactive graphics.
Join SDL GPU project lead Evan Hemsley alongside professors Sanjay Madhav (USC), Mike Shah (Yale), and Matt Whiting (USC), as they discuss the design of SDL GPU and present first steps towards integrating the library into interactive graphics curriculum.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a novel ICP framework that jointly optimizes a shared template and instance-wise deformations. Our approach automatically captures common shape features from input shapes, achieving state-of-the-art accuracy and consistency while eliminating the need to carefully select a preset template mesh.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper shows how to express variational time integration for a large class of elastic energies as an optimization problem with a “hidden” convex substructure. Our integrator improves the performance of elastic simulation tasks, while conserving physical invariants up to tolerance/numerical precision.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present analytical formulas for evaluating Green and biharmonic 2D coordinates and their gradients and Hessians, for 2D cages made of polynomial arcs.
We present results of 2D image deformations by direct interaction with the cage and through variational solvers.
We demonstrate the flexibility of our approach.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a new surface reconstruction method for point clouds without normals. The method robustly handles undersampled regions and scales to large input sizes.
Emerging Technologies
New Technologies
Fabrication
Hardware
Modeling
Full Conference
Experience
DescriptionWe present a novel dual-extruder clay printer with a continuously steerable concentric nozzle that produces graded or high-contrast textured surfaces in a single pass before firing. Our design includes a compact implementation featuring a unique plunger system with two concentric reservoirs.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionVariance reduction techniques are widely used to reduce the noise of Monte Carlo integration. However, these techniques are typically designed with the assumption that the integrand is scalar-valued. To address this, we introduce ratio control variates, an estimator that leverages a ratio-based approach instead of the conventional difference-based control variates.
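As background for the contrast the abstract draws, here is a minimal sketch of the conventional difference-based control variate on [0, 1]. It is generic, with hypothetical names; the paper's contribution replaces the subtraction with a ratio-based correction suited to non-scalar integrands.

```python
import random

def difference_control_variate(f, g, g_mean, n=50_000, beta=1.0, seed=0):
    # Conventional difference-based control variate:
    #   E[f] is estimated as the sample mean of f(x) - beta * (g(x) - E[g]).
    # The estimator is unbiased for any beta, and the variance drops
    # whenever the auxiliary function g correlates with f.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += f(x) - beta * (g(x) - g_mean)
    return total / n
```

For example, with f(x) = x² and g(x) = x (whose mean 1/2 is known analytically), the estimator returns a low-variance estimate of the integral of x² over [0, 1], which is 1/3.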
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionDuring World War I, a family is torn apart by the horrors of war.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Industry Insight
Pipeline Tools and Work
Full Conference
Experience
DescriptionHear the latest on the VFX Reference Platform and discuss software versioning challenges and
opportunities with your peers from both studios and software vendors.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose VideoAnydoor, a zero-shot video object insertion framework with high-fidelity detail preservation and precise motion control, where a pixel warper and an image-video mix-training strategy are designed to warp the pixel details according to the trajectories. VideoAnydoor demonstrates significant superiority over existing methods and naturally supports various downstream applications.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Thursday
DescriptionVideoPainter introduces a dual-branch framework for video inpainting with a lightweight context encoder that integrates with pre-trained diffusion transformers. Its ID resampling strategy maintains identity consistency across any-length videos, while VPData and VPBench provide the largest segmentation-mask dataset with captions. The system achieves state-of-the-art performance in video inpainting and editing.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper presents VirCHEW Reality, a face-worn haptic device for virtual food intake in VR. It uses pneumatic actuation to simulate food textures, enhancing the chewing experience. User studies demonstrated its effectiveness in providing distinct kinesthetic feedback and improving virtual eating experiences, with applications in dining, healthcare, and entertainment.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionV3DG achieves real-time rendering of massive 3D Gaussians in large, composed scenes through a novel LOD approach.
Inspired by Nanite, V3DG processes detailed 3D assets into clusters at various granularities offline, and selectively renders 3D Gaussians at runtime—flexibly balancing rendering speed and visual fidelity based on user-defined tolerances.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Image Processing
Lighting
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionRecent imaging advances have granted CGI enthusiasts unprecedented and affordable access to stereophotogrammetric 3D scanning techniques. This technological democratization has enabled dynamic "gateway" engagements (such as our “Virtualizing the Stanley”) that let students explore artistic theory through advanced, yet accessible, CGI-focused educational activities.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce ViSA (Virtual Stunt Actors), an interactive animation system using deep reinforcement learning to generate realistic ballistic stunt actions. It efficiently produces dynamic scenes commonly seen in films and TV dramas, such as traffic accidents and stairway falls. A novel action space design enables scene generation within minutes.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis is a collection of bird sounds reinterpreted into visual sound structures that reflect certain aspects of the subject matter. Each one is meticulously produced in a 3D program called Houdini. The artist hopes to inspire the audience with the beauty of nature and the importance of habitat protection.
Course
Research & Education
Livestreamed
Recorded
Scientific Visualization
Full Conference
Virtual Access
Monday
DescriptionThinking systematically about existing visualization systems provides a good springboard for designing new ones. This course focuses on data and task abstractions and on the design choices for visual encoding and interaction; it will not cover algorithms. It encompasses techniques and data types spanning visual analytics, information visualization, and scientific visualization.
Course
New Technologies
Livestreamed
Recorded
Augmented Reality
Performance
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Thursday
DescriptionA course introducing the challenges of VR graphics and detailing the optimization techniques used in mainstream VR products to tackle those challenges.
Course
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Digital Twins
Display
Games
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionCybersickness remains a persistent challenge in VR/XR, hindering user experience and adoption. This course bridges research and industry, equipping developers, designers, and researchers with science-backed strategies to mitigate cybersickness through optimized locomotion, interaction, and environment design. Engage with case studies and activities to create more comfortable, accessible VR/XR experiences.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
DescriptionVR-Doh is an intuitive VR-based 3D modeling system that lets you sculpt and manipulate soft objects and edit 3D Gaussian Splatting scenes in real time. Combining physics-based simulation with expressive interaction, VR-Doh empowers both novices and experts to create rich, deformable, simulation-ready models with natural hand-based input.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Lighting
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionAn exciting and challenging client brief – an infected horde thawing from frozen stasis to overrun the town of Jackson – combined with a tight timeline for this ambitious work meant our Wētā FX team really had something to sink our teeth into.
VFX Supervisor Nick Epstein and Animation Supervisor Dennis Yoo will take you through how Wētā FX worked with the production VFX team to previs, build, animate, and integrate a thousand-strong horde to the gritty realism of The Last of Us Season 2.
Usually crowd work suggests compromises, but as this talk will show, for this artfully chaotic sequence there were none. Every infected was designed and built to hold up full screen, with full cloth and hair simulation, and a full set of face shapes. Essentially, every crowd character was a hero character, and that presented a lot of challenges.
Taking you onset, Nick will explore how Wētā's Previs was developed with and utilised by the production team to capture some incredibly complex action with confidence that it would translate to a predictable, yet flexible final product.
Extremely variable shooting conditions required innovative solutions, including the development of a robust depth extraction toolkit which enabled varying weather patterns to be inserted into any plate, and the entirely CG horde to easily be intermingled with live action performers.
Flexibility and variation were among the most important design factors in bringing the sequence to life. For this, we required a ‘mix and match’ system that would allow any piece of clothing to be dynamically refitted to any infected, regardless of proportions; crucially, this needed to be represented upfront in animation as well as in renders later.
Dennis will outline the challenging combination of motion capture, keyframe animation, and ragdoll dynamics required to achieve a realistic, but also somewhat inhuman – infected – cadence to horde performances.
This talk will detail the process for building a vast digital assets wardrobe based on client provided scans, and adapting these across the infected horde using a procedural texturing system in lighting, usually utilised earlier in the pipeline by lookdev.
In addition to this, the episode needed extensive cloth and hair simulation at a scale previously not undertaken at Wētā. This was further complicated by the dynamic refitting of garments, as well as the use of ragdoll dynamics, resulting in some (sometimes) comically terrifying situations for our creature team to wrestle through.
We developed a Nuke-based system for ‘weather fixes’ driven by the challenging conditions during the Jackson siege shoot. Finally, we’ll talk about how our environment/DMP team handled full replacements - from the snowfields and mountains surrounding Jackson in different weather conditions to full CG forested snowscapes.
Last but certainly not least, is the return of the iconic Bloater. The new unforgiving environment and intense weather conditions meant that we had to carefully rethink the complex character build, and how to achieve all the menacing details…
VFX Supervisor Nick Epstein and Animation Supervisor Dennis Yoo will take you through how Wētā FX worked with the production VFX team to previs, build, animate, and integrate a thousand-strong horde to the gritty realism of The Last of Us Season 2.
Usually crowd work suggests compromises, but as this talk will show, for this artfully chaotic sequence there were none. Every infected was designed and built to hold up full screen, with full cloth and hair simulation, and a full set of face shapes. Essentially, every crowd character was a hero character, and that presented a lot of challenges.
Taking you onset, Nick will explore how Wētā's Previs was developed with and utilised by the production team to capture some incredibly complex action with confidence that it would translate to a predictable, yet flexible final product.
Extremely variable shooting conditions required innovative solutions, including the development of a robust depth extraction toolkit which enabled varying weather patterns to be inserted into any plate, and the entirely CG horde to easily be intermingled with live action performers.
Flexibility and variation were among the most important design factors in bringing the sequence to life. For this, we required a ‘mix and match’ system that would allow any piece of clothing to be dynamically refitted to any infected, regardless of proportions - and crucially, this needed to be represented up front in animation as well as in renders later.
Dennis will outline the challenging combination of motion capture, keyframe animation, and ragdoll dynamics required to achieve a realistic, but also somewhat inhuman – infected – cadence to horde performances.
This talk will detail the process of building a vast digital wardrobe of assets based on client-provided scans, and adapting these across the infected horde using a procedural texturing system in lighting, usually utilised earlier in the pipeline by lookdev.
Art Paper
Arts & Design
Livestreamed
Recorded
Art
Fabrication
Full Conference
Virtual Access
Experience
Monday
Description: This paper presents a procedural data generation method for Jacquard weaving that uses matrix computations to create textiles with complex shaded patterns and a triple-layer structure. Employing this method, the authors creatively applied noise functions to weave design and produced a textile artwork in collaboration with a traditional craft technique.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Real-Time
Simulation
Full Conference
Experience
Description: Fungi are everywhere - in the air, water, and soil - where they support and mediate between the living and the non-living. We Are Entanglement invites visitors into an immersive, interactive environment in which humans become part of networks of communication through the fungal webs beneath a forest floor. The artwork combines procedural modeling, generative AI, and dynamic simulation of vast numbers of living organisms. It is grounded in the imperative of drawing attention to the importance of nonconscious cognition and interspecies communication, in both biological and machine senses, as a reminder of the essential broader world around us.
Frontiers
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Education
Ethics and Society
Math Foundations and Theory
Scientific Visualization
Full Conference
Experience
Description: Vision Bursts: Panelist delivers a 10-minute “vision burst” — not a research paper, but an idea about Indigenous knowledge and the future of technology. Why: Storytelling sparks imagination and honors Indigenous traditions of direct storytelling.
Show & Tell: Seeds of Innovation — Panelist presents a visual, physical, or conceptual artifact that embodies their idea, making ideas tangible, memorable, and sensory.
Lightning Round: Imagine If... — A dynamic round where the moderator offers “Imagine if…” prompts (e.g., "Imagine if AI listened to the Earth") and each panelist responds with a short response / dream, encouraging bold, fast visions of the future.
Computer Animation Festival
Not Livestreamed
Not Recorded
Description: When a teenage boy visits his Gramps at a seemingly mundane assisted living facility, he comes to find that they have much more in common than he thought. “Wednesdays with Gramps” is a story about connection, communication, and commonality, without saying a word.
Computer Animation Festival
Not Livestreamed
Not Recorded
Description: Nexus Studios’ Fx Goby draws a parallel between romantic love and the passion athletes feel for their sports in this launch film for the Olympic Games coverage. Fx and the Nexus Studios team led an artful choreography of 58 shots and 36 athletes in 60 seconds of breathtaking film.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Monday
Description: We studied viewer preferences for different contrasts and peak luminances in HDR. To do this, we collected a new HDR video dataset, developed tone mappers, and built an HDR haploscope that can reproduce high luminance and contrast. The data were fit to a model that is used for applications such as display design.
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Monday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors and the audience to gather around. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions give attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
Description: We introduce Gaussian-enhanced Surfels (GESs), a bi-scale representation combining opaque surfels and Gaussians for high-fidelity radiance field rendering. GES is entirely sorting-free, enabling high-fidelity, view-consistent rendering at ultra-fast speeds.
Emerging Technologies
Gaming & Interactive
New Technologies
Research & Education
Augmented Reality
Display
Games
Hardware
Simulation
Virtual Reality
Full Conference
Experience
Description: We present wide field-of-view VR and passthrough MR headsets with compact form factors that achieve state-of-the-art 180-degree horizontal and 120-degree vertical fields of view.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
Description: The WiGRAPH Rising Stars in Computer Graphics program is designed to encourage people of underrepresented genders to become research leaders in computer graphics industry and academia. Rising Stars is a two-year program tailored for researchers within two years of entering the job market. The program will pair each participant with a senior mentor and invite them to join two workshops co-located with SIGGRAPH 2024 and SIGGRAPH 2025.
Part I of the workshop will feature panel discussions and lightning talks from both junior and senior rising stars. Participation in this program is by invitation only.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
Description: The WiGRAPH Rising Stars in Computer Graphics program is designed to encourage people of underrepresented genders to become research leaders in computer graphics industry and academia. Rising Stars is a two-year program tailored for researchers within two years of entering the job market. The program will pair each participant with a senior mentor and invite them to join two workshops co-located with SIGGRAPH 2025 and SIGGRAPH 2026.
Part II of the workshop will be a panel for senior cohorts. Participation in this program is by invitation only.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
Description: The WiGRAPH Rising Stars in Computer Graphics program is designed to encourage people of underrepresented genders to become research leaders in computer graphics industry and academia. Rising Stars is a two-year program tailored for researchers within two years of entering the job market. The program will pair each participant with a senior mentor and invite them to join two workshops co-located with SIGGRAPH 2025 and SIGGRAPH 2026.
Part III of the workshop will be a social event for Rising Stars to connect with academic professionals, building relationships and expanding networks. Participation in this program is by invitation only.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
Description: The WiGRAPH Rising Stars in Computer Graphics program is designed to encourage people of underrepresented genders to become research leaders in computer graphics industry and academia. Rising Stars is a two-year program tailored for researchers within two years of entering the job market. The program will pair each participant with a senior mentor and invite them to join two workshops co-located with SIGGRAPH 2025 and SIGGRAPH 2026.
Part IV of the workshop will be a social event for Rising Stars to connect with industry professionals, building relationships and expanding networks. Participation in this program is by invitation only.
Technical Paper
Research & Education
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
Description: Our work is a lightweight static global illumination baking solution that achieves competitive lighting effects while using only approximately 5% of the memory required by mainstream industry techniques. By adopting a vertex-probe structure, we ensure excellent runtime performance, making it suitable for low-end devices.
Course
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Display
Education
Ethics and Society
Games
Generative AI
Modeling
Scientific Visualization
Simulation
Full Conference
Description: Misjudging others leads to misjudging the world around us, with unfortunate consequences. When visualizing others, our design choices can reinforce these misbeliefs, or correct them. This course explores the surprising interplay between visual representation and social psychology, and how equity-forward design promotes clear, constructive visualizations of people and social outcomes.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Description: The day the radiation disappears, Simon rushes to the heart of the zone, taking his colleague Agathe with him, in the hope of rediscovering a lost past.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
Description: Join Women of SIGGRAPH Conversations (WOSC) for our annual food, mingling, and panel event, around the theme of "Resilience".
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: We introduce xADA, a generative model for creating expressive, realistic animation of the face, tongue, and head directly from speech audio.
The animation maps directly onto MetaHuman-compatible rig controls, enabling integration into industry-standard content creation pipelines.
xADA generalizes across languages and voice styles, and can animate non-verbal sounds.
Frontiers
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Augmented Reality
Computer Vision
Digital Twins
Industry Insight
Scientific Visualization
Virtual Reality
Full Conference
Experience
Description: This SIGGRAPH Frontiers workshop brings together leading minds in immersive technology and clinical practice to explore what it truly takes to translate XR from lab demos to life-saving tools. Through concise presentations, clinical case studies, and a live, collaborative design session, attendees will engage directly with surgeons and physicians to uncover real-world needs, constraints, and opportunities. Rather than pitching solutions, this session fosters dialogue—inviting developers, researchers, and medical experts to co-design the future of XR in medicine. If you’re interested in meaningful impact, this is where innovative graphics meet practical care.
Educator's Forum
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Art
Digital Twins
Real-Time
Full Conference
Virtual Access
Experience
Wednesday
Description: XR Performance is a cross-disciplinary course exploring how extended reality (XR) reshapes storytelling, audience interaction, and performance. Through hands-on projects in motion capture, virtual production, and immersive sound design, students develop technical fluency while critically examining XR’s impact on media, expanding the possibilities of digital and physical space.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Augmented Reality
Education
Games
Performance
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
Description: We describe the XRLive Project, a cross-discipline, experiential learning opportunity, built upon the Vertically Integrated Project (VIP) approach, that focuses on the production of live musical, theatrical, and dance performances using advanced technologies such as VR/AR/XR and motion capture.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Pipeline Tools and Work
Virtual Reality
Full Conference
Virtual Access
Tuesday
Description: Have you ever wondered what it would be like to fly? To soar among the clouds on an adventure with Peter Pan?
That was the question Walt Disney Imagineering and Walt Disney Animation Studios set out to answer with Peter Pan’s Never Land Adventure, the new attraction which opened in June of 2024 in Tokyo DisneySea’s Fantasy Springs. The development of this major new ride-through adventure took over seven years, with hundreds of artists, technicians, and software developers partnering to get it off the ground.
In this session, our panelists will discuss the collaborative efforts between Walt Disney Imagineering and Walt Disney Animation Studios as they crafted the visually immersive, stereoscopic experience. We’ll go into detail about the visual and story development process, the creation of 3D assets based on the original hand-drawn theatrical film, and the technical innovations created throughout the project. This undertaking was a unique opportunity where Disney Animation artists got to take part in the magic that Walt Disney Imagineering creates every day.
Join us as we discuss the “faith, trust and pixie dust” that ensured a trip to Never Land became a reality.
Sessions
Production Session
Full Conference
Virtual Access
Wednesday
Art Paper
Arts & Design
Full Conference
Virtual Access
Experience
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Talk
Full Conference
Virtual Access
Thursday
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Appy Hour
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Full Conference
Experience
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Talk
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Talk
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Games
Modeling
Rendering
Full Conference
Experience
Educator's Forum
Full Conference
Virtual Access
Experience
Wednesday
Art Paper
Arts & Design
Full Conference
Virtual Access
Experience
Monday
Talk
Full Conference
Virtual Access
Tuesday
Talk
Full Conference
Virtual Access
Thursday
Art Paper
Arts & Design
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Computer Animation Festival
Not Livestreamed
Not Recorded
Computer Animation Festival
Not Livestreamed
Not Recorded
Emerging Technologies
New Technologies
Full Conference
Experience
Emerging Technologies
New Technologies
Full Conference
Experience
Emerging Technologies
New Technologies
Full Conference
Experience
Emerging Technologies
New Technologies
Full Conference
Experience
Talk
Full Conference
Virtual Access
Wednesday
Talk
Full Conference
Virtual Access
Sunday
Exhibition
Full Conference
Experience
Exhibits Only
Exhibition
Full Conference
Experience
Exhibits Only
Exhibition
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Talk
Full Conference
Virtual Access
Tuesday
Educator's Forum
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Talk
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Labs
Full Conference
Experience
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Immersive Pavilion
New Technologies
Full Conference
Experience
Immersive Pavilion
New Technologies
Full Conference
Experience
Immersive Pavilion
New Technologies
Full Conference
Experience
Immersive Pavilion
New Technologies
Full Conference
Experience
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Job Fair
Full Conference
Experience
Exhibits Only
Job Fair
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Educator's Forum
Full Conference
Virtual Access
Experience
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Talk
Production & Animation
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Talk
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Talk
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Talk
Full Conference
Virtual Access
Wednesday
Technical Paper
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Talk
Full Conference
Virtual Access
Sunday
Real-Time Live!
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Talk
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Talk
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Talk
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Technical Papers Closing Session
5:15pm - 5:30pm PDT, Thursday, 14 August 2025, West Building, Rooms 211-214
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Art Paper
Arts & Design
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Full Conference
Virtual Access
Monday
Birds of a Feather
Research & Education
Education
Full Conference
Experience